File Storage NAS (NAS) supports dynamic provisioning, where the Container Storage Interface (CSI) plug-in automatically creates persistent volumes (PVs) based on persistent volume claims (PVCs) and StorageClasses. Use dynamic provisioning for workloads that need shared, persistent file storage across multiple pods — such as big data analysis, log aggregation, and web application serving.
Prerequisites
Before you begin, make sure that:
The CSI plug-in is installed in the cluster. To update csi-plugin and csi-provisioner, see Update csi-plugin and csi-provisioner.
NAS is activated. If this is your first time using NAS, activate it on the NAS product page.
If your cluster uses FlexVolume, upgrade to CSI first — FlexVolume is deprecated. See Upgrade from FlexVolume to CSI. To check which storage component your cluster uses, go to Operations > Add-ons and click the Storage tab.
Limitations
NAS file systems using the Server Message Block (SMB) protocol cannot be mounted.
All pods sharing a NAS file system must be in the same virtual private cloud (VPC). Cross-VPC mounting is not supported.
NAS file systems can be mounted across zones within the same VPC.
Choose a mount mode
| Mode | Use when | Pre-created NAS required | PV maps to |
| --- | --- | --- | --- |
| Subpath | Multiple apps or pods share the same NAS file system, or different pods need different subdirectories | Yes | A subdirectory of the NAS file system |
| Sharepath | Pods in different namespaces must share the same NAS path | Yes | The same NAS directory (no new subdirectory created) |
| Filesystem | Your app needs to dynamically create and delete NAS file systems and mount targets | No (CSI auto-creates) | A dedicated NAS file system |

Filesystem mode supports kubectl only. The ACK console is not supported for this mode.
Step 1: Create a NAS file system and a mount target
Skip this step if you are using filesystem mode. The CSI plug-in creates the NAS file system and mount target automatically.
NAS file system types vary by region and zone. Select the type and zone that match your cluster region, VPC, and vSwitch.
For specifications, performance, billing, and region availability, see General-purpose NAS file systems and Extreme NAS file systems.
For mount connectivity limits and protocol support, see Limits.
Log on to the NAS console.
On the File System List page, click Create File System, then select Create General-purpose NAS File System or Create Extreme NAS File System.
Configure the file system parameters and click Buy Now. The following table describes the key parameters. For the full parameter list, see Create a file system.
| Parameter | Description |
| --- | --- |
| Region | Select the region where your cluster is located. |
| Zone | Select a zone. NAS can be mounted across zones within the same VPC. Select a single zone for best performance. |
| Protocol type | Select NFS. SMB is not supported for Kubernetes mounting. |
| VPC and vSwitch | (General-purpose NAS only) Select the VPC and vSwitch used by pods in your ACK cluster. |
Step 2: Mount a dynamically provisioned NAS volume
Subpath mode
Use subpath mode when multiple applications or pods need to share a NAS file system, or when different pods need access to different subdirectories of the same file system.
Each PVC creates a new subdirectory under the path specified in the StorageClass. The PV corresponds to that subdirectory.
Use kubectl
1. Create a StorageClass
Create `alicloud-nas-subpath.yaml` with the following content, and update the parameters for your environment:

```yaml
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: alicloud-nas-subpath
mountOptions:
- nolock,tcp,noresvport
- vers=3
parameters:
  volumeAs: subpath
  server: "0cd8b4a576-g****.cn-hangzhou.nas.aliyuncs.com:/k8s/"
provisioner: nasplugin.csi.alibabacloud.com
reclaimPolicy: Retain
```

| Parameter | Description | Default |
| --- | --- | --- |
| allowVolumeExpansion | (General-purpose NAS only) Set to `true` to enable NAS directory quotas on dynamically provisioned PVs, allowing you to expand the volume by modifying the PVC. The quota takes effect asynchronously and may be briefly exceeded under heavy writes. For details, see Manage directory quotas. | — |
| mountOptions | NFS mount options, including the NFS version. | — |
| volumeAs | The mount mode. Set to `subpath`. | — |
| server | The mount target domain and base path. Replace with your actual mount target. To find the domain name, see Manage mount targets. | `/` (root) |
| provisioner | The CSI driver. Must be `nasplugin.csi.alibabacloud.com`. | — |
| reclaimPolicy | Controls what happens to PV data when a PVC is deleted. | Delete |
| archiveOnDelete | (When `reclaimPolicy: Delete`) Controls whether PV data is deleted or archived. `true`: renames the subdirectory to `archived-{pvName}.{timestamp}`. `false`: permanently deletes the data. Important: Do not set this to `false` during high-traffic periods. To delete backend data, you must set `archiveOnDelete: false` using kubectl; the console cannot configure this parameter. | true |

Create the StorageClass:

```shell
kubectl create -f alicloud-nas-subpath.yaml
```
2. Create a PVC
Create `pvc.yaml` with the following content:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nas-csi-pvc
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: alicloud-nas-subpath
  resources:
    requests:
      storage: 20Gi
```

| Parameter | Description | Default |
| --- | --- | --- |
| accessModes | Volume access mode. | ReadWriteMany |
| storageClassName | The StorageClass to use. | — |
| storage | The requested capacity. This does not limit actual usage unless `allowVolumeExpansion: true` is set on the StorageClass and the NAS file system is general-purpose. The quota is measured in GiB and rounded up to the nearest integer. For volume expansion, see Expand a dynamically provisioned NAS volume. | — |

Create the PVC:

```shell
kubectl create -f pvc.yaml
```
3. Deploy applications and verify
This example deploys two nginx applications that share the same NAS subdirectory. Both applications reference the same PVC (`nas-csi-pvc`).

Deploy both applications:

```shell
kubectl create -f nginx-1.yaml -f nginx-2.yaml
```

Verify the pods are running:

```shell
kubectl get pod
```

Expected output:

```
NAME                                READY   STATUS    RESTARTS   AGE
deployment-nas-1-5b5cdb85f6-a****   1/1     Running   0          32s
deployment-nas-2-c5bb4746c-b****    1/1     Running   0          32s
```

Both pods mount the same NAS subdirectory at `/data`. The mount target path follows the pattern `<mount-target>:/k8s/<pv-name>/`:

- `/k8s/`: the base path from the StorageClass
- `nas-79438493-f3e0-11e9-bbe5-00163e09****`: the automatically created PV name

To mount different subdirectories to different pods, create a separate PVC for each pod.
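The contents of `nginx-1.yaml` are not shown in this topic. A minimal sketch of such a Deployment, assuming the `deployment-nas-1` name and the nginx image used in the console examples (`nginx-2.yaml` would be identical apart from the Deployment name):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-nas-1
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: anolis-registry.cn-zhangjiakou.cr.aliyuncs.com/openanolis/nginx:1.14.1-8.6
        volumeMounts:
        - name: nas-pvc
          mountPath: /data   # the shared NAS subdirectory inside the container
      volumes:
      - name: nas-pvc
        persistentVolumeClaim:
          claimName: nas-csi-pvc   # both Deployments reference the same PVC
```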
Use the ACK console
1. Create a StorageClass
Log on to the ACK console. In the left-side navigation pane, click Clusters.
Click the cluster name, then choose Volumes > StorageClasses in the left-side pane.
On the StorageClasses page, click Create.
Configure the parameters and click Create.
| Parameter | Description | Example |
| --- | --- | --- |
| Name | The StorageClass name. Follow the format requirements displayed in the console. | alicloud-nas-subpath |
| PV type | Select NAS. | NAS |
| Select mount target | The mount target domain and path. If no mount target exists, create a NAS file system first. See Step 1. To find the domain name, see Manage mount targets. | 0cd8b4a576-g****.cn-hangzhou.nas.aliyuncs.com:/k8s/ |
| Volume mode | Select Subdirectory for subpath mode. Each PVC creates a new subdirectory under the base path. Note: Subdirectory mode requires CSI plug-in version 1.31.4 or later. Earlier versions use Shared Directory (sharepath) mode. | Subdirectory |
| Mount path | The base path in the NAS file system. If the subdirectory does not exist, it is created automatically. Leave blank to use the root directory. For Extreme NAS, set this to a subdirectory of /share, such as /share/data. | / |
| Reclaim policy | Controls what happens to data when a PVC is deleted. Use Retain for high data-security requirements. Important: The archiveOnDelete parameter cannot be configured through the console. To delete backend data, use kubectl. In ACK Serverless clusters, the Delete policy does not delete NAS directories because CSI-Provisioner lacks the required Linux privileges. | Retain |
| Mount options | NFS mount options. Use NFS v3 (recommended). Extreme NAS supports only NFS v3. See NFS. | (default) |
2. Create a PVC
In the left-side pane, choose Volumes > Persistent Volume Claims, then click Create.
Configure the parameters and click Create.
| Parameter | Description | Example |
| --- | --- | --- |
| PVC type | Select NAS. | NAS |
| Name | A unique name for the PVC within the cluster. | pvc-nas |
| Allocation mode | Select Use StorageClass. | Use StorageClass |
| Existing storage class | Select the StorageClass created in the previous step. | alicloud-nas-subpath |
| Capacity | The requested storage capacity. Does not limit actual usage unless directory quotas are enabled. See Expand a dynamically provisioned NAS volume. | 20Gi |
| Access mode | The volume access mode. | ReadWriteMany |
3. Deploy an application
In the left-side pane, choose Workloads > Deployments, then click Create from Image.
Configure the key parameters. Use defaults for all other settings. For full configuration details, see Create a stateless application by using a Deployment.
| Section | Parameter | Description | Example |
| --- | --- | --- | --- |
| Basic information | Name | A custom name for the Deployment. | deployment-nas-1 |
| Basic information | Replicas | Number of pod replicas. | 2 |
| Container | Image name | The container image address. | anolis-registry.cn-zhangjiakou.cr.aliyuncs.com/openanolis/nginx:1.14.1-8.6 |
| Container | Required resources | vCPU and memory. | 0.25 vCores, 512 MiB |
| Volume | Add PVC | Select Mount Source (the PVC you created) and Container Path (the mount path in the container). | Mount Source: pvc-nas; Container Path: /data |

On the Deployments page, click the application name. On the Pods tab, verify the pod is in the Running state.
Sharepath mode
Use sharepath mode when pods in different namespaces need to access the same NAS path. All PVs created from the same StorageClass map to the same NAS directory — no new subdirectory is created per PVC.
The reclaim policy must be Retain for sharepath mode.
Use kubectl
1. Create a StorageClass
Create `alicloud-nas-sharepath.yaml`:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: alicloud-nas-sharepath
mountOptions:
- nolock,tcp,noresvport
- vers=3
parameters:
  volumeAs: sharepath
  server: "0cd8b4a576-g****.cn-hangzhou.nas.aliyuncs.com:/data"
provisioner: nasplugin.csi.alibabacloud.com
reclaimPolicy: Retain
```

| Parameter | Description |
| --- | --- |
| mountOptions | NFS mount options. Use NFS v3 (recommended). Extreme NAS supports only NFS v3. See NFS. |
| volumeAs | Set to `sharepath`. |
| server | The mount target and path. All PVs share this path. Replace with your actual mount target. See Manage mount targets. |
| provisioner | Must be `nasplugin.csi.alibabacloud.com`. |
| reclaimPolicy | Must be `Retain` for sharepath mode. |

Create the StorageClass:

```shell
kubectl create -f alicloud-nas-sharepath.yaml
```
2. Create PVCs in different namespaces
Create the namespaces:

```shell
kubectl create ns ns1
kubectl create ns ns2
```

Create `pvc.yaml` with PVCs in both namespaces. In sharepath mode, the `storage` parameter does not take effect.

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nas-csi-pvc
  namespace: ns1
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: alicloud-nas-sharepath
  resources:
    requests:
      storage: 20Gi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nas-csi-pvc
  namespace: ns2
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: alicloud-nas-sharepath
  resources:
    requests:
      storage: 20Gi
```

Create the PVCs:

```shell
kubectl create -f pvc.yaml
```
3. Deploy applications and verify
Deploy the applications:

```shell
kubectl create -f nginx.yaml
```

Verify the pods are running:

```shell
kubectl get pod -A -l app=nginx
```

Expected output:

```
NAMESPACE   NAME                     READY   STATUS    RESTARTS   AGE
ns1         nginx-5b5cdb85f6-a****   1/1     Running   0          32s
ns2         nginx-c5bb4746c-b****    1/1     Running   0          32s
```

Both pods mount `0cd8b4a576-g****.cn-hangzhou.nas.aliyuncs.com:/data` at `/data`, even though they are in different namespaces.
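The contents of `nginx.yaml` are not shown in this topic. A minimal sketch, assuming one nginx Deployment per namespace that mounts the shared PVC (names and image follow the examples used elsewhere in this topic):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: ns1
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: anolis-registry.cn-zhangjiakou.cr.aliyuncs.com/openanolis/nginx:1.14.1-8.6
        volumeMounts:
        - name: nas-pvc
          mountPath: /data   # all pods see the same shared NAS path
      volumes:
      - name: nas-pvc
        persistentVolumeClaim:
          claimName: nas-csi-pvc   # the PVC created in this namespace
---
# Identical Deployment in ns2; both mount the same NAS directory.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: ns2
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: anolis-registry.cn-zhangjiakou.cr.aliyuncs.com/openanolis/nginx:1.14.1-8.6
        volumeMounts:
        - name: nas-pvc
          mountPath: /data
      volumes:
      - name: nas-pvc
        persistentVolumeClaim:
          claimName: nas-csi-pvc
```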
Use the ACK console
1. Create a StorageClass
Log on to the ACK console. Click the cluster name, then choose Volumes > StorageClasses.
Click Create and configure the parameters.
| Parameter | Description | Example |
| --- | --- | --- |
| Name | The StorageClass name. | alicloud-nas-sharepath |
| PV type | Select NAS. | NAS |
| Select mount target | The mount target domain and path. See Manage mount targets. | 0cd8b4a576-g****.cn-hangzhou.nas.aliyuncs.com:/data |
| Volume mode | Select Shared Directory. All PVs share the same NAS path. | Shared Directory |
| Mount path | The path within the NAS file system. Leave blank to use the root. For Extreme NAS, use a subdirectory of /share. | / |
| Reclaim policy | Must be Retain for sharepath mode. | Retain |
| Mount options | NFS mount options. Use NFS v3. | (default) |
Click Create.
2. Create PVCs in different namespaces
Create the ns1 and ns2 namespaces. See Manage namespaces and resource quotas.
In the left-side pane, choose Volumes > Persistent Volume Claims. Select ns1 in the Namespace section and click Create.
Configure the parameters and click Create.
| Parameter | Description | Example |
| --- | --- | --- |
| PVC type | Select NAS. | NAS |
| Name | A unique name within the cluster. | pvc-nas |
| Allocation mode | Select Use StorageClass. | Use StorageClass |
| Existing storage class | Select the alicloud-nas-sharepath StorageClass. | alicloud-nas-sharepath |
| Capacity | The requested capacity (does not take effect in sharepath mode). | 20Gi |
| Access mode | The volume access mode. | ReadWriteMany |

Repeat the steps to create pvc-nas in the ns2 namespace.
3. Deploy an application
In the left-side pane, choose Workloads > Deployments. Select ns1 in the Namespace section and click Create from Image.
Configure the parameters.
| Section | Parameter | Description | Example |
| --- | --- | --- | --- |
| Basic information | Name | Deployment name. | nginx |
| Basic information | Replicas | Number of replicas. | 2 |
| Container | Image name | Container image. | anolis-registry.cn-zhangjiakou.cr.aliyuncs.com/openanolis/nginx:1.14.1-8.6 |
| Container | Required resources | vCPU and memory. | 0.25 vCores, 512 MiB |
| Volume | Add PVC | Mount Source: the PVC in ns1. Container Path: the mount path. | Mount Source: pvc-nas; Container Path: /data |

Repeat for the ns2 namespace.
On the Deployments page, click the application name. On the Pods tab, verify pods are in the Running state.
Filesystem mode
Use filesystem mode when your application needs to create and delete NAS file systems and mount targets dynamically. When a PVC is created, CSI automatically creates a NAS file system and mount target. When the PVC is deleted, both are deleted if the reclaim policy is set to Delete and deleteVolume is true.
By default, deleting a PV in filesystem mode retains the NAS file system and mount target. To delete them automatically, set reclaimPolicy: Delete and deleteVolume: "true" in the StorageClass.
Only kubectl is supported for filesystem mode.
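As an illustrative sketch only, a filesystem-mode StorageClass might look like the following. The parameter names and masked IDs here are assumptions based on common CSI NAS configurations; replace the VPC and vSwitch IDs with your own and check the CSI plug-in documentation for the authoritative parameter list:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: alicloud-nas-fs
mountOptions:
- nolock,tcp,noresvport
- vers=3
parameters:
  volumeAs: filesystem           # CSI creates a dedicated NAS file system per PVC
  vpcId: "vpc-2zesg6zgjg1iqrjpx****"      # VPC of your cluster (placeholder)
  vSwitchId: "vsw-2zejcvwmrhrkjkc6u****"  # vSwitch for the mount target (placeholder)
  deleteVolume: "true"           # with reclaimPolicy Delete, remove the file system on PVC deletion
provisioner: nasplugin.csi.alibabacloud.com
reclaimPolicy: Delete
```

With `deleteVolume: "false"` (or `reclaimPolicy: Retain`), deleting the PVC retains the NAS file system and mount target, which is the default behavior described above.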
Step 3: Verify NAS volume behavior
After deploying your application, verify that the NAS volume correctly persists data across pod restarts and shares data across pods.
Verify data persistence
List the running pods:

```shell
kubectl get pod
```

Expected output:

```
NAME                                READY   STATUS    RESTARTS   AGE
deployment-nas-1-5b5cdb85f6-a****   1/1     Running   0          32s
deployment-nas-2-c5bb4746c-b****    1/1     Running   0          32s
```

Confirm the `/data` path is empty in the pod:

```shell
kubectl exec deployment-nas-1-5b5cdb85f6-a**** -- ls /data
```

No output confirms the directory is empty.

Create a test file:

```shell
kubectl exec deployment-nas-1-5b5cdb85f6-a**** -- touch /data/nas
```

Delete the pod to trigger a restart:

```shell
kubectl delete pod deployment-nas-1-5b5cdb85f6-a****
```

In another terminal, watch the pod restart:

```shell
kubectl get pod -w -l app=nginx
```

After the pod restarts, verify the file is still there:

```shell
kubectl get pod
```

Expected output:

```
NAME                                READY   STATUS    RESTARTS   AGE
deployment-nas-1-5b5cdm2g5-c****    1/1     Running   0          32s
deployment-nas-2-c5bb4746c-b****    1/1     Running   0          32s
```

```shell
kubectl exec deployment-nas-1-5b5cdm2g5-c**** -- ls /data
```

Expected output:

```
nas
```

The `nas` file persists after the pod restart, confirming data persistence.
Verify data sharing
Check that both pods see the same empty `/data` directory:

```shell
kubectl exec deployment-nas-1-5b5cdb85f6-a**** -- ls /data
kubectl exec deployment-nas-2-c5bb4746c-b**** -- ls /data
```

No output from either command confirms both directories are empty.

Create a file in one pod:

```shell
kubectl exec deployment-nas-1-5b5cdb85f6-a**** -- touch /data/nas
```

Verify both pods see the file:

```shell
kubectl exec deployment-nas-1-5b5cdb85f6-a**** -- ls /data
```

Expected output:

```
nas
```

```shell
kubectl exec deployment-nas-2-c5bb4746c-b**** -- ls /data
```

Expected output:

```
nas
```

The file created in one pod is immediately visible in the other, confirming shared storage.
FAQ
How do I enable user or group isolation in the NAS file system?
Set securityContext on your pod to run all containers as a specific user and group. The following example uses the nobody user (UID and GID: 65534):
```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nas-sts
spec:
  selector:
    matchLabels:
      app: busybox
  serviceName: "busybox"
  replicas: 1
  template:
    metadata:
      labels:
        app: busybox
    spec:
      securityContext:
        fsGroup: 65534                      # Directories are created as the nobody user (UID/GID 65534)
        fsGroupChangePolicy: "OnRootMismatch"  # Change permissions only if the root directory ownership does not match
      containers:
      - name: busybox
        image: busybox
        command:
        - sleep
        - "3600"
        securityContext:
          runAsUser: 65534                  # All processes run as nobody (UID 65534)
          runAsGroup: 65534                 # All processes run as nobody (GID 65534)
          allowPrivilegeEscalation: false
        volumeMounts:
        - name: nas-pvc
          mountPath: /data
  volumeClaimTemplates:
  - metadata:
      name: nas-pvc
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "alicloud-nas-subpath"
      resources:
        requests:
          storage: 100Gi
```

Verify the user context in the running container:

```shell
kubectl exec nas-sts-0 -- top
```

Expected output:
```
Mem: 11538180K used, 52037796K free, 5052K shrd, 253696K buff, 8865272K cached
CPU:  0.1% usr  0.1% sys  0.0% nic 99.7% idle  0.0% io  0.0% irq  0.0% sirq
Load average: 0.76 0.60 0.58 1/1458 54
  PID  PPID USER     STAT   VSZ %VSZ CPU %CPU COMMAND
   49     0 nobody   R     1328  0.0   9  0.0 top
    1     0 nobody   S     1316  0.0  10  0.0 sleep 3600
```

Verify files and directories are created as the nobody user:

```shell
kubectl exec nas-sts-0 -- sh -c "touch /data/test; mkdir /data/test-dir; ls -arlth /data/"
```

Expected output:
```
total 5K
drwxr-xr-x    1 root   root     4.0K Aug 30 10:14 ..
drwxr-sr-x    2 nobody nobody   4.0K Aug 30 10:14 test-dir
-rw-r--r--    1 nobody nobody      0 Aug 30 10:14 test
drwxrwsrwx    3 root   nobody   4.0K Aug 30 10:14 .
```

Both test and test-dir are owned by the nobody user, confirming user isolation is active.
What's next
References
If you encounter issues when you mount or use NAS volumes, see the following documents for troubleshooting:
You can use CNFS to independently manage NAS file systems, which improves performance and QoS control. For more information, see Manage NAS file systems by using CNFS.