With NAS dynamically provisioned volumes, the system automatically creates and allocates storage space for your workloads on demand. You do not need to pre-create persistent volumes (PVs). This approach meets both data persistence and shared access requirements, such as concurrent read and write access by multiple pods, and simplifies storage management for web applications, log retention, and similar use cases.
How it works
When you create a persistent volume claim (PVC), the system uses the StorageClass specified in the PVC to automatically create a new PV and its corresponding storage volume. This model is more flexible and supports automatic volume expansion.
Mount modes
You can set the mount mode using the volumeAs parameter in the StorageClass. This parameter defines how each PV maps to a NAS file system.
| Mount mode | Description | Use case |
| --- | --- | --- |
| subpath | Subdirectory mode. Each PV maps to a dedicated subdirectory within one NAS file system. This helps isolate data. | Multiple applications or pods need isolated directories on the same NAS file system. |
| sharepath | Shared directory mode. All PVs map to the same NAS directory defined in the StorageClass. All PVCs that reference this StorageClass point to the same shared directory on the NAS file system. | Multiple pods across different namespaces need to mount the same NAS subdirectory. |
| filesystem (not recommended) | File system mode. Each PV maps to a newly created, independent NAS file system instance. | You need strict isolation for performance or security. The system dynamically creates and deletes independent NAS file systems and mount targets for your application. This option incurs higher costs. |
General workflow
The main steps to mount a NAS dynamically provisioned volume are as follows.
Preparations
The csi-plugin and csi-provisioner components are installed.
The CSI components are installed by default. Ensure that you have not manually uninstalled them. You can check the installation status on the page. Upgrade the CSI components to the latest version.
If you use the subpath or sharepath mode, you must have a NAS file system that meets the following conditions. If no existing file system qualifies, create a new one. For more information, see Create a file system.
NAS has limits on mount connectivity, the number of file systems, and protocol types.
Protocol type: NFS only.
Virtual private cloud (VPC): The NAS file system must be in the same VPC as your cluster. NAS supports cross-zone mounting but does not support cross-VPC mounting.
Mount target: Add a mount target that resides in the same VPC as your cluster and has an active status. For more information, see Manage mount targets. Record the mount target address.
(Optional) Encryption type: To encrypt data on the storage volume, you can configure the encryption type when you create the NAS file system.
Notes
Do not delete the mount target: While the volume is in use, do not delete its corresponding mount target in the NAS console. Doing so can cause node I/O exceptions.
Concurrent writes: NAS is a shared storage service. When multiple pods mount the same volume, the application must handle potential data consistency issues caused by concurrent writes.
For more information about the limits on concurrent writes in NAS, see How do I prevent exceptions that may occur when multiple processes or clients write to the same log file at the same time? and File read and write issues.
Mount performance: If you configure `securityContext.fsGroup` in your application, kubelet recursively runs the `chmod` or `chown` command after mounting, which can significantly increase pod startup time. For more information about optimization, see NAS Persistent Volume FAQ.
Method 1: Mount using subpath
In this mode, each PVC automatically creates a dedicated subdirectory in the NAS file system to use as its PV.
The version of your CSI component must be 1.31.4 or later. For more information about how to upgrade the component, see Upgrade CSI components.
You have created a NAS file system and obtained the mount target address.
1. Create a StorageClass
A StorageClass acts as a provisioning template for dynamic volumes. It defines where storage resources come from and how they behave.
kubectl
Create a file named `alicloud-nas-subpath.yaml` that has the following content.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  # StorageClass name. Must be unique in the cluster.
  name: alicloud-nas-subpath
mountOptions:
  - nolock,tcp,noresvport
  - vers=3
parameters:
  # Set to subpath.
  volumeAs: subpath
  # Format of the server field: <nas-server-address>:/<path>
  server: "0cd8b4a576-g****.cn-hangzhou.nas.aliyuncs.com:/k8s"
  archiveOnDelete: "true"
provisioner: nasplugin.csi.alibabacloud.com
reclaimPolicy: Retain
allowVolumeExpansion: true
```

| Parameter | Description |
| --- | --- |
| `mountOptions` | Mount options for NAS, including the NFS protocol version. By default, NAS mounts using NFS v3. To specify another version, use `vers=4.0`. For supported NFS versions per NAS type, see NFS protocol. |
| `parameters.volumeAs` | Mount mode. Set to `subpath`. |
| `parameters.server` | The NAS mount target address and the subdirectory path to mount. Format: `<nas-server-address>:/<path>`. `<nas-server-address>`: The mount target address of the NAS file system. See Manage mount targets. `:/<path>`: The NAS subdirectory to mount. If unset or if the subdirectory does not exist, the root directory is mounted by default. General-purpose NAS: The root directory is `/`. Extreme NAS: The root directory is `/share`. When mounting a subdirectory, the path must start with `/share` (for example, `/share/data`). |
| `parameters.archiveOnDelete` | Determines whether files and directories in the backend storage are permanently deleted when you delete a PVC. Applies only when `reclaimPolicy` is `Delete`. Because NAS is a shared storage service, a confirmation prompt is provided for this option. `true` (default): Files and directories are not deleted. Instead, they are archived and renamed in the format `archived-{pvName}.{timestamp}`. `false`: The corresponding directory and data in the backend are permanently deleted. This deletes only the NAS subdirectory and its contents, not the NAS file system itself. To delete a NAS file system, see Delete a file system. In scenarios with frequent creation and deletion of PVs, setting this to `false` may block the CSI controller task queue and prevent new PVs from being created. See CSI controller task queue is full and new PVs cannot be created when using NAS dynamically provisioned volumes. |
| `provisioner` | The type of driver. When you use the Alibaba Cloud NAS CSI component, this parameter is fixed at `nasplugin.csi.alibabacloud.com`. |
| `reclaimPolicy` | Recycling policy for the PV. `Delete` (default): When you delete the PVC, the system handles backend storage data according to the `archiveOnDelete` setting. `Retain`: When you delete the PVC, the PV and NAS files remain intact. You must delete them manually. Use this for high-security scenarios to prevent accidental data loss. |
| `allowVolumeExpansion` | Supported only for General-purpose NAS. Allows online expansion of PVs created by this StorageClass by modifying the PVC capacity. This StorageClass uses NAS directory quotas to manage and limit PV capacity precisely. To expand online, edit the `spec.resources.requests.storage` field in the PVC. See Set directory quotas for NAS dynamically provisioned volumes. Applying NAS directory quotas is asynchronous. Immediately after creating or expanding a PV, high-speed, bulk writes may exceed the quota before it fully takes effect. See Limits for more details. |
Create the StorageClass.
```shell
kubectl create -f alicloud-nas-subpath.yaml
```
Console
Log on to the ACK console. In the left navigation pane, click Clusters.
On the Clusters page, click the name of the target cluster. In the navigation pane on the left, choose .
Click Create. Enter a unique name for the StorageClass in the cluster and set Volume Type to NAS. Complete the creation of the StorageClass as prompted.
The following table describes the main configuration items.
| Configuration item | Description |
| --- | --- |
| Select mount target | The mount target address of the NAS file system. |
| Volume mode | Volume access mode. In this example, select Subdirectory (subpath). The system automatically creates a subdirectory under the mount path. Data is stored under `<NAS mount target>:<mount path>/<pv-name>/`. |
| Mount path | The subdirectory of the NAS file system to mount. If you do not set this parameter, the root directory is mounted by default. If the directory does not exist in the NAS file system, it is automatically created and mounted. General-purpose NAS file system: The root directory is `/`. Extreme NAS file system: The root directory is `/share`. When you mount a subdirectory, the path must start with `/share` (for example, `/share/data`). |
| Recycling policy | Recycling policy for the PV. `Delete` (default): When you delete the PVC, the system handles backend storage data according to the `archiveOnDelete` setting. `Retain`: When you delete the PVC, the PV and NAS files remain intact. You must delete them manually. Use this for high-security scenarios to prevent accidental data loss. |
| Mount options | Mount options for NAS, including the NFS protocol version. By default, NAS mounts using NFS v3. To specify another version, use `vers=4.0`. For supported NFS versions per NAS type, see NFS protocol. |

After the StorageClass is created, you can view it in the Storage Classes list.
2. Create a PVC
A PVC represents your application’s request for storage. It triggers the dynamic provisioning mechanism of the StorageClass to automatically create and bind a matching PV.
kubectl
Create a file named `nas-pvc.yaml` that has the following content.

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nas-csi-pvc
spec:
  accessModes:
    - ReadWriteMany
  # Specify the StorageClass to bind.
  storageClassName: alicloud-nas-subpath
  resources:
    requests:
      # Declare the required volume capacity.
      storage: 20Gi
```

| Parameter | Description |
| --- | --- |
| `accessModes` | The access mode. Valid values: `ReadWriteMany` (default): The volume can be mounted as read-write by many nodes. `ReadWriteOnce`: The volume can be mounted as read-write by a single node. `ReadOnlyMany`: The volume can be mounted as read-only by many nodes. |
| `storageClassName` | The StorageClass to bind. |
| `storage` | Declare the required volume capacity. By default, this serves as a resource request and does not restrict the actual storage space available to the pod. The maximum NAS capacity depends on the specification. See General-purpose NAS and Extreme NAS. However, when `allowVolumeExpansion` is set to `true` in the StorageClass, this value becomes a hard limit. The CSI driver sets a NAS directory quota to enforce this limit. |

Create the PVC.

```shell
kubectl create -f nas-pvc.yaml
```

View the PVs.

```shell
kubectl get pv
```

Expected output: A PV is automatically created and bound to the PVC based on the StorageClass.

```
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                 STORAGECLASS           VOLUMEATTRIBUTESCLASS   REASON   AGE
nas-a7540d97-0f53-4e05-b7d9-557309******   20Gi       RWX            Retain           Bound    default/nas-csi-pvc   alicloud-nas-subpath   <unset>                          5m
```
Console
In the left-side navigation pane of the details page, choose .
On the Persistent Volume Claims page, click Create. Configure the PVC and click Create.
| Configuration item | Description |
| --- | --- |
| Persistent volume claim type | Select NAS. |
| Name | PVC name. Must be unique within the namespace. |
| Allocation mode | Select Dynamically create using storage class. |
| Existing storage class | Click Select storage class and choose the StorageClass you created earlier. |
| Total | Declare the required volume capacity. By default, this serves as a resource request and does not restrict the actual storage space available to the pod. The maximum NAS capacity depends on the specification. See General-purpose NAS and Extreme NAS. However, when `allowVolumeExpansion` is set to `true` in the StorageClass, this value becomes a hard limit. The CSI driver sets a NAS directory quota to enforce this limit. |
| Access mode | The access mode. Valid values: `ReadWriteMany` (default): The volume can be mounted as read-write by many nodes. `ReadWriteOnce`: The volume can be mounted as read-write by a single node. `ReadOnlyMany`: The volume can be mounted as read-only by many nodes. |
3. Create an application and mount NAS
After you create the PVC, mount its bound PV to your application. This section describes how to create two deployments that reference the same PVC to share the same NAS subdirectory.
kubectl
Create two deployments and mount the same PVC to share the same subdirectory of the same NAS file system.
To mount different subdirectories of the same NAS file system to multiple pods, you must create separate StorageClasses and corresponding PVCs for each subdirectory, and then mount each PVC separately.
Create two files named `nginx-1.yaml` and `nginx-2.yaml` that have the following content. The configurations for both applications are nearly identical. Both reference the same PVC.
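Because both manifests follow the same pattern, a minimal sketch of `nginx-1.yaml` is shown below as a reference. The `nginx` image and the `/data` container path are illustrative assumptions; for `nginx-2.yaml`, change the Deployment name to `nas-test-2`.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nas-test-1
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          volumeMounts:
            # Hypothetical container path; adjust to your application.
            - name: nas-pvc
              mountPath: /data
      volumes:
        - name: nas-pvc
          persistentVolumeClaim:
            # Both deployments reference the same PVC.
            claimName: nas-csi-pvc
```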
Create the two deployments.

```shell
kubectl create -f nginx-1.yaml -f nginx-2.yaml
```

Check the pod status.

```shell
kubectl get pod -l app=nginx
```

Expected output:

```
NAME                         READY   STATUS    RESTARTS   AGE
nas-test-1-b75d5b6bc-*****   1/1     Running   0          51s
nas-test-2-b75d5b6bc-*****   1/1     Running   0          44s
```

Check the detailed configuration of both pods to confirm that the PVC is mounted. Replace `<podName>` with the actual pod name.

```shell
kubectl describe pod <podName> | grep "ClaimName:"
```

In the expected output, both pods mount the same PVC and share the same NAS subdirectory.
Console
Repeat the following steps to create two deployments and mount the same PVC to share the same subdirectory of the same NAS file system.
On the ACK Clusters page, click the name of the target cluster. In the left navigation pane, choose .
Click Create from Image. Configure and create the application as prompted.
The following table describes the main parameters. Keep other parameters at their default values. For more information, see Create a Deployment.
Configuration item
Parameter
Description
Application basics
Replicas
Number of replicas for the deployment.
Container configuration
Image name
Image address used to deploy the application.
Required resources
vCPU and memory resources required.
Volumes
Click Add cloud storage claim, then complete the configuration.
Mount source: Select the PVC created earlier.
Container path: Enter the path in the container where the NAS file system will be mounted.
After the deployment completes, you can click the application name on the Stateless page and then confirm that the Pod is running normally on the Container Group tab. The Pod status must be Running.
To verify shared and persistent storage, see Verify shared and persistent storage.
Method 2: Mount using sharepath
In this mode, all PVCs created from this StorageClass use the same NAS directory that is defined in the StorageClass. No new directories are created for individual PVs.
1. Create a StorageClass
A StorageClass acts as a provisioning template for dynamic volumes. It defines where storage resources come from and how they behave.
kubectl
Create a file named `alicloud-nas-sharepath.yaml` that has the following content.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: alicloud-nas-sharepath
mountOptions:
  - nolock,tcp,noresvport
  - vers=3
parameters:
  volumeAs: sharepath
  # Format of the server field: <nas-server-address>:/<path>
  server: "0cd8b4a576-g****.cn-hangzhou.nas.aliyuncs.com:/k8s"
provisioner: nasplugin.csi.alibabacloud.com
reclaimPolicy: Retain
```

| Parameter | Description |
| --- | --- |
| `mountOptions` | Mount options for NAS, including the NFS protocol version. By default, NAS mounts using NFS v3. To specify another version, use `vers=4.0`. For supported NFS versions per NAS type, see NFS protocol. |
| `parameters.volumeAs` | Mount mode. Set to `sharepath`. |
| `parameters.server` | The NAS mount target address and the subdirectory path to mount. Format: `<nas-server-address>:/<path>`. `<nas-server-address>`: The mount target address of the NAS file system. See Manage mount targets. `:/<path>`: The NAS subdirectory to mount. If unset or if the subdirectory does not exist, the root directory is mounted by default. General-purpose NAS: The root directory is `/`. Extreme NAS: The root directory is `/share`. When mounting a subdirectory, the path must start with `/share` (for example, `/share/data`). |
| `provisioner` | The type of driver. When you use the Alibaba Cloud NAS CSI component, this parameter is fixed at `nasplugin.csi.alibabacloud.com`. |
| `reclaimPolicy` | Recycling policy for the PV. For sharepath, set this to `Retain`. |

Create the StorageClass.

```shell
kubectl create -f alicloud-nas-sharepath.yaml
```
Console
On the Clusters page, click the name of the target cluster. In the navigation pane on the left, choose .
Click Create. Enter a unique name for the StorageClass in the cluster and set Volume Type to NAS. Complete the creation of the StorageClass as prompted.
| Configuration item | Description |
| --- | --- |
| Select mount target | The mount target address of the NAS file system. |
| Volume mode | Volume access mode. In this example, select Shared directory (sharepath). |
| Mount path | The subdirectory of the NAS file system to mount. If you do not set this parameter, the root directory is mounted by default. If the directory does not exist in the NAS file system, it is automatically created and mounted. General-purpose NAS file system: The root directory is `/`. Extreme NAS file system: The root directory is `/share`. When you mount a subdirectory, the path must start with `/share` (for example, `/share/data`). |
| Recycling policy | For sharepath, set this to `Retain`. |
| Mount options | Mount options for NAS, including the NFS protocol version. By default, NAS mounts using NFS v3. To specify another version, use `vers=4.0`. For supported NFS versions per NAS type, see NFS protocol. |
2. Create a PVC
A PVC represents your application’s request for storage. It triggers the dynamic provisioning mechanism of the StorageClass to automatically create and bind a matching PV.
To enable data sharing across namespaces, this section describes how to create a PVC with the same name in two different namespaces. Although the PVCs share a name, they are independent resources because they reside in separate namespaces. They use the same StorageClass to obtain independent PVs on the same NAS file system.
kubectl
Create the `ns1` and `ns2` namespaces.

```shell
kubectl create ns ns1
kubectl create ns ns2
```

Create a file named `pvc.yaml` to create PVCs that have the same name in the `ns1` and `ns2` namespaces.

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nas-csi-pvc
  namespace: ns1
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: alicloud-nas-sharepath
  resources:
    requests:
      storage: 20Gi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nas-csi-pvc
  namespace: ns2
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: alicloud-nas-sharepath
  resources:
    requests:
      storage: 20Gi
```

| Parameter | Description |
| --- | --- |
| `accessModes` | The access mode. Valid values: `ReadWriteMany` (default): The volume can be mounted as read-write by many nodes. `ReadWriteOnce`: The volume can be mounted as read-write by a single node. `ReadOnlyMany`: The volume can be mounted as read-only by many nodes. |
| `storageClassName` | The StorageClass to bind. |
| `storage` | Declare the required volume capacity. By default, this serves as a resource request and does not restrict the actual storage space available to the pod. The maximum NAS capacity depends on the specification. See General-purpose NAS and Extreme NAS. However, when `allowVolumeExpansion` is set to `true` in the StorageClass, this value becomes a hard limit. The CSI driver sets a NAS directory quota to enforce this limit. |

Create the PVCs.

```shell
kubectl create -f pvc.yaml
```

Check the PV status to confirm that the PVs were automatically created and bound to the PVCs.

```shell
kubectl get pv
```

Expected output: The status of both PVs is `Bound`. The `CLAIM` column shows that each PV is bound to a PVC in a different namespace (`ns1/nas-csi-pvc` and `ns2/nas-csi-pvc`).

```
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM             STORAGECLASS             VOLUMEATTRIBUTESCLASS   REASON   AGE
nas-0b448885-6226-4d22-8a5b-d0768c******   20Gi       RWX            Retain           Bound    ns1/nas-csi-pvc   alicloud-nas-sharepath   <unset>                          74s
nas-bcd21c93-8219-4a11-986b-fd934a******   20Gi       RWX            Retain           Bound    ns2/nas-csi-pvc   alicloud-nas-sharepath   <unset>                          74s
```
Console
Create namespaces.
On the Clusters page, find the cluster you want and click its name. In the left-side navigation pane, click Namespaces and Quotas.
Click Create. Follow the prompts to create the `ns1` and `ns2` namespaces.
In the left-side navigation pane of the details page, choose .
Create a persistent volume claim in the `ns1` namespace.
On the Persistent Volume Claims page, set Namespace to ns1 and follow the prompts to create the PVC.
| Configuration item | Description |
| --- | --- |
| Persistent volume claim type | Select NAS. |
| Name | PVC name. Must be unique within the namespace. |
| Allocation mode | Select Dynamically create using storage class. |
| Existing storage class | Click Select storage class and choose the StorageClass created earlier. |
| Total | Declare the required volume capacity. By default, this serves as a resource request and does not restrict the actual storage space available to the pod. The maximum NAS capacity depends on the specification. See General-purpose NAS and Extreme NAS. However, when `allowVolumeExpansion` is set to `true` in the StorageClass, this value becomes a hard limit. The CSI driver sets a NAS directory quota to enforce this limit. |
| Access mode | The access mode. Valid values: `ReadWriteMany` (default): The volume can be mounted as read-write by many nodes. `ReadWriteOnce`: The volume can be mounted as read-write by a single node. `ReadOnlyMany`: The volume can be mounted as read-only by many nodes. |
Repeat the previous step to create a PVC in the `ns2` namespace.
After the PVCs are created, return to the Persistent Volume Claims page and confirm that the PVCs in the `ns1` and `ns2` namespaces are bound to the automatically created PVs.
3. Create an application and mount NAS
After you create the PVC, mount its bound PV to your application. This section describes how to create applications in two namespaces and mount their respective PVCs to share the NAS subdirectory that is defined in the StorageClass.
kubectl
Create two files named `nginx-ns1.yaml` and `nginx-ns2.yaml` that have the following content. The configurations for both applications are nearly identical. Each binds the PVC in its respective namespace.
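Because both manifests follow the same pattern, a minimal sketch of `nginx-ns1.yaml` is shown below as a reference. The `nginx` image and the `/data` container path are illustrative assumptions; for `nginx-ns2.yaml`, change the namespace to `ns2`.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nas-test
  namespace: ns1
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          volumeMounts:
            # Hypothetical container path; adjust to your application.
            - name: nas-pvc
              mountPath: /data
      volumes:
        - name: nas-pvc
          persistentVolumeClaim:
            # Binds the PVC of the same name in this namespace.
            claimName: nas-csi-pvc
```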
Create the two deployments.

```shell
kubectl create -f nginx-ns1.yaml -f nginx-ns2.yaml
```

Check the pod status.

```shell
kubectl get pod -A -l app=nginx
```

Expected output:

```
NAMESPACE   NAME                       READY   STATUS    RESTARTS   AGE
ns1         nas-test-b75d5b6bc-*****   1/1     Running   0          2m19s
ns2         nas-test-b75d5b6bc-*****   1/1     Running   0          2m11s
```

Check the pod configuration to confirm that the PVC is mounted. Replace `<namespace-name>` and `<pod-name>` with the actual namespace and pod names.

```shell
kubectl describe pod -n <namespace-name> <pod-name> | grep "ClaimName:"
```

In the expected output, the two pods mount `ns1/nas-csi-pvc` and `ns2/nas-csi-pvc`, respectively.
Console
On the ACK Clusters page, click the name of the target cluster. In the left navigation pane, choose .
Create a deployment in the `ns1` namespace and mount the corresponding PVC.
Set Namespace to ns1 and click Create from Image.
Complete the application creation as prompted.
The following table describes the main parameters. Keep other parameters at their default values. For more information, see Create a Deployment.
Configuration item
Parameter
Description
Application basics
Replicas
Number of replicas for the deployment.
Container configuration
Image name
Image address used to deploy the application.
Required resources
vCPU and memory resources required.
Volumes
Click Add cloud storage claim, then complete the configuration.
Mount source: Select the PVC created earlier.
Container path: Enter the path in the container where the NAS file system will be mounted, such as /data.
Repeat the previous step to create a deployment in the `ns2` namespace and mount the corresponding PVC.
You can return to the Stateless page to check the deployment status of the two deployments in the ns1 and ns2 namespaces, and to confirm that the pods are running properly with their corresponding PVCs mounted.
To verify shared and persistent storage, see Verify shared and persistent storage.
Method 3: Mount using filesystem
The filesystem mode is suitable for scenarios where you need to dynamically create and manage dedicated NAS file systems and mount targets for your application. Unlike the sharepath mode, each PV created in filesystem mode maps to an independent NAS file system instance.
Each filesystem-mode PV maps to one independent NAS file system and one mount target.
By default, NAS file systems and mount targets are retained when you delete a filesystem-mode PV. To delete them along with the PV, configure the following in the StorageClass:

- `reclaimPolicy: Delete`
- `parameters.deleteVolume: "true"`
When you use ACK dedicated clusters, you must grant the appropriate permissions to csi-provisioner.
1. Create a StorageClass
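The detailed configuration mirrors the subpath and sharepath methods. As a reference only, the following is a minimal sketch of a filesystem-mode StorageClass; the `vpcId` and `vSwitchId` parameter names and values are assumptions for the network of the dynamically created mount target, and `alicloud-nas-fs` is a hypothetical name.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: alicloud-nas-fs
mountOptions:
  - nolock,tcp,noresvport
  - vers=3
parameters:
  volumeAs: filesystem
  # Delete the dynamically created file system and mount target
  # together with the PV. Use with caution.
  deleteVolume: "true"
  # Hypothetical network settings for the dynamically created mount target.
  vpcId: "vpc-2zesnslmjbsirmtw****"
  vSwitchId: "vsw-2zetu749zhwdcdgkg****"
provisioner: nasplugin.csi.alibabacloud.com
reclaimPolicy: Delete
```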
2. Create a PVC
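A PVC for filesystem mode follows the same pattern as the other modes. The sketch below assumes a filesystem-mode StorageClass with the hypothetical name `alicloud-nas-fs`.

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nas-fs-pvc
spec:
  accessModes:
    - ReadWriteMany
  # Hypothetical StorageClass name; replace with your filesystem-mode StorageClass.
  storageClassName: alicloud-nas-fs
  resources:
    requests:
      storage: 20Gi
```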
3. Create an application and mount NAS
Verify shared and persistent storage
After you successfully deploy your application, you can verify that the volume works as expected. This section uses the subpath mode and the nas-test-1 and nas-test-2 deployments for verification.
| Shared storage | Persistent storage |
| --- | --- |
| Create a file in one pod and check for it in another pod to verify shared storage. | Restart the deployment and check for the file in the new pod to verify persistent storage. |
Production best practices
Security and data protection
Use the Retain recycling policy: Set `reclaimPolicy` to `Retain` in your StorageClass to prevent accidental deletion of backend data when you delete a PVC.

Use permission groups for access control: NAS uses permission groups to manage network access permissions. Follow the principle of least privilege. In the permission group, add only the private IP addresses of cluster nodes or the vSwitch CIDR block to which they belong. Avoid granting overly broad permissions, such as `0.0.0.0/0`.
Performance and cost optimization
Select a suitable NAS type: See Select a file system type to choose a NAS type based on the IOPS and throughput requirements of your application.
Optimize mount options (`mountOptions`): Adjust NFS mount parameters based on workload characteristics. For example, using the `vers=4.0` or `vers=4.1` protocol version may provide better performance and file locking capabilities in some scenarios. For large-scale file reads and writes, you can test adjusting the `rsize` and `wsize` parameters to optimize read and write performance.
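For example, a StorageClass `mountOptions` block tuned for large sequential I/O might look like the following sketch. The `rsize` and `wsize` values are illustrative starting points to benchmark, not recommendations.

```yaml
mountOptions:
  - nolock,tcp,noresvport
  # NFS v4.1 may improve performance and locking; verify support for your NAS type.
  - vers=4.1
  # Illustrative read/write block sizes in bytes; benchmark before adopting.
  - rsize=1048576
  - wsize=1048576
```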
O&M and reliability
Configure health checks: Configure liveness probes for application pods to check if the mount target is normal. If a mount fails, ACK can automatically restart the pod to trigger a remount of the volume.
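The probe idea above can be sketched as the following pod-template fragment. The `/data` mount path, probe command, and timings are hypothetical examples; adapt them to your application.

```yaml
containers:
  - name: app
    image: nginx
    volumeMounts:
      - name: nas-pvc
        mountPath: /data
    livenessProbe:
      exec:
        # Fails when the NAS mount is unresponsive, so kubelet restarts
        # the pod and triggers a remount of the volume.
        command: ["ls", "/data"]
      initialDelaySeconds: 10
      periodSeconds: 30
      timeoutSeconds: 10
volumes:
  - name: nas-pvc
    persistentVolumeClaim:
      claimName: nas-csi-pvc
```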
Monitoring and alerting: Use container storage monitoring to configure alerting and promptly detect volume exceptions or performance bottlenecks.
Resource cleanup guidance
To avoid unexpected charges, release the related resources in the following order when you no longer need the NAS storage volumes.
Delete workloads
Action: Delete all applications that use the NAS storage volume, such as Deployments and StatefulSets. This stops pods from mounting the volume and reading data from or writing data to the volume.
Command example:

```shell
kubectl delete deployment <your-deployment-name>
```
Delete PVCs
Action: Delete the PVCs that are associated with your applications. After you delete a PVC, the behavior of its bound PV and the backend NAS depends on the `reclaimPolicy` that is defined in the corresponding StorageClass.

subpath mode

- `reclaimPolicy: Retain`: After you delete the PVC, the bound PV enters the `Released` state. The PV object and the corresponding NAS subdirectory and data are retained. You must delete them manually.
- `reclaimPolicy: Delete`: After you delete the PVC, the bound PV is automatically deleted. How the backend NAS subdirectory is handled depends on the `archiveOnDelete` parameter:
  - `archiveOnDelete: "true"`: The backend data is not deleted. Instead, the data is renamed and archived as `archived-{pvName}.{timestamp}`.
  - `archiveOnDelete: "false"`: The subdirectory on the backend NAS that corresponds to the PV and all its data are permanently deleted. Use this with caution.

In ACK Serverless clusters, due to permission restrictions, even if you set `reclaimPolicy: Delete`, the backend NAS directory and data are not deleted or archived. Only the PV object is deleted.
sharepath mode

The `reclaimPolicy` only supports `Retain`. After you delete the PVC, the bound PV enters the `Released` state. Because the directory is a shared directory, the PV object, the backend NAS shared directory, and its data are retained.

filesystem mode

- `reclaimPolicy: Retain`: After you delete the PVC, the bound PV enters the `Released` state. The PV object, the dynamically created backend NAS file system, and the mount target are all retained.
- `reclaimPolicy: Delete`: After you delete the PVC, the bound PV is automatically deleted. How the backend NAS file system is handled depends on the `deleteVolume` parameter:
  - `deleteVolume: "false"`: The backend NAS file system and mount target are retained. You must delete them manually.
  - `deleteVolume: "true"`: The backend NAS file system and mount target are automatically deleted. Use this with caution.
Command example:

```shell
kubectl delete pvc <your-pvc-name>
```
Delete a PV
Action: You can delete a PV when its status is `Available` or `Released`. This action only removes the PV definition from the Kubernetes cluster. It does not delete the data on the backend NAS file system.

Command example:

```shell
kubectl delete pv <your-pv-name>
```
Delete the backend NAS file system (Optional)
- `subpath` and `sharepath` modes: For more information, see Delete a file system. This action permanently deletes all data on the NAS, and the data cannot be recovered. Exercise caution when you perform this action. Before you perform this action, confirm that the NAS has no business dependencies.
- `filesystem` mode: If the backend NAS file system is not automatically deleted when you delete the PVC, see Delete a file system to locate and manually delete the corresponding file system.
References
If you encounter issues when you mount and use NAS storage volumes, see the following documents for troubleshooting.
CNFS lets you independently manage NAS file systems, which improves NAS file system performance and Quality of Service (QoS) control. For more information, see Manage NAS file systems with CNFS.
NAS volumes that are mounted using the subpath mode support directory quotas. For more information, see Set directory quotas for NAS dynamically provisioned volumes.
