
File Storage NAS: Mount a dynamically provisioned NAS volume using NFS

Last Updated: Dec 01, 2025

Network Attached Storage (NAS) volumes are ideal for scenarios such as big data analytics, data sharing, web applications, and log persistence. Instead of manually creating and configuring static storage resources, you can use a persistent volume claim (PVC) and a StorageClass to dynamically provision them. Doing so allows the system to automatically create a persistent volume (PV) for you. You can mount a dynamically provisioned NAS volume using three methods: subpath, sharepath, and filesystem.

Prerequisites

  • The CSI plug-in is installed in the cluster. If an upgrade is required, refer to Upgrade csi-plugin and csi-provisioner.

    Note

    If your cluster uses FlexVolume, upgrade to CSI, because FlexVolume is deprecated. For details, see Upgrade from FlexVolume to CSI. To verify your storage component type, go to the Add-ons page, and click the Storage tab.

  • The File Storage NAS service is activated.

    If you are using it for the first time, follow the on-screen instructions on the File Storage NAS product page to activate the service.

Limitations

  • Mounting NAS file systems that use the SMB protocol is not supported.

  • NAS file systems can only be mounted to pods within the same virtual private cloud (VPC). Cross-VPC mounting is not supported.

    Note

    Within the same VPC, NAS volumes can be mounted across different availability zones (AZs).

  • General-purpose and Extreme NAS file systems have different constraints regarding connectivity, the number of file systems, and protocol types. For details, see Limits.

Usage notes

Mounting methods

The volumeAs parameter in a StorageClass defines the relationship between a PV and a NAS file system or its subdirectory. Select a mounting method as needed.

  • Using subpath: Creates a subdirectory-type PV, where each PV corresponds to a unique subdirectory within the same NAS file system. Use this method when multiple pods mount the same subdirectory of a NAS file system, or when multiple pods mount different subdirectories of the same NAS file system.

  • Using sharepath: Creates PVs that all point to the same shared directory specified in the StorageClass. No new subdirectories are created per PV. Use this method when multiple pods across different namespaces need to mount the same NAS subdirectory.

  • Using filesystem (Not recommended): Automatically creates a NAS file system for each PV. One PV corresponds to an entire NAS file system. Use this method when an application requires a dedicated NAS file system that is dynamically created and deleted with the workload.

Mount using the subpath method

Important

The subpath method requires Container Storage Interface (CSI) component version 1.31.4 or later. To upgrade, see Upgrade csi-plugin and csi-provisioner.

Step 1: Get NAS file system and mount target information

  1. Log on to the NAS console. In the left navigation pane, choose File System > File System List.

  2. Create a NAS file system and a mount target.

    Ensure the file system uses the Network File System (NFS) protocol and the mount target is in the same VPC as your cluster nodes.

    • If you have an existing NAS file system, make sure that it meets these requirements.

    • If you don't have an existing NAS file system, create one. For instructions, see Create a file system and Manage mount targets.

  3. Get the mount target address.

    1. Click the file system ID. In the left navigation pane, click Mount Targets.

    2. In the Mount Target section, confirm the status is Available and copy the mount target address.
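
Optionally, before you create the StorageClass, you can verify that the mount target is reachable from the cluster. The following is a minimal check that assumes you can log on to a cluster node and that the nc utility is installed; NFS listens on TCP port 2049. Replace the placeholder with your mount target address.

    nc -zv <mount-target-address> 2049

A connection succeeded message indicates that the node can reach the NAS service over the VPC.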

Step 2: Create a StorageClass

kubectl

  1. Modify the following YAML manifest and save it as alicloud-nas-subpath.yaml.

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: alicloud-nas-subpath
    mountOptions:
    - nolock,tcp,noresvport
    - vers=3
    parameters:
      volumeAs: subpath
      server: "0cd8b4a576-g****.cn-hangzhou.nas.aliyuncs.com:/k8s"
      archiveOnDelete: "true"
    provisioner: nasplugin.csi.alibabacloud.com
    reclaimPolicy: Retain
    allowVolumeExpansion: true

    Parameter

    Description

    mountOptions

    The mount options for the NAS volume, including the NFS protocol version. We recommend using NFSv3. Extreme NAS only supports NFSv3.

    parameters

    volumeAs

    The mount method. In this example, the value is set to subpath to create a subdirectory-type PV. One PV corresponds to one subdirectory in a NAS file system.

    server

    The address of the mount target and the subdirectory of the NAS file system to mount. The format is <NAS mount target address>:<mount directory>. If you do not specify a subdirectory, the root directory / is mounted by default.

    archiveOnDelete

    Specifies whether to delete the backend storage data when reclaimPolicy is set to Delete. This parameter is added for confirmation because NAS is a shared storage service.

    • true (default): The directory or file is retained and is renamed to archived-{pvName}.{timestamp}.

    • false: The backend storage resource is permanently deleted.

    Note
    • For high-traffic workloads, setting this parameter to false is not recommended. For more information, see NAS volume FAQ.

    • To completely delete the backend storage data, you must set parameters.archiveOnDelete to false using kubectl.

    provisioner

    The driver type. The value must be set to nasplugin.csi.alibabacloud.com, indicating that the Alibaba Cloud NAS CSI plugin is used.

    reclaimPolicy

    The PV reclaim policy. The default value is Delete. Retain is also supported.

    • Delete: This value must be used with archiveOnDelete.

      • When archiveOnDelete is true, the files in the NAS file system are renamed but not deleted when the PVC is deleted.

      • When archiveOnDelete is false, the files in the NAS file system are deleted when the PVC is deleted.

        Important

        The subpath directory and its files in the NAS file system will be deleted. The NAS file system itself is retained. To delete the NAS file system, see Delete a file system.

    • Retain: When the PVC is deleted, the PV and the files in the NAS file system are retained. You must manually delete them.

    If data security is a high priority, we recommend setting this parameter to Retain to prevent accidental data loss.

    allowVolumeExpansion

    Supported for General-purpose NAS file systems only. If set to true, a directory quota is configured for the PV that is dynamically created by the StorageClass to limit the available capacity. You can also update the PVC to expand the volume capacity. For more information, see Expand a dynamically provisioned NAS volume.

    Note

    The NAS quota takes effect asynchronously. After a PV is dynamically created, the directory quota may not take effect immediately. If you write a large amount of data in a short period, the storage usage may exceed the capacity limit. See Directory quotas.

  2. Create the StorageClass.

    kubectl create -f alicloud-nas-subpath.yaml
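
    You can optionally confirm that the StorageClass was created before you move on:

    kubectl get sc alicloud-nas-subpath

    The output should list alicloud-nas-subpath with the provisioner nasplugin.csi.alibabacloud.com.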

Console

  1. Log on to the ACK console. In the left navigation pane, click Clusters.

  2. On the Clusters page, find the cluster you want and click its name. In the left-side pane, choose Volumes > StorageClasses.

  3. On the StorageClasses page, click Create.

  4. In the dialog box that appears, configure the parameters and click OK.

    The following table describes the configurations:

    Configuration

    Description

    Example

    Name

    The StorageClass name. See the UI for formatting requirements.

    alicloud-nas-subpath

    PV Type

    Select NAS.

    NAS

    Select Mount Target

    The address of the NAS file system's mount target.

    0cd8b4a576-g****.cn-hangzhou.nas.aliyuncs.com

    Volume Mode

    The access mode of the volume. In this example, select Subdirectory to use the subpath method. A unique subdirectory is automatically created under the mount path for each PV. Data is stored at <NAS mount target>:<mount path>/<pv-name>/.

    Note

    This mode requires CSI component version 1.31.4 or later. Otherwise, the system defaults to Shared Directory mode.

    Subdirectory

    Mount Path

    The subdirectory on the NAS file system to mount.

    • If not set, the root directory is mounted by default.

    • If the specified directory does not exist, it will be automatically created and then mounted.

    Note

    The root directory is / for General-purpose NAS file systems and /share for Extreme NAS file systems. When you mount a subdirectory on an Extreme NAS file system, the path must start with /share, such as /share/data.

    /k8s

    Reclaim Policy

    The reclaim policy for the PV. Retain is recommended to prevent accidental data loss.

    • Delete: This parameter must be configured together with archiveOnDelete. In the console, selecting Delete does not actually delete the data on the NAS volume when you delete the PVC, because the underlying archiveOnDelete parameter cannot be configured through the UI. To configure archiveOnDelete, create the StorageClass using a YAML manifest. For a YAML template, see the kubectl tab.

    • Retain: When the PVC is deleted, the PV and the data on the NAS volume are not deleted. You must manually delete them.

    Retain

    Mount Options

    The mount options for the NAS volume, including the NFS protocol version. We recommend using NFSv3. Extreme NAS only supports NFSv3.

    Keep the default value

    After the StorageClass is created, you can view it in the StorageClasses list.

Step 3: Create a PVC

kubectl

  1. Modify the YAML manifest and save it as nas-pvc.yaml.

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata: 
      name: nas-csi-pvc
    spec:
      accessModes:
      - ReadWriteMany 
      storageClassName: alicloud-nas-subpath
      resources: 
        requests:
          storage: 20Gi

    Parameter

    Description

    accessModes

    The access mode for the volume. The default value is ReadWriteMany. ReadWriteOnce and ReadOnlyMany are also supported.

    storageClassName

    The name of the StorageClass to bind.

    storage

    The capacity of the volume that you want to request.

    Important
    • By default, the actual available capacity of a NAS volume is not limited by this configuration. It is determined by the specifications of the NAS file system. For more information, see General-purpose NAS and Extreme NAS.

    • If you use a General-purpose NAS file system and set allowVolumeExpansion of the StorageClass to true, the CSI component sets a directory quota based on this configuration to limit the available capacity of the NAS volume.

  2. Create the PVC.

    kubectl create -f nas-pvc.yaml
  3. Verify that the PV was created and bound to the PVC.

    kubectl get pv

    The output should show the STATUS as Bound, indicating that the CSI component automatically created a PV based on the StorageClass and bound the PV to the PVC.

    NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                 STORAGECLASS             VOLUMEATTRIBUTESCLASS   REASON   AGE
    nas-a7540d97-0f53-4e05-b7d9-557309******   20Gi       RWX            Retain           Bound    default/nas-csi-pvc   alicloud-nas-subpath     <unset>                          5m
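
    To see which NAS subdirectory was provisioned for the PV, you can describe the PV. The PV name below is the example name from the preceding output; replace it with your own.

    kubectl describe pv nas-a7540d97-0f53-4e05-b7d9-557309******

    The CSI volume attributes in the output should reference the mount target and the subdirectory created for this PV under the /k8s directory configured in the StorageClass.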

Console

  1. In the left-side navigation pane of the details page, choose Volumes > Persistent Volume Claims.

  2. On the Persistent Volume Claims page, click Create.

  3. On the Create PVC dialog box, configure the parameters and click OK.

    Configuration

    Description

    Example

    PVC Type

    Select NAS.

    NAS

    Name

    The PVC name. It must be unique within the namespace.

    pvc-nas

    Allocation Mode

    In this example, select Use StorageClass.

    Use StorageClass

    Existing StorageClass

    Click Select StorageClass and select the one you created in the previous step.

    alicloud-nas-subpath

    Capacity

    The capacity of the volume. This setting does not limit the maximum capacity the application can use.

    Important
    • By default, the actual available capacity of a NAS volume is not limited by this configuration. It is determined by the specifications of the NAS file system. For more information, see General-purpose NAS and Extreme NAS.

    • If you use a General-purpose NAS file system and set allowVolumeExpansion of the StorageClass to true, the CSI component sets a directory quota based on this configuration to limit the available capacity of the NAS volume.

    20Gi

    Access Mode

    The default value is ReadWriteMany. You can also select ReadWriteOnce or ReadOnlyMany.

    ReadWriteMany

Step 4: Create an application and mount the NAS volume

kubectl

Create two Deployments and mount the same PVC to them. This allows them to share the same subdirectory in the same NAS file system.

Note

To assign different pods to unique subdirectories on the same NAS file system, create a distinct StorageClass and PVC for each target directory.
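
For example, the following minimal sketch uses hypothetical names (alicloud-nas-subpath-logs and nas-csi-pvc-logs) and an example parent directory /k8s/logs. PVCs that reference this StorageClass are provisioned subdirectories under /k8s/logs instead of /k8s.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: alicloud-nas-subpath-logs
mountOptions:
- nolock,tcp,noresvport
- vers=3
parameters:
  volumeAs: subpath
  server: "0cd8b4a576-g****.cn-hangzhou.nas.aliyuncs.com:/k8s/logs"   # example parent directory; replace with your own
  archiveOnDelete: "true"
provisioner: nasplugin.csi.alibabacloud.com
reclaimPolicy: Retain
allowVolumeExpansion: true
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nas-csi-pvc-logs
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: alicloud-nas-subpath-logs
  resources:
    requests:
      storage: 20Gi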

  1. Modify the following YAML manifest and save the files as nginx-1.yaml and nginx-2.yaml respectively.

    The following configurations in nginx-1.yaml and nginx-2.yaml are the same, except for the metadata.name value. The two applications are bound to the same PVC.

    nginx-1.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nas-test-1     
      labels:
        app: nginx
    spec:
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: anolis-registry.cn-zhangjiakou.cr.aliyuncs.com/openanolis/nginx:1.14.1-8.6
            ports:
            - containerPort: 80
            volumeMounts:
              - name: nas-pvc
                mountPath: "/data"           # The path where the NAS volume is mounted in the container
          volumes:
            - name: nas-pvc                 
              persistentVolumeClaim:
                claimName: nas-csi-pvc       # Used to bind the PVC
    nginx-2.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nas-test-2     
      labels:
        app: nginx
    spec:
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: anolis-registry.cn-zhangjiakou.cr.aliyuncs.com/openanolis/nginx:1.14.1-8.6
            ports:
            - containerPort: 80
            volumeMounts:
              - name: nas-pvc
                mountPath: "/data"           # The path where the NAS volume is mounted in the container
          volumes:
            - name: nas-pvc                 
              persistentVolumeClaim:
                claimName: nas-csi-pvc       # Used to bind the PVC
  2. Create the two Deployments.

    kubectl create -f nginx-1.yaml -f nginx-2.yaml
  3. Verify the pods are running.

    kubectl get pod -l app=nginx

    The output shows that both pods are running. The two applications mount the same subdirectory of the same NAS file system.

    NAME                         READY   STATUS    RESTARTS   AGE
    nas-test-1-b75d5b6bc-vqwq9   1/1     Running   0          51s
    nas-test-2-b75d5b6bc-8k9vx   1/1     Running   0          44s

Console

Repeat the following steps to create two Deployments that mount the same PVC, enabling them to share a single subdirectory within the NAS file system.

  1. In the navigation pane on the left of the cluster details page, go to Workloads > Deployments.

  2. On the Deployments page, click Create From Image.

  3. Configure the parameters to create the application.

    The following table describes the key parameters. You can keep the default settings for other parameters. For more information, see Create a stateless workload (Deployment).

    Configuration step

    Parameter

    Description

    Example

    Basic Information

    Name

    Enter a custom name for the Deployment. The name must meet the format requirements displayed in the console.

    deployment-nas-1

    Replicas

    Number of pod replicas.

    1

    Container

    Image Name

    The address of the image used to deploy the application.

    anolis-registry.cn-zhangjiakou.cr.aliyuncs.com/openanolis/nginx:1.14.1-8.6

    Required Resources

    The required vCPU and memory resources.

    0.25 vCPU, 512 MiB

    Volume

    Click Add PVC and configure the parameters.

    • Mount Source: Select the PVC that you created.

    • Container Path: Specify the container path to which you want to mount the NAS file system.

    • Mount Source: pvc-nas

    • Container Path: /data


  4. View the application deployment status.

    1. On the Deployments page, click the name of the application.

    2. On the Pods tab, confirm pods are in the Running state.

Mount using the sharepath method

Step 1: Get NAS file system and mount target information

  1. Log on to the NAS console. In the left navigation pane, choose File System > File System List.

  2. Create a NAS file system and a mount target.

    Ensure the file system uses the Network File System (NFS) protocol and the mount target is in the same VPC as your cluster nodes.

    • If you have an existing NAS file system, make sure that it meets these requirements.

    • If you don't have an existing NAS file system, create one. For instructions, see Create a file system and Manage mount targets.

  3. Get the mount target address.

    1. Click the file system ID. In the left navigation pane, click Mount Targets.

    2. In the Mount Target section, confirm the status is Available and copy the mount target address.

Step 2: Create a StorageClass

kubectl

  1. Create a file named alicloud-nas-sharepath.yaml with the following YAML manifest and modify the parameters as needed.

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: alicloud-nas-sharepath
    mountOptions:
    - nolock,tcp,noresvport
    - vers=3
    parameters:
      volumeAs: sharepath
      server: "0cd8b4a576-g****.cn-hangzhou.nas.aliyuncs.com:/sharepath"
    provisioner: nasplugin.csi.alibabacloud.com
    reclaimPolicy: Retain

    Parameter

    Description

    mountOptions

    The mount options for the NAS volume, including the NFS protocol version. We recommend using NFSv3. Extreme NAS only supports NFSv3.

    parameters

    volumeAs

    The mount method. In this example, the value is set to sharepath, which indicates that when a PV is created, no actual directory will be created. Instead, the path specified in the StorageClass will be used. This means that each PV will map to the same NAS directory.

    server

    The mount target address and the subdirectory of the NAS file system to be mounted. The format is <NAS mount target address>:<mount directory>.

    • If you do not specify a subdirectory, the root directory / is mounted by default.

    • If the directory does not exist in the NAS file system, it will be automatically created and then mounted.

    The root directory of a General-purpose NAS file system is /, while that of an Extreme NAS file system is /share. When you mount a subdirectory of an Extreme NAS file system, the path must start with /share, such as /share/data.

    provisioner

    The driver type. The value must be set to nasplugin.csi.alibabacloud.com, indicating that the Alibaba Cloud NAS CSI plugin is used.

    reclaimPolicy

    The PV reclaim policy. When you use the sharepath method, you must set this parameter to Retain.

  2. Create the StorageClass.

    kubectl create -f alicloud-nas-sharepath.yaml

Console

  1. Log on to the ACK console. In the left navigation pane, click Clusters.

  2. On the Clusters page, find the cluster you want and click its name. In the left-side pane, choose Volumes > StorageClasses.

  3. On the StorageClasses page, click Create.

  4. In the dialog box that appears, configure the parameters and click OK.

    The following table describes the key configurations:

    Configuration

    Description

    Example

    Name

    The StorageClass name. See the UI for formatting requirements.

    alicloud-nas-sharepath

    PV Type

    Select NAS.

    NAS

    Select Mount Target

    The address of the NAS file system's mount target.

    0cd8b4a576-g****.cn-hangzhou.nas.aliyuncs.com

    Volume Mode

    The access mode of the volume. In this example, select Shared Directory to use the sharepath method. When a PV is created, no actual directory is created. Instead, the path specified in the StorageClass is used. This means that each PV will map to the same NAS directory. This method is ideal for scenarios where you need to share a directory across namespaces.

    Shared Directory

    Mount Path

    The subdirectory on the NAS file system to mount.

    • If not set, the root directory is mounted by default.

    • If the specified directory does not exist, it will be automatically created and then mounted.

    Note

    The root directory is / for General-purpose NAS file systems and /share for Extreme NAS file systems. When you mount a subdirectory on an Extreme NAS file system, the path must start with /share, such as /share/data.

    /sharepath

    Reclaim Policy

    When you use the sharepath method, you must set this parameter to Retain.

    Retain

    Mount Options

    The mount options for the NAS volume, including the NFS protocol version. We recommend using NFSv3. Extreme NAS only supports NFSv3.

    Keep the default value

    After the StorageClass is created, you can view it in the StorageClasses list.

Step 3: Create a PVC

The following example shows how to create PVCs in two different namespaces.

kubectl

To mount a NAS volume to pods in different namespaces, first create two namespaces.

  1. Create the ns1 and ns2 namespaces.

    kubectl create ns ns1
    kubectl create ns ns2
  2. Modify the following YAML manifest and save it as pvc.yaml.

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata: 
      name: nas-csi-pvc
      namespace: ns1
    spec:
      accessModes:
      - ReadWriteMany 
      storageClassName: alicloud-nas-sharepath
      resources: 
        requests:
          storage: 20Gi
    ---
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata: 
      name: nas-csi-pvc
      namespace: ns2
    spec:
      accessModes:
      - ReadWriteMany 
      storageClassName: alicloud-nas-sharepath
      resources: 
        requests:
          storage: 20Gi

    Parameter

    Description

    accessModes

    The access mode for the volume. The default value is ReadWriteMany. ReadWriteOnce and ReadOnlyMany are also supported.

    storageClassName

    The name of the StorageClass to bind.

    storage

    The capacity of the volume that you want to request.

    Important
    • By default, the actual available capacity of a NAS volume is not limited by this configuration. It is determined by the specifications of the NAS file system. For more information, see General-purpose NAS and Extreme NAS.

    • If you use a General-purpose NAS file system and set allowVolumeExpansion of the StorageClass to true, the CSI component sets a directory quota based on this configuration to limit the available capacity of the NAS volume.

  3. Create the PVCs.

    kubectl create -f pvc.yaml
  4. Verify that the PV was created and bound to the PVC.

    kubectl get pv

    The output should show that the CSI component automatically created two PVs based on the StorageClass and bound them to the two PVCs in different namespaces.

    NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                 STORAGECLASS             VOLUMEATTRIBUTESCLASS   REASON   AGE
    nas-0b448885-6226-4d22-8a5b-d0768c******   20Gi       RWX            Retain           Bound    ns1/nas-csi-pvc       alicloud-nas-sharepath   <unset>                          74s
    nas-bcd21c93-8219-4a11-986b-fd934a******   20Gi       RWX            Retain           Bound    ns2/nas-csi-pvc       alicloud-nas-sharepath   <unset>                          74s

Console

  1. Create the ns1 and ns2 namespaces. For details, see Create a namespace.

  2. In the left-side navigation pane of the details page, choose Volumes > Persistent Volume Claims.

  3. Create a PVC in the ns1 namespace.

    1. On the Persistent Volume Claims page, set Namespace to ns1 and click Create.

    2. On the Create PVC dialog box, configure the parameters and click OK.

      Configuration

      Description

      Example

      PVC Type

      Select NAS.

      NAS

      Name

      The PVC name. It must be unique within the namespace.

      pvc-nas

      Allocation Mode

      In this example, select Use StorageClass.

      Use StorageClass

      Existing StorageClass

      Click Select StorageClass and select the one you created.

      alicloud-nas-sharepath

      Capacity

      The capacity of the volume.

      20Gi

      Access Mode

      The default value is ReadWriteMany. You can also select ReadWriteOnce or ReadOnlyMany.

      ReadWriteMany

  4. Repeat the preceding steps to create another PVC in the ns2 namespace.

  5. Return to the Persistent Volume Claims page. In the ns1 and ns2 namespaces, make sure that the two PVCs are bound to the automatically created PVs.

Step 4: Create an application and mount the NAS volume

Create applications in two different namespaces and mount the PVCs in the corresponding namespaces. The applications will share the NAS directory defined in the StorageClass.

kubectl

  1. Save the following YAML content as nginx-ns1.yaml and nginx-ns2.yaml respectively, and modify them as needed.

    The following configurations in nginx-ns1.yaml and nginx-ns2.yaml are the same, except for the metadata.namespace value. The two applications are bound to the PVC in their respective namespaces.

    nginx-ns1.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nas-test
      namespace: ns1   
    spec:
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: anolis-registry.cn-zhangjiakou.cr.aliyuncs.com/openanolis/nginx:1.14.1-8.6
            ports:
            - containerPort: 80
            volumeMounts:
              - name: nas-pvc
                mountPath: "/data"
          volumes:
            - name: nas-pvc
              persistentVolumeClaim:
                claimName: nas-csi-pvc
    nginx-ns2.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nas-test
      namespace: ns2   
    spec:
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: anolis-registry.cn-zhangjiakou.cr.aliyuncs.com/openanolis/nginx:1.14.1-8.6
            ports:
            - containerPort: 80
            volumeMounts:
              - name: nas-pvc
                mountPath: "/data"
          volumes:
            - name: nas-pvc
              persistentVolumeClaim:
                claimName: nas-csi-pvc
  2. Create the two Deployments.

    kubectl create -f nginx-ns1.yaml -f nginx-ns2.yaml
  3. Verify that the pods are running.

    kubectl get pod -A -l app=nginx

    The output shows that the pods in both namespaces are running. They mount the same subdirectory of the same NAS file system.

    NAMESPACE   NAME                         READY   STATUS    RESTARTS   AGE
    ns1         nas-test-b75d5b6bc-ljvfd     1/1     Running   0          2m19s
    ns2         nas-test-b75d5b6bc-666hn     1/1     Running   0          2m11s
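
    You can optionally confirm that both pods see the same directory. The following commands create an example file from the pod in ns1 and list it from the pod in ns2 (kubectl exec resolves deploy/nas-test to one of the Deployment's pods):

    kubectl -n ns1 exec deploy/nas-test -- touch /data/shared-check.txt
    kubectl -n ns2 exec deploy/nas-test -- ls /data

    If the sharepath volume works as expected, shared-check.txt appears in the output of the second command.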

Console

  1. In the navigation pane on the left of the cluster details page, go to Workloads > Deployments.

  2. Create a Deployment in the ns1 namespace and mount the corresponding PVC.

    1. Set Namespace to ns1 and click Create From Image.

    2. Configure the parameters to create the application.

      The following table describes the key parameters. You can keep the default settings for other parameters. For more information, see Create a stateless workload (Deployment).

      Configuration step

      Parameter

      Description

      Example

      Basic Information

      Name

      Enter a custom name for the Deployment. The name must meet the format requirements displayed in the console.

      nginx

      Replicas

      Number of pod replicas.

      2

      Container

      Image Name

      The address of the image used to deploy the application.

      anolis-registry.cn-zhangjiakou.cr.aliyuncs.com/openanolis/nginx:1.14.1-8.6

      Required Resources

      The required vCPU and memory resources.

      0.25 vCPU, 512 MiB

      Volume

      Click Add PVC and configure the parameters.

      • Mount Source: Select the PVC that you created.

      • Container Path: Specify the container path to which you want to mount the NAS file system.

      • Mount Source: pvc-nas

      • Container Path: /data


  3. Repeat the preceding steps to create another Deployment in the ns2 namespace and mount the corresponding PVC.

  4. Return to the Deployments page. In the ns1 and ns2 namespaces, view the two Deployment statuses. Confirm the pods are running and the corresponding PVCs are mounted.

Mount using the filesystem method

If your application needs to dynamically create and delete NAS file systems and mount targets, use the filesystem method to mount a NAS volume. A pod that uses a filesystem-type NAS volume can create only one file system and one mount target.

Important

By default, when a filesystem-type dynamically provisioned NAS volume is deleted, the file system and mount target are retained. To release the NAS file system and mount target at the same time as the PV resource is released, you must set reclaimPolicy to Delete and deleteVolume to true in the StorageClass.
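
For reference, this behavior corresponds to the following two settings in the StorageClass, shown here as a partial excerpt of the manifest described in Step 2:

parameters:
  deleteVolume: "true"   # delete the NAS file system and mount target together with the PV
reclaimPolicy: Delete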

Step 1: Grant RAM permissions (Required for ACK dedicated clusters only)

Filesystem-type NAS volumes involve the dynamic creation and deletion of NAS file systems and mount targets. Therefore, you must grant the required permissions to the csi-provisioner component for an ACK dedicated cluster.

Grant these permissions using a RAM policy with the following minimum set of actions:

{
    "Action": [
        "nas:DescribeMountTargets",
        "nas:CreateMountTarget",
        "nas:DeleteFileSystem",
        "nas:DeleteMountTarget",
        "nas:CreateFileSystem"
    ],
    "Resource": [
        "*"
    ],
    "Effect": "Allow"
}

Grant the permissions in one of the following ways:

  • Edit the custom policy of the Master RAM role for the ACK dedicated cluster to add the NAS-related permissions shown above. For more information, see Modify the document and description of a custom policy.

  • Create a RAM user, attach the RAM policy described above, generate an AccessKey pair, and add the AccessKey pair to the environment variables of the csi-provisioner component:

    env:
    - name: CSI_ENDPOINT
      value: unix://socketDir/csi.sock
    - name: ACCESS_KEY_ID
      value: ""
    - name: ACCESS_KEY_SECRET
      value: ""
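
    One way to add these variables is to edit the component directly. The following command assumes that csi-provisioner runs as a Deployment named csi-provisioner in the kube-system namespace; adjust the name if your cluster differs:

    kubectl -n kube-system edit deployment csi-provisioner

    Add the ACCESS_KEY_ID and ACCESS_KEY_SECRET entries under the container's env field and save the change to trigger a rolling update.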

Step 2: Create a StorageClass

  1. Modify the following YAML manifest and save it as alicloud-nas-fs.yaml.

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: alicloud-nas-fs
    mountOptions:
    - nolock,tcp,noresvport
    - vers=3
    parameters:
      volumeAs: filesystem
      fileSystemType: standard
      storageType: Performance
      regionId: cn-beijing
      zoneId: cn-beijing-e
      vpcId: "vpc-2ze2fxn6popm8c2mzm****"
      vSwitchId: "vsw-2zwdg25a2b4y5juy****"
      accessGroupName: DEFAULT_VPC_GROUP_NAME
      deleteVolume: "false"
    provisioner: nasplugin.csi.alibabacloud.com
    reclaimPolicy: Retain

    Parameter

    Description

    mountOptions

    The mount options for the NAS volume, including the NFS protocol version. We recommend using NFSv3. Extreme NAS only supports NFSv3.

    parameters

    volumeAs

    The mount method. In this example, the value is set to filesystem to automatically create a NAS file system. One PV corresponds to one NAS file system.

    fileSystemType

    The type of the NAS file system. Valid values:

    • standard (default): General-purpose NAS file system.

    • extreme: Extreme NAS file system.

    storageType

    The storage type of the NAS file system.

    • For General-purpose NAS file systems, the valid values are:

      • Performance (default)

      • Capacity

    • For Extreme NAS file systems, the valid values are:

      • standard (default)

      • advance

    regionId

    The region where the NAS file system resides. The region must be the same as the cluster's region.

    zoneId

    The zone where the NAS file system resides.

    Note

    Within the same VPC, you can mount a NAS file system across zones.

    vpcId

    The VPC where the mount target of the NAS file system resides. The VPC must be the same as the cluster's VPC.

    vSwitchId

    The ID of the vSwitch where the mount target of the NAS file system resides.

    accessGroupName

    The permission group of the mount target. The default value is DEFAULT_VPC_GROUP_NAME.

    deleteVolume

    Specifies whether to delete the PV and the corresponding NAS file system and mount target when the PVC is deleted. Since NAS is a shared file system, you must configure both deleteVolume and reclaimPolicy for security purposes.

    provisioner

    The driver type. The value must be set to nasplugin.csi.alibabacloud.com, indicating that the Alibaba Cloud NAS CSI plugin is used.

    reclaimPolicy

    The PV reclaim policy. The PV and the corresponding NAS file system and mount target are deleted when the PVC is deleted only if this parameter is set to Delete and deleteVolume is set to true.

  2. Create the StorageClass.

    kubectl create -f alicloud-nas-fs.yaml

Step 3: Create a PVC

  1. Modify the following YAML manifest and save it as nas-pvc-fs.yaml.

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: nas-csi-pvc-fs
    spec:
      accessModes:
        - ReadWriteMany
      storageClassName: alicloud-nas-fs
      resources:
        requests:
          storage: 20Gi
  2. Create the PVC.

    kubectl create -f nas-pvc-fs.yaml
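
    Provisioning with the filesystem method can take longer than the subpath method because a NAS file system and a mount target must be created first. You can optionally watch the PVC until it is bound:

    kubectl get pvc nas-csi-pvc-fs -w

    When the STATUS column shows Bound, the new file system also appears in the NAS console.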

Step 4: Create an application and mount the NAS volume

  1. Modify the following YAML manifest and save it as nas-fs.yaml.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: deployment-nas-fs
      labels:
        app: nginx-test
    spec:
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: anolis-registry.cn-zhangjiakou.cr.aliyuncs.com/openanolis/nginx:1.14.1-8.6
            ports:
            - containerPort: 80
            volumeMounts:
              - name: nas-pvc
                mountPath: "/data"
          volumes:
            - name: nas-pvc
              persistentVolumeClaim:
                claimName: nas-csi-pvc-fs
  2. Create the Deployment.

    kubectl create -f nas-fs.yaml
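
    You can optionally verify that the pod is running and that the volume mounted successfully:

    kubectl get pod | grep deployment-nas-fs

    If the pod stays in the ContainerCreating state, run kubectl describe pod on it to inspect the mount events.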

Verify the shared and persistent storage of NAS

The two Deployments created in the subpath example (nas-test-1 and nas-test-2) mount the same NAS volume. You can use the following steps to verify its behavior:

  • To verify shared storage, create a file from one pod and confirm that it is visible from a second pod.

  • To verify persistence, restart the Deployment and check that the file still exists after the new pods are running.

  1. View the pod information.

    kubectl get pod | grep nas-test

    Sample result:

    nas-test-*****a   1/1     Running   0          40s
    nas-test-*****b   1/1     Running   0          40s
  2. Verify shared storage.

    1. Create a file in a pod.

      In this example, the nas-test-*****a pod is used:

      kubectl exec nas-test-*****a -- touch /data/test.txt
    2. View the file from the other pod.

      In this example, the nas-test-*****b pod is used:

      kubectl exec nas-test-*****b -- ls /data

      Expected output shows that the newly created file test.txt is shared:

      test.txt
  3. Verify persistent storage.

    1. Recreate the Deployments.

      kubectl rollout restart deploy nas-test-1 nas-test-2
    2. Wait until the pods are recreated.

      kubectl get pod | grep nas-test

      Sample result:

      nas-test-*****c   1/1     Running   0          67s
      nas-test-*****d   1/1     Running   0          49s
    3. Log on to a recreated pod and check whether the file still exists in the file system.

      In this example, the nas-test-*****c pod is used:

      kubectl exec nas-test-*****c -- ls /data

      The following output shows that the file still exists in the NAS file system and can be accessed from the mount directory in the recreated pod.

      test.txt
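
When you no longer need the example resources, you can clean them up. Because the StorageClass in the subpath example uses reclaimPolicy: Retain, the PVs and the data in the NAS file system are retained and must be deleted manually:

kubectl delete -f nginx-1.yaml -f nginx-2.yaml
kubectl delete -f nas-pvc.yaml
kubectl get pv    # retained PVs change to the Released state and remain until you delete them manually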

FAQs

If you encounter issues when mounting or using NAS volumes, see NAS volume FAQ.

References