You can use the Container Storage Interface (CSI) driver to mount a dynamically provisioned Apsara File Storage NAS (NAS) volume to a Container Service for Kubernetes (ACK) cluster in subpath or filesystem mode. This topic describes how to use a dynamically provisioned NAS volume and how to verify that NAS volumes provide persistent storage and can be shared across pods.

Prerequisites

  • An ACK cluster is created. For more information, see Create a managed Kubernetes cluster.
  • A NAS file system is created. For more information, see Create a NAS file system.

    If you want to encrypt data in a NAS volume, configure the encryption settings when you create the NAS file system.

  • A mount target is created for the NAS file system. For more information, see Manage mount targets.

    The mount target and the cluster node to which you want to mount the NAS file system must belong to the same virtual private cloud (VPC).

Scenarios

  • Your application requires high disk I/O.
  • You need a storage service that offers higher read and write throughput than Object Storage Service (OSS).
  • You want to share files across hosts. For example, you want to use a NAS file system as a file server.

Usage notes

  • To mount an Extreme NAS file system, set the path parameter of the StorageClass to a subdirectory of /share. For example, a value of 0cd8b4a576-g****.cn-hangzhou.nas.aliyuncs.com:/share/subpath indicates that the mounted subdirectory of the NAS file system is /share/subpath.
  • If a NAS file system is mounted to multiple pods, the data in the file system is shared by the pods. In this case, the application must be able to synchronize data across these pods when data modifications are made by multiple pods.
    Note You cannot grant permissions to access the / directory (root directory) of the NAS file system. The user account and user group to which the directory belongs cannot be modified.
  • If the securityContext.fsGroup parameter is set in the application template, kubelet performs the chmod or chown operation on the volume after it is mounted, which increases the time required to mount the volume (see the example below).
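    The following snippet is a minimal sketch that only illustrates where securityContext.fsGroup appears in a pod template. The pod name is hypothetical, and the PVC name is an example that you must replace with your own.
    apiVersion: v1
    kind: Pod
    metadata:
      name: fsgroup-demo              # hypothetical name, for illustration only
    spec:
      securityContext:
        fsGroup: 1000                 # kubelet changes the group ownership of the mounted volume to GID 1000
      containers:
      - name: app
        image: nginx:1.7.9
        volumeMounts:
        - name: nas-pvc
          mountPath: "/data"
      volumes:
      - name: nas-pvc
        persistentVolumeClaim:
          claimName: nas-csi-pvc      # replace with the name of your NAS PVC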

Use the console to create a dynamically provisioned NAS volume

In the ACK console, you can create only dynamically provisioned NAS volumes in subpath mode. To create dynamically provisioned NAS volumes in filesystem mode, you must use kubectl.

Step 1: Create a StorageClass

  1. Log on to the ACK console.
  2. In the left-side navigation pane of the ACK console, click Clusters.
  3. On the Clusters page, find the cluster that you want to manage and click its name.
  4. In the left-side navigation pane of the details page, choose Volumes > StorageClasses.
  5. On the StorageClasses page, click Create in the upper-right corner.
  6. In the Create dialog box, set the parameters.
    The following list describes some of the parameters:
    • Name: The name of the StorageClass. The name can contain lowercase letters, digits, periods (.), and hyphens (-), and must start with a lowercase letter.
    • PV Type: Select Cloud Disk or NAS. In this example, NAS is selected.
    • Volume Plug-in: By default, CSI is selected.
    • Reclaim Policy: The reclaim policy of the NAS volume. Default value: Delete. You can also set this parameter to Retain.
      • Delete: When a persistent volume claim (PVC) is deleted, the related PV and NAS file system are deleted.
      • Retain: When a PVC is deleted, the related PV and NAS file system are retained and can be deleted only manually.
      If you require higher data security, we recommend that you use the Retain policy to prevent data loss caused by user errors.
    • Mount Options: The optional parameters for mounting NAS volumes, such as the Network File System (NFS) version that you want to use.
    • Mount Target Domain Name: The mount target of the NAS file system that you want to mount. If no mount target is available, you must create a NAS file system first. For more information, see Use CNFS to manage NAS file systems.
    • Path: The path of the NAS file system that you want to mount.
  7. After you set the parameters, click Create.
    After the StorageClass is created, you can view it in the StorageClass list.

Step 2: Create a PVC

  1. In the left-side navigation pane of the details page, choose Volumes > Persistent Volume Claims.
  2. On the Persistent Volume Claims page, click Create in the upper-right corner.
  3. In the Create PVC dialog box, set the following parameters.
    • PVC Type: You can select Cloud Disk, NAS, or OSS. In this example, NAS is selected.
    • Name: The name of the PVC. The name must be unique in the cluster.
    • Allocation Mode: In this example, Use StorageClass is selected.
    • Existing Storage Class: Click Select. In the Select Storage Class dialog box, find the StorageClass that you created and click Select in the Actions column.
    • Capacity: The capacity of the PV that you want to create.
    • Access Mode: Default value: ReadWriteOnce. You can also select ReadOnlyMany or ReadWriteMany.
  4. Click Create.
    After the PVC is created, you can view it in the PVC list. The PVC is bound to a PV.

Step 3: Create an application

  1. In the left-side navigation pane of the details page, choose Workloads > Deployments.
  2. In the upper-right corner of the Deployments page, click Create from Image.
  3. Set the application parameters.
    This example shows how to set the volume parameters. For more information about other parameters, see Create a stateless application by using a Deployment.
    You can add local volumes and cloud volumes.
    • Add Local Storage: You can select HostPath, ConfigMap, Secret, or EmptyDir from the PV Type drop-down list. Then, set the Mount Source and Container Path parameters to mount the volume to a container path. For more information, see Volumes.
    • Add PVC: You can add cloud volumes.
    In this example, a NAS volume is specified as the mount source and mounted to the /tmp path in the container.
  4. Set the other parameters and click Create.
    After the application is created, you can use the NAS volume to store application data.

Use kubectl to create a dynamically provisioned NAS volume in subpath mode

The subpath mode is applicable to scenarios where you want to share a NAS volume among different applications or pods. You can also use this mode to mount different subdirectories of the same NAS file system to different pods.

To mount a dynamically provisioned NAS volume in subpath mode, you must manually create a NAS file system and a mount target.

  1. Create a NAS file system and a mount target.
    1. Log on to the NAS console.
    2. Create a NAS file system. For more information, see Create a NAS file system.
    3. Create a mount target. For more information, see Manage mount targets.
  2. Create a StorageClass.
    1. Create an alicloud-nas-subpath.yaml file and copy the following content into the file:
      apiVersion: storage.k8s.io/v1
      kind: StorageClass
      metadata:
        name: alicloud-nas-subpath
      mountOptions:
      - nolock,tcp,noresvport
      - vers=3
      parameters:
        volumeAs: subpath
        server: "0cd8b4a576-g****.cn-hangzhou.nas.aliyuncs.com:/k8s/"
      provisioner: nasplugin.csi.alibabacloud.com
      reclaimPolicy: Retain
      The following list describes some of the parameters:
      • mountOptions: The mount options of the NAS volume. For example, you can specify the NFS version in this field.
      • volumeAs: Valid values: subpath and filesystem. A value of subpath indicates that a subdirectory of a NAS file system is mounted to the cluster. A value of filesystem indicates that an entire NAS file system is mounted to the cluster.
      • server: The mount target of the NAS file system whose subdirectory is mounted as a persistent volume (PV).
      • provisioner: The type of the driver. In this example, the parameter is set to nasplugin.csi.alibabacloud.com, which indicates that the NAS CSI plug-in provided by Alibaba Cloud is used.
      • reclaimPolicy: The reclaim policy of the PV. Default value: Delete. You can also set this parameter to Retain.
        • Delete: When a PVC is deleted, the related PV and NAS file system are deleted.
        • Retain: When a PVC is deleted, the related PV and NAS file system are retained and can be deleted only manually.
        If you require higher data security, we recommend that you use the Retain policy to prevent data loss caused by user errors.
      • archiveOnDelete: Specifies how the backend storage is handled when reclaimPolicy is set to Delete. Because NAS is a shared storage service, you must set both reclaimPolicy and archiveOnDelete to ensure data security. Configure this parameter in the parameters section (see the example at the end of this step). Default value: true, which indicates that the subdirectory and its files are not deleted when the PV is deleted. Instead, the subdirectory is renamed in the format of archived-{pvName}.{timestamp}. If you set the value to false, the backend storage is deleted when the PV is deleted.
        Note We recommend that you do not set the value to false when your service receives a large amount of network traffic. For more information, see What do I do if the task queue of alicloud-nas-controller is full and PVs cannot be created when I use a dynamically provisioned NAS volume?.
    2. Run the following command to create a StorageClass:
      kubectl create -f alicloud-nas-subpath.yaml
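      The StorageClass above uses reclaimPolicy: Retain. If you instead want the backend subdirectory to be reclaimed when the PVC is deleted, you can set reclaimPolicy to Delete and configure archiveOnDelete in the parameters section, as described in the preceding parameter list. The following manifest is a minimal sketch of such a StorageClass; the name is hypothetical and the mount target is the same placeholder used above.
      apiVersion: storage.k8s.io/v1
      kind: StorageClass
      metadata:
        name: alicloud-nas-subpath-delete      # hypothetical name used only in this sketch
      mountOptions:
      - nolock,tcp,noresvport
      - vers=3
      parameters:
        volumeAs: subpath
        server: "0cd8b4a576-g****.cn-hangzhou.nas.aliyuncs.com:/k8s/"
        archiveOnDelete: "true"                # rename the subdirectory to archived-{pvName}.{timestamp} instead of deleting it
      provisioner: nasplugin.csi.alibabacloud.com
      reclaimPolicy: Delete
      With this configuration, deleting the PVC removes the PV, and the backend subdirectory is archived rather than deleted because archiveOnDelete is set to true.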
  3. Create a PVC.
    1. Create a pvc.yaml file and copy the following content into the file:
      kind: PersistentVolumeClaim
      apiVersion: v1
      metadata: 
        name: nas-csi-pvc
      spec:
        accessModes:
        - ReadWriteMany 
        storageClassName: alicloud-nas-subpath
        resources: 
          requests:
            storage: 20Gi
      The following list describes the parameters:
      • name: The name of the PVC.
      • accessModes: The access mode of the PVC.
      • storageClassName: The name of the StorageClass that you want to associate with the PVC.
      • storage: The capacity claimed by the PVC.
    2. Run the following command to create a PVC:
      kubectl create -f pvc.yaml
  4. Create applications.
    Deploy two applications named nginx-1 and nginx-2 to share the same subdirectory of the NAS file system.
    1. Create an nginx-1.yaml file and copy the following content into the file:
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: deployment-nas-1
        labels:
          app: nginx-1
      spec:
        selector:
          matchLabels:
            app: nginx-1
        template:
          metadata:
            labels:
              app: nginx-1
          spec:
            containers:
            - name: nginx
              image: nginx:1.7.9
              ports:
              - containerPort: 80
              volumeMounts:
                - name: nas-pvc
                  mountPath: "/data"
            volumes:
              - name: nas-pvc
                persistentVolumeClaim:
                  claimName: nas-csi-pvc
      • mountPath: the mount path in the container where the NAS volume is mounted.
      • claimName: the name of the PVC that is mounted to the application. In this example, the value is set to nas-csi-pvc.
    2. Create an nginx-2.yaml file and copy the following content into the file:
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: deployment-nas-2
        labels:
          app: nginx-2
      spec:
        selector:
          matchLabels:
            app: nginx-2
        template:
          metadata:
            labels:
              app: nginx-2
          spec:
            containers:
            - name: nginx
              image: nginx:1.7.9
              ports:
              - containerPort: 80
              volumeMounts:
                - name: nas-pvc
                  mountPath: "/data"
            volumes:
              - name: nas-pvc
                persistentVolumeClaim:
                  claimName: nas-csi-pvc
      • mountPath: the mount path in the container where the NAS volume is mounted. In this example, the value is set to /data.
      • claimName: the name of the PVC that is mounted to the application. The same PVC, nas-csi-pvc, is specified so that nginx-1 and nginx-2 share the same subdirectory of the NAS file system.
    3. Run the following command to deploy applications nginx-1 and nginx-2:
      kubectl create -f nginx-1.yaml -f nginx-2.yaml
  5. Run the following command to query pods:
    kubectl get pod

    Expected output:

    NAME                                READY   STATUS    RESTARTS   AGE
    deployment-nas-1-5b5cdb85f6-n****   1/1     Running   0          32s
    deployment-nas-2-c5bb4746c-4****    1/1     Running   0          32s
    Note The subdirectory 0cd8b4a576-g****.cn-hangzhou.nas.aliyuncs.com:/k8s/nas-79438493-f3e0-11e9-bbe5-00163e09**** of the NAS file system is mounted to the /data directory of the pods deployment-nas-1-5b5cdb85f6-n**** and deployment-nas-2-c5bb4746c-4****. In the subdirectory path:
    • /k8s: the subdirectory that is specified in the server parameter of the StorageClass and mounted in subpath mode.
    • nas-79438493-f3e0-11e9-bbe5-00163e09****: the name of the PV.

    To mount different subdirectories of a NAS file system to different pods, you must create a separate PVC for each pod. To do this, you can create pvc-1 for nginx-1 and pvc-2 for nginx-2.
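    For example, the following manifest is a minimal sketch of two such PVCs; the names pvc-1 and pvc-2 are illustrative. Because each PVC that uses the alicloud-nas-subpath StorageClass is provisioned with its own subdirectory, referencing pvc-1 in nginx-1 and pvc-2 in nginx-2 mounts different subdirectories to the two applications.
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: pvc-1                  # used by nginx-1
    spec:
      accessModes:
      - ReadWriteMany
      storageClassName: alicloud-nas-subpath
      resources:
        requests:
          storage: 20Gi
    ---
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: pvc-2                  # used by nginx-2
    spec:
      accessModes:
      - ReadWriteMany
      storageClassName: alicloud-nas-subpath
      resources:
        requests:
          storage: 20Gi
    Then set claimName to pvc-1 in the nginx-1 Deployment and to pvc-2 in the nginx-2 Deployment.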

Use kubectl to create a dynamically provisioned NAS volume in filesystem mode

Notice By default, if you delete a PV that is mounted in filesystem mode, the system retains the related NAS file system and mount target. To delete the NAS file system and mount target together along with the PV, set reclaimPolicy to Delete and deleteVolume to true in the StorageClass configurations.

The filesystem mode is applicable to scenarios where you want to dynamically create and delete NAS file systems and mount targets.

When you mount a NAS volume in filesystem mode, only one NAS file system and one mount target are created for each PV, and the NAS file system is not shared among multiple PVs. The following procedure demonstrates how to mount a dynamically provisioned NAS volume in filesystem mode.

  1. Configure a Resource Access Management (RAM) policy and attach it to a RAM role.
    The filesystem mode allows you to dynamically create and delete NAS file systems and mount targets. To enable this feature, you must grant the required permissions to csi-nasprovisioner. The following code block shows a RAM policy that contains the required permissions:
    {
        "Action": [
            "nas:DescribeMountTargets",
            "nas:CreateMountTarget",
            "nas:DeleteFileSystem",
            "nas:DeleteMountTarget",
            "nas:CreateFileSystem"
        ],
        "Resource": [
            "*"
        ],
            "Effect": "Allow"
    }

    You can grant the permissions by using one of the following methods:

    • Add the required permissions to the RAM policy that is attached to the master RAM role of your ACK cluster. For more information, see ACK default roles and Attach a custom RAM policy.
      Note The master RAM role is automatically assigned to a managed Kubernetes cluster. However, for a dedicated Kubernetes cluster, you must manually assign the master RAM role.
    • Create a RAM user and attach the preceding RAM policy to the RAM user. Then, generate an AccessKey pair for the RAM user and specify the AccessKey pair in the env variables of the csi-provisioner configuration. For more information, see ACK default roles.
      env:
      - name: CSI_ENDPOINT
        value: unix://socketDir/csi.sock
      - name: ACCESS_KEY_ID
        value: ""
      - name: ACCESS_KEY_SECRET
        value: ""
  2. Create a StorageClass.
    1. Create an alicloud-nas-fs.yaml file and copy the following content into the file:
      apiVersion: storage.k8s.io/v1
      kind: StorageClass
      metadata:
        name: alicloud-nas-fs
      mountOptions:
      - nolock,tcp,noresvport
      - vers=3
      parameters:
        volumeAs: filesystem
        storageType: Performance
        zoneId: cn-hangzhou-a
        vpcId: "vpc-2ze9c51qb5kp1nfqu****"
        vSwitchId: "vsw-gw8tk6gecif0eu9ky****"
        accessGroupName: DEFAULT_VPC_GROUP_NAME
        deleteVolume: "false"
      provisioner: nasplugin.csi.alibabacloud.com
      reclaimPolicy: Retain
      The following list describes some of the parameters:
      • volumeAs: The mode in which the NAS volume is mounted. Valid values:
        • filesystem: csi-nasprovisioner automatically creates a NAS file system. Each PV corresponds to a NAS file system.
        • subpath: csi-nasprovisioner automatically creates a subdirectory in a NAS file system. Each PV corresponds to a subdirectory of the NAS file system.
      • storageType: The type of the NAS file system. Valid values: Performance and Capacity. Default value: Performance.
      • zoneId: The ID of the zone to which the NAS file system belongs.
      • vpcId: The ID of the VPC to which the mount target of the NAS file system belongs.
      • vSwitchId: The ID of the vSwitch to which the mount target of the NAS file system belongs.
      • accessGroupName: The permission group to which the mount target of the NAS file system belongs. Default value: DEFAULT_VPC_GROUP_NAME.
      • deleteVolume: Specifies whether the NAS file system is deleted when the related PV is deleted. Because NAS is a shared storage service, you must specify both deleteVolume and reclaimPolicy to ensure data security.
      • provisioner: The type of the driver. In this example, the parameter is set to nasplugin.csi.alibabacloud.com, which indicates that the NAS CSI plug-in provided by Alibaba Cloud is used.
      • reclaimPolicy: The reclaim policy of the PV. The related NAS file system is automatically deleted when you delete the PVC only if you set deleteVolume to true and reclaimPolicy to Delete.
    2. Run the following command to create a StorageClass:
      kubectl create -f alicloud-nas-fs.yaml
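      As stated in the Notice at the beginning of this section, the NAS file system and mount target are deleted together with the PV only when reclaimPolicy is set to Delete and deleteVolume is set to true. The following manifest is a minimal sketch of such a StorageClass; the name is hypothetical, and the zone, VPC, and vSwitch IDs are the same placeholders used above and must be replaced with your own values.
      apiVersion: storage.k8s.io/v1
      kind: StorageClass
      metadata:
        name: alicloud-nas-fs-delete    # hypothetical name used only in this sketch
      mountOptions:
      - nolock,tcp,noresvport
      - vers=3
      parameters:
        volumeAs: filesystem
        storageType: Performance
        zoneId: cn-hangzhou-a
        vpcId: "vpc-2ze9c51qb5kp1nfqu****"
        vSwitchId: "vsw-gw8tk6gecif0eu9ky****"
        accessGroupName: DEFAULT_VPC_GROUP_NAME
        deleteVolume: "true"            # delete the NAS file system when the PV is deleted
      provisioner: nasplugin.csi.alibabacloud.com
      reclaimPolicy: Delete             # required together with deleteVolume: "true"
      Use this configuration only if the data in the NAS file system does not need to be retained after the PVC is deleted.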
  3. Create a PVC and pods to mount a NAS volume.
    1. Create a pvc.yaml file and copy the following content into the file:
      kind: PersistentVolumeClaim
      apiVersion: v1
      metadata:
        name: nas-csi-pvc-fs
      spec:
        accessModes:
          - ReadWriteMany
        storageClassName: alicloud-nas-fs
        resources:
          requests:
            storage: 20Gi
    2. Create an nginx.yaml file and copy the following content into the file:
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: deployment-nas-fs
        labels:
          app: nginx
      spec:
        selector:
          matchLabels:
            app: nginx
        template:
          metadata:
            labels:
              app: nginx
          spec:
            containers:
            - name: nginx
              image: nginx:1.7.9
              ports:
              - containerPort: 80
              volumeMounts:
                - name: nas-pvc
                  mountPath: "/data"
            volumes:
              - name: nas-pvc
                persistentVolumeClaim:
                  claimName: nas-csi-pvc-fs
    3. Run the following command to create the PVC and pods:
      kubectl create -f pvc.yaml -f nginx.yaml

In filesystem mode, the CSI driver automatically creates a NAS file system and a mount target when you create the PVC. When the PVC is deleted, the file system and the mount target are retained or deleted based on the settings of the deleteVolume and reclaimPolicy parameters.

Verify that the NAS file system can be used to persist data

Data is persisted in the NAS file system. After a pod is deleted and recreated, the data in the file system remains the same as before the pod was deleted.

Perform the following steps to verify that data is persisted to the NAS file system.

  1. Query the pods that run the application and the files in the mounted NAS file system.
    1. Run the following command to query the pods that run the application:
      kubectl get pod 

      Expected output:

      NAME                                READY   STATUS    RESTARTS   AGE
      deployment-nas-1-5b5cdb85f6-n****   1/1     Running   0          32s
      deployment-nas-2-c5bb4746c-4****    1/1     Running   0          32s
    2. Run the following command to query files in the /data path of a pod. The pod named deployment-nas-1-5b5cdb85f6-n**** is used as an example:
      kubectl exec deployment-nas-1-5b5cdb85f6-n**** -- ls /data
      No output is returned. This indicates that no file exists in the /data path.
  2. Run the following command to create a file named nas in the /data path of the pod deployment-nas-1-5b5cdb85f6-n****:
    kubectl exec deployment-nas-1-5b5cdb85f6-n**** -- touch /data/nas
  3. Run the following command to query files in the /data path of the pod deployment-nas-1-5b5cdb85f6-n****:
    kubectl exec deployment-nas-1-5b5cdb85f6-n**** -- ls /data

    Expected output:

    nas
  4. Run the following command to delete the pod:
    kubectl delete pod deployment-nas-1-5b5cdb85f6-n****
  5. Open another kubectl CLI and run the following command to view how the pod is deleted and recreated:
    kubectl get pod -w -l app=nginx
  6. Verify that the file still exists after the pod is deleted.
    1. Run the following command to query the name of the recreated pod:
      kubectl get pod

      Expected output:

      NAME                                READY   STATUS    RESTARTS   AGE
      deployment-nas-1-5b5cdm2g5-m****    1/1     Running   0          32s
      deployment-nas-2-c5bb4746c-4****    1/1     Running   0          32s
    2. Run the following command to query files in the /data path of the pod deployment-nas-1-5b5cdm2g5-m****:
      kubectl exec deployment-nas-1-5b5cdm2g5-m**** -- ls /data

      Expected output:

      nas
      The nas file still exists in the /data path. This indicates that data is persisted to the NAS file system.

Verify that data in the NAS file system can be shared across pods

A NAS file system can be mounted to multiple pods at the same time. Modifications to data in the file system performed by one pod are automatically synchronized to other pods.

Perform the following steps to verify that the NAS file system can be shared across pods.

  1. Query the pods that run the application and the files in the mounted NAS file system.
    1. Run the following command to query the pods that run the application:
      kubectl get pod 

      Expected output:

      NAME                                READY   STATUS    RESTARTS   AGE
      deployment-nas-1-5b5cdb85f6-n****   1/1     Running   0          32s
      deployment-nas-2-c5bb4746c-4****    1/1     Running   0          32s
    2. Run the following command to query files in the /data path of each pod:
      kubectl exec deployment-nas-1-5b5cdb85f6-n**** -- ls /data
      kubectl exec deployment-nas-2-c5bb4746c-4**** -- ls /data
  2. Run the following command to create a file named nas in the /data path of a pod:
     kubectl exec deployment-nas-1-5b5cdb85f6-n**** -- touch /data/nas
  3. Query files in the /data path of each pod.
    1. Run the following command to query files in the /data path of the pod deployment-nas-1-5b5cdb85f6-n****:
      kubectl exec deployment-nas-1-5b5cdb85f6-n**** -- ls /data

      Expected output:

      nas
    2. Run the following command to query files in the /data path of the pod deployment-nas-2-c5bb4746c-4****:
      kubectl exec deployment-nas-2-c5bb4746c-4**** -- ls /data

      Expected output:

      nas
      After you create a file in the /data path of one pod, you can also find the file you created in the /data path of the other pod. This indicates that data in the NAS file system is shared by the two pods.