Container Service for Kubernetes (ACK) allows you to mount and use isolated Apsara File Storage NAS (NAS) volumes that are managed by Container Network File System (CNFS). Each isolated NAS volume is mapped to a directory in a NAS file system managed by CNFS. These volumes are independent of and isolated from each other. When you want to mount different directories in a NAS file system to multiple Kubernetes applications or pods, you can use CNFS to manage isolated NAS volumes. This topic describes how to use CNFS to manage isolated NAS volumes.
Prerequisites
- A Container Service for Kubernetes (ACK) cluster that runs Kubernetes 1.20 or later is created. The Container Storage Interface (CSI) plug-in is used as the volume plug-in. For more information, see Create an ACK managed cluster.
- The versions of csi-plugin and csi-provisioner are v1.22.11-abbb810e-aliyun or later. For more information about how to update csi-plugin and csi-provisioner, see Install and upgrade the CSI plug-in.
- The version of storage-operator is v1.22.86-041b094-aliyun or later. For more information about how to update storage-operator, see Manage system components.
- A kubectl client is connected to the cluster. For more information, see Obtain the kubeconfig file of a cluster and use kubectl to connect to the cluster.
Background information
For more information about CNFS and how to use CNFS to manage NAS file systems, see CNFS overview and Use CNFS to manage NAS file systems.
Step 1: Create a workload for the isolated NAS volume
- Create a StorageClass named cnfs-nas-sc and reference a CNFS object named cnfs-nas-filesystem in the persistent volume (PV).
- Create a StatefulSet named cnfs-nas-dynamic-sts.
  - In the StatefulSet, use volumeClaimTemplates to create a persistent volume claim (PVC) named pvc-cnfs-nas-dynamic-sts-0.
  - In the StatefulSet, use a BusyBox image to mount the PV and write a temporary file named 1G.tmpfile (1 GB in size) to the mount target.
Run the following command to create the StorageClass and StatefulSet:
cat << EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cnfs-nas-sc
mountOptions:
  - nolock,tcp,noresvport
  - vers=3
parameters:
  volumeAs: subpath
  containerNetworkFileSystem: cnfs-nas-filesystem   # Reference the CNFS object named cnfs-nas-filesystem.
  path: "/"
  archiveOnDelete: "false"
provisioner: nasplugin.csi.alibabacloud.com
reclaimPolicy: Delete
allowVolumeExpansion: true
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cnfs-nas-dynamic-sts
  labels:
    app: busybox
spec:
  serviceName: "busybox"
  replicas: 2
  selector:
    matchLabels:
      app: busybox
  template:
    metadata:
      labels:
        app: busybox
    spec:
      containers:
        - name: busybox
          image: busybox
          command: ["/bin/sh"]
          args: ["-c", "dd if=/dev/zero of=/data/1G.tmpfile bs=1G count=1; sleep 3600;"]
          volumeMounts:
            - mountPath: "/data"
              name: pvc
  volumeClaimTemplates:
    - metadata:
        name: pvc
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: "cnfs-nas-sc"   # Reference the StorageClass named cnfs-nas-sc.
        resources:
          requests:
            storage: 50Gi
EOF
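Before checking the mounts, you can optionally confirm that dynamic provisioning succeeded. The following is a verification sketch; it assumes kubectl is already connected to the cluster and that the PVC names follow the StatefulSet naming convention of template name, StatefulSet name, and ordinal:

```shell
# Check that both PVCs created from the volumeClaimTemplates are Bound.
kubectl get pvc pvc-cnfs-nas-dynamic-sts-0 pvc-cnfs-nas-dynamic-sts-1

# Check that both pods of the StatefulSet are Running.
kubectl get pods -l app=busybox
```

Both PVCs must be in the Bound state and both pods must be Running before the mount checks in the next steps will succeed.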
Step 2: View the mount result
Run the following command to view the mount result:
kubectl exec cnfs-nas-dynamic-sts-0 -- mount | grep nfs
Expected output:
971134b0e8-****.cn-zhangjiakou.nas.aliyuncs.com:/nas-95115c94-2ceb-4a83-b4f4-37bd35df**** on /data type nfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other)
The output indicates that the volume is mounted.
Step 3: Check whether data is persisted to the volume
Run the following command to check whether data is persisted to the volume:
kubectl exec cnfs-nas-dynamic-sts-0 -- ls -arlth /data
Expected output:
total 1G
-rw-r--r-- 1 root root 1.0G Dec 15 12:11 1G.tmpfile
The output indicates that the 1G.tmpfile file is written to the /data directory.
Step 4: Check whether the file is written to the isolated NAS volumes of other pods
Run the following command to check whether the temporary file exists in the pod named cnfs-nas-dynamic-sts-1:
kubectl exec cnfs-nas-dynamic-sts-1 -- ls -arlth /data
Expected output:
total 8.0K
drwxr-xr-x 1 root root 4.0K Dec 15 18:07 ..
drwxr-xr-x 2 root root 4.0K Dec 15 18:07 .
The output indicates that the 1G.tmpfile file exists only in the pod named cnfs-nas-dynamic-sts-0. The file cannot be found in the pod named cnfs-nas-dynamic-sts-1.
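To see why the file is not shared, you can inspect the PVs behind the two PVCs: each isolated NAS volume is provisioned as its own subdirectory of the CNFS-managed file system. The following sketch assumes the NAS CSI driver records the directory in the PV's volumeAttributes; the exact attribute keys may vary with the csi-provisioner version:

```shell
# Print the subdirectory that backs each dynamically provisioned PV.
# Each PV should point to a different directory, which is why
# 1G.tmpfile is visible only in cnfs-nas-dynamic-sts-0.
kubectl get pv -o custom-columns=NAME:.metadata.name,PATH:.spec.csi.volumeAttributes.path
```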