If you want to share an Apsara File Storage NAS (NAS) volume with multiple Kubernetes applications or pods, you can use Container Network File System (CNFS) to mount a dynamically provisioned NAS volume in sharepath mode. This topic describes how to use CNFS to share a dynamically provisioned NAS volume.
Prerequisites
- A Container Service for Kubernetes (ACK) cluster is created. The Container Storage
Interface (CSI) plug-in is used as the volume plug-in. For more information, see Create a managed Kubernetes cluster.
- If you want to create a new cluster, you must select Dynamically Provision Volumes by Using the Default NAS File Systems and CNFS when you select CSI as the volume plug-in.
- If you want to use an existing cluster for which Dynamically Provision Volumes by Using the Default NAS File Systems and CNFS was not selected during cluster creation, you must use CNFS to manage NAS file systems. For more information, see Use CNFS to manage NAS file systems.
- The versions of csi-plugin and csi-provisioner are 1.20.5-ff6490f-aliyun or later.
For more information about how to upgrade csi-plugin and csi-provisioner, see Upgrade CSI-Plugin and CSI-Provisioner.
- A kubectl client is connected to the cluster. For more information, see Connect to ACK clusters by using kubectl.
Procedure
You must manually create a StorageClass of the sharepath type and set the containerNetworkFileSystem parameter to the name of the CNFS file system that you want to use.
- Run the following command to query the name of the CNFS file system:
kubectl get cnfs
Expected output:
NAME                                      AGE
default-cnfs-nas-837d6ea-20210819155623   17d
- Create a StorageClass.
- Use the following YAML template to create a storageclass.yaml file:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: alibabacloud-cnfs-nas-sharepath
mountOptions:
  - nolock,tcp,noresvport
  - vers=3
parameters:
  volumeAs: "sharepath"
  containerNetworkFileSystem: "default-cnfs-nas-837d6ea-20210819155623"
  path: "/sharepath"
provisioner: nasplugin.csi.alibabacloud.com
reclaimPolicy: Retain
allowVolumeExpansion: true
Notice You must set volumeAs to sharepath and reclaimPolicy to Retain. If you set reclaimPolicy to Delete, the persistent volume claim (PVC) that you create for dynamic provisioning may remain in the Pending state.
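Because a wrong reclaimPolicy only surfaces later as a Pending PVC, it can help to verify the two fields that the Notice calls out before you apply the manifest. The following sketch is a hypothetical pre-flight check; the heredoc recreates only the relevant part of storageclass.yaml so that the commands are self-contained, and in practice you would run the two greps against your real file.

```shell
# Pre-flight check for the two fields the Notice calls out. The heredoc
# stands in for your real storageclass.yaml; in practice, run the greps
# against the file you are about to apply.
cat > storageclass.yaml <<'EOF'
parameters:
  volumeAs: "sharepath"
provisioner: nasplugin.csi.alibabacloud.com
reclaimPolicy: Retain
EOF

if grep -q 'volumeAs: "sharepath"' storageclass.yaml \
   && grep -q 'reclaimPolicy: Retain' storageclass.yaml; then
  echo "storageclass.yaml pre-flight OK"
else
  echo "check volumeAs and reclaimPolicy before applying" >&2
fi
```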
- Run the following command to create a StorageClass:
kubectl create -f storageclass.yaml
- Create PVC 1 and Deployment 1 that uses PVC 1.
- Use the following YAML template to create a nas-sharepath1.yaml file:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-nas-sharepath1
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: alibabacloud-cnfs-nas-sharepath
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-nas-sharepath1
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
        volumeMounts:
        - name: pvc-nas-sharepath1
          mountPath: "/data"
      volumes:
      - name: pvc-nas-sharepath1
        persistentVolumeClaim:
          claimName: pvc-nas-sharepath1
- Run the following command to create PVC 1 and Deployment 1 that uses PVC 1:
kubectl create -f nas-sharepath1.yaml
- Create a file in the mount directory of Deployment 1.
- Run the following command to query Deployment 1:
kubectl get pod
Expected output:
NAME                                         READY   STATUS    RESTARTS   AGE
deployment-nas-sharepath1-586686b789-v4bwt   1/1     Running   0          6m
- Run the following commands to create a test.txt file in the /data directory of Deployment 1 and write "hello world" to the file:
kubectl exec deployment-nas-sharepath1-586686b789-v4bwt -ti sh
cd /data
echo "hello world" > test.txt
- Create PVC 2 and Deployment 2 that uses PVC 2.
- Use the following YAML template to create a nas-sharepath2.yaml file:
Note
- We recommend that you set the storageClassName parameter of PVC 2 to the same value as that of PVC 1. In this example, the storageClassName parameters of both PVC 1 and PVC 2 are set to alibabacloud-cnfs-nas-sharepath.
- The mountPath parameters of Deployment 1 and Deployment 2 must be set to the same value. This allows Deployment 1 and Deployment 2 to share the same NAS subdirectory. In this example, both mountPath parameters are set to /data.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-nas-sharepath2
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: alibabacloud-cnfs-nas-sharepath
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-nas-sharepath2
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
        volumeMounts:
        - name: pvc-nas-sharepath2
          mountPath: "/data"
      volumes:
      - name: pvc-nas-sharepath2
        persistentVolumeClaim:
          claimName: pvc-nas-sharepath2
- Run the following command to create PVC 2 and Deployment 2 that uses PVC 2:
kubectl create -f nas-sharepath2.yaml
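The Note above can be turned into a quick consistency check: sharing only works when both PVCs use the same StorageClass and both Deployments mount the same path. The following sketch is hypothetical; stand-in excerpt files are created inline so the commands run anywhere, and in practice you would point the greps at your real nas-sharepath1.yaml and nas-sharepath2.yaml files.

```shell
# Consistency check for the two manifests. The stand-in excerpts below
# contain only the two lines that matter for sharing; replace them with
# your real nas-sharepath1.yaml and nas-sharepath2.yaml in practice.
dir="${TMPDIR:-/tmp}/cnfs-sharepath-demo"
mkdir -p "$dir"
for i in 1 2; do
  cat > "$dir/nas-sharepath$i.yaml" <<'EOF'
  storageClassName: alibabacloud-cnfs-nas-sharepath
          mountPath: "/data"
EOF
done

sc1=$(grep -o 'storageClassName: .*' "$dir/nas-sharepath1.yaml")
sc2=$(grep -o 'storageClassName: .*' "$dir/nas-sharepath2.yaml")
mp1=$(grep -o 'mountPath: .*' "$dir/nas-sharepath1.yaml")
mp2=$(grep -o 'mountPath: .*' "$dir/nas-sharepath2.yaml")
[ "$sc1" = "$sc2" ] && [ "$mp1" = "$mp2" ] && echo "manifests consistent"
```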
- Verify that the files of Deployment 1 can be found in the mount path of Deployment 2.
- Run the following command to query Deployment 2:
kubectl get pod
Expected output:
NAME                                         READY   STATUS    RESTARTS   AGE
deployment-nas-sharepath2-74b85f4c86-tj9xz   1/1     Running   0          6m
- Run the following command to log on to the /data directory of Deployment 2:
kubectl exec deployment-nas-sharepath2-74b85f4c86-tj9xz -ti sh
cd /data
- Run the following command to query files in the /data directory:
ls -arlth
Expected output:
total 8.5K
drwxr-xr-x 2 root root 4.0K Sep 6 03:56 .
-rw-r--r-- 1 root root 14 Sep 6 03:56 test.txt
drwxr-xr-x 1 root root 4.0K Sep 6 03:57 ..
The expected output indicates that a test.txt file can be found in the /data directory of Deployment 2.
- Run the following command to query the content of the test.txt file:
cat test.txt
Expected output:
hello world
The expected output indicates that the content of the test.txt file found in the /data directory of Deployment 2 is the same as the content that you previously wrote into
the test.txt file in the /data directory of Deployment 1.
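This works because, in sharepath mode, the provisioner does not create a separate NAS subdirectory for each PVC: every PV created from this StorageClass points at the same path (/sharepath in this example), so all mounts resolve to one directory. The following local sketch (no cluster required) mimics that behavior with an ordinary directory standing in for the shared NAS subdirectory.

```shell
# Local illustration of sharepath semantics: both "mounts" are the same
# directory, which stands in for the shared NAS subdirectory /sharepath.
share="$(mktemp -d)"   # the shared NAS subdirectory
mount1="$share"        # Deployment 1's /data mount
mount2="$share"        # Deployment 2's /data mount

echo "hello world" > "$mount1/test.txt"   # write through Deployment 1's mount
cat "$mount2/test.txt"                    # read through Deployment 2's mount
```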
- Delete Deployment 2, PVC 2, and the related persistent volume (PV). The PV name is automatically generated. To find the PV that is bound to pvc-nas-sharepath2 in your cluster, run the kubectl get pv command.
kubectl delete deployment/deployment-nas-sharepath2 pvc/pvc-nas-sharepath2 pv/nas-daf9f0f0-16cf-412c-9a81-ad8b53490293
Expected output:
deployment.apps "deployment-nas-sharepath2" deleted
persistentvolumeclaim "pvc-nas-sharepath2" deleted
persistentvolume "nas-daf9f0f0-16cf-412c-9a81-ad8b53490293" deleted
- Verify that the content of the test.txt file in the /data directory of Deployment 1 still exists.
- Run the following command to log on to the /data directory of Deployment 1:
kubectl exec deployment-nas-sharepath1-586686b789-v4bwt -ti sh
cd /data
- Run the following command to query files in the /data directory:
ls
Expected output:
test.txt
The output indicates that the test.txt file still exists in the /data directory of Deployment 1.
- Run the following command to query the content of the test.txt file:
cat test.txt
Expected output:
hello world
The output indicates that the NAS subdirectory is still mounted to Deployment 1 after
Deployment 2, PVC 2, and the related PV are deleted.