In Alibaba Cloud Container Service for Kubernetes (ACK), you can use Container Network File System (CNFS) to provide storage for applications from a shared General-purpose Cloud Paralleled File System (CPFS) as dynamically provisioned volumes. After you configure a StorageClass, workloads are automatically provisioned with independent persistent volumes (PVs) that are based on isolated CPFS subdirectories. This achieves data isolation and simplifies storage management.
Prerequisites
Your cluster must meet the following requirements based on its version.
For more information, see Upgrade a cluster.
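If you are not sure which set of requirements applies, you can check the cluster version from the node list:

# The VERSION column shows the kubelet version of each node, which reflects the cluster version.
kubectl get nodes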
1.26 and later
The Container Storage Interface (CSI) component is version v1.32.2-757e24b-aliyun or later.
For more information, see Manage CSI components.
The cnfs-nas-daemon component is installed, and the AlinasMountProxy=true feature gate is added to the csi-plugin component to enable cnfs-nas-daemon. For more information, see Manage the cnfs-nas-daemon component.
Earlier than 1.26
The CSI component is version v1.24.11-5221f79-aliyun or later.
For more information, see Manage CSI components.
Client dependencies are installed and the csi-plugin component is restarted.
A General-purpose CPFS file system and a corresponding protocol service have been created in the same Virtual Private Cloud (VPC) as the cluster, and the mount target address has been obtained. For more information, see Create a protocol service and obtain a mount target address.
For optimal performance, create the mount target of the CPFS protocol service and the cluster in the same vSwitch.
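To verify the installed CSI component version, you can check the image tags of the csi-plugin component. The DaemonSet name and namespace below assume a default ACK installation (csi-plugin in the kube-system namespace); adjust them if your deployment differs.

# Print the container image tags of the csi-plugin DaemonSet.
# The image tag contains the component version, for example v1.32.2-757e24b-aliyun.
kubectl -n kube-system get daemonset csi-plugin \
  -o jsonpath='{range .spec.template.spec.containers[*]}{.image}{"\n"}{end}'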
Step 1: Use an NFS client to mount the CPFS file system
Create a CNFS-managed CPFS file system.
The following command creates a CNFS object and a StorageClass. This configuration allows the NFS client to mount the CPFS file system as an isolated volume through CNFS.
cat << EOF | kubectl apply -f -
apiVersion: storage.alibabacloud.com/v1beta1
kind: ContainerNetworkFileSystem
metadata:
  name: cnfs-nfs-cpfs
spec:
  type: cpfs
  reclaimPolicy: Retain
  parameters:
    protocolServer: cpfs-xxxx.xxxx.cpfs.aliyuncs.com  # Enter the domain name of the mount target for the CPFS protocol service.
    useClient: NFSClient  # Use the NFS client to mount.
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cnfs-nfs-cpfs-sc
mountOptions:
  - nolock,tcp,noresvport
  - vers=3
parameters:
  volumeAs: subpath
  containerNetworkFileSystem: "cnfs-nfs-cpfs"  # Reference the created CNFS object cnfs-nfs-cpfs.
  path: "/share"
  archiveOnDelete: "true"
provisioner: nasplugin.csi.alibabacloud.com
reclaimPolicy: Retain
allowVolumeExpansion: true
EOF
CNFS

The following list describes the key parameters of the CNFS object.

- type: The type of volume to create. In this example, it is cpfs.
- reclaimPolicy: The reclaim policy. Only Retain is supported. The CPFS file system is not deleted when the CNFS object is deleted.
- parameters.protocolServer: The domain name of the mount target for the protocol service of the General-purpose CPFS file system.
- parameters.useClient: Set to NFSClient, which indicates that the NFS client is used for mounting.

StorageClass
The following list describes the key parameters of the StorageClass.

- mountOptions: The mount options. You can use the default values in this topic's example.
- parameters.volumeAs: Set to subpath to create a subdirectory-type PV.
- parameters.containerNetworkFileSystem: The name of the associated CNFS object.
- parameters.path: The path that corresponds to the export directory of the General-purpose CPFS protocol service, such as /share. You can also set it to a subdirectory, such as /share/dir. Currently, only regular directories are supported. Filesets are not supported.
- parameters.archiveOnDelete: Specifies whether the data in the backend storage is actually deleted when the PersistentVolumeClaim (PVC) is deleted and reclaimPolicy is set to Delete. Because General-purpose CPFS is shared storage, this option serves as a double confirmation.
  - true (default): The directory or file is not deleted. Instead, it is archived and renamed to archived-{pvName}.{timestamp}.
  - false: The corresponding backend directory and its data are deleted. This operation deletes the CPFS subpath directory and the files within it, but not the CPFS file system itself. To delete the CPFS file system, see Delete a file system.
- reclaimPolicy: The reclaim policy of the PV.
  - Delete: When the PVC is deleted, the backend storage data is processed based on the archiveOnDelete setting.
  - Retain: When the PVC is deleted, the PV and the CPFS file system are not deleted. You must delete them manually. This policy is suitable for scenarios that require high data security because it prevents accidental data deletion.
- allowVolumeExpansion: Optional. Specifies whether the CPFS volume can be expanded.
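After the manifest is applied, you can optionally confirm that both the CNFS object and the StorageClass exist before you provision workloads:

kubectl get cnfs cnfs-nfs-cpfs
kubectl get sc cnfs-nfs-cpfs-sc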
Create a StatefulSet named cnfs-nfs-cpfs-sts and mount the isolated CPFS volumes.

cat << EOF | kubectl apply -f -
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cnfs-nfs-cpfs-sts
spec:
  selector:
    matchLabels:
      app: nginx
  serviceName: "nginx"
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: anolis-registry.cn-zhangjiakou.cr.aliyuncs.com/openanolis/nginx:1.14.1-8.6
        volumeMounts:
        - name: pvc
          mountPath: /data
  volumeClaimTemplates:
  - metadata:
      name: pvc
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "cnfs-nfs-cpfs-sc"  # The name of the StorageClass created in the previous step.
      resources:
        requests:
          storage: 50Gi
EOF
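Optionally, watch the pods start before you continue. The label selector app=nginx comes from the pod template of the StatefulSet above.

# Press Ctrl+C to stop watching after both pods are Running.
kubectl get pods -l app=nginx -w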
Check the CNFS status.

kubectl get cnfs cnfs-nfs-cpfs -o jsonpath='{.status.status}'

The following output is returned. The Available status indicates that the CNFS object is active.

Available

Check the status of the persistent volume claims (PVCs).
kubectl get pvc -o wide | grep cnfs-nfs-cpfs-sc

The following output is returned. Two PVCs are automatically created and bound to the automatically provisioned PVs.

pvc-cnfs-nfs-cpfs-sts-0   Bound   nas-804e8cb1-2355-4026-87fc-ee061e14f5f9   50Gi   RWO   cnfs-nfs-cpfs-sc   <unset>   5m36s   Filesystem
pvc-cnfs-nfs-cpfs-sts-1   Bound   nas-00baf7ff-75dc-440d-bab1-ea8872f1adea   50Gi   RWO   cnfs-nfs-cpfs-sc   <unset>   5m25s   Filesystem
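To see how a PV maps to its backing CPFS subdirectory, you can inspect one of the PVs from the output above. The PV name below is taken from the sample output; substitute the name from your own cluster.

# Inspect the PV spec, including its CSI volume attributes.
kubectl describe pv nas-804e8cb1-2355-4026-87fc-ee061e14f5f9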
Check the pod status.

kubectl get pod | grep cnfs-nfs-cpfs-sts

Expected output:

cnfs-nfs-cpfs-sts-0   1/1   Running   0   9m22s
cnfs-nfs-cpfs-sts-1   1/1   Running   0   9m11s
Confirm that the pod has mounted the CPFS volume.

kubectl exec cnfs-nfs-cpfs-sts-0 -- mount | grep nfs

The following output is returned. It indicates that CNFS successfully mounted the CPFS file system by using the NFS client.

cpfs-********-********.cn-shanghai.cpfs.aliyuncs.com:/share/nas-804e8cb1-2355-4026-87fc-ee061e14f5f9 on /data type nfs (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,nolock,noresvport,proto=tcp,port=30000,timeo=600,retrans=2,sec=sys,mountaddr=127.0.1.255,mountvers=3,mountport=30000,mountproto=tcp,local_lock=all,addr=127.0.1.255)
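Because the StorageClass in this example sets allowVolumeExpansion: true, you can later request more capacity for an individual volume by editing its PVC. The following is a minimal sketch that uses the standard Kubernetes expansion mechanism; the PVC name is taken from the sample output above, and the exact expansion behavior depends on the CSI component.

# Request a larger capacity for one PVC; the CSI driver processes the expansion.
kubectl patch pvc pvc-cnfs-nfs-cpfs-sts-0 \
  -p '{"spec":{"resources":{"requests":{"storage":"100Gi"}}}}'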
Step 2: Verify that the volumes are isolated
Write a 1 GB temporary file to one of the pods, such as cnfs-nfs-cpfs-sts-0, and check whether the write operation is successful. Replace the pod name with the actual pod name if it differs.

kubectl exec cnfs-nfs-cpfs-sts-0 -- sh -c "dd if=/dev/zero of=/data/1G.tmpfile bs=1G count=1; ls -alrth /data"

Expected output:

1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 2.2487 s, 477 MB/s
total 1.1G
drwxr-xr-x 1 root root 4.0K Aug 5 08:46 ..
drwxr-xr-x 2 root root 4.0K Aug 5 09:10 .
-rw-r--r-- 1 root root 1.0G Aug 5 09:10 1G.tmpfile
In the other pod, cnfs-nfs-cpfs-sts-1, confirm that the 1 GB temporary file does not exist.

kubectl exec cnfs-nfs-cpfs-sts-1 -- sh -c "ls -alrth /data"

The following output is returned. It shows that the 1 GB temporary file does not exist in the pod named cnfs-nfs-cpfs-sts-1, which indicates that the volumes used by the two pods in the StatefulSet are isolated from each other.

total 4.5K
drwxr-xr-x 2 root root 4.0K Aug 5 08:46 .
drwxr-xr-x 1 root root 4.0K Aug 5 08:46 ..
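When you no longer need the test workload, you can clean it up. The following is a sketch of one possible cleanup. Because the StorageClass in this example uses reclaimPolicy: Retain, the PVs and the data in the CPFS subdirectories are kept and must be removed manually if you want to reclaim them.

# Delete the test StatefulSet and its PVCs. With reclaimPolicy: Retain,
# the bound PVs and the backing CPFS subdirectories are not deleted automatically.
kubectl delete statefulset cnfs-nfs-cpfs-sts
kubectl delete pvc pvc-cnfs-nfs-cpfs-sts-0 pvc-cnfs-nfs-cpfs-sts-1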