This topic explains how to mount a statically provisioned Cloud Parallel File System (CPFS) General-purpose Edition volume to a workload in an ACK cluster. CPFS is designed for high-performance computing (HPC) workloads such as AI training, autonomous driving, gene computing, and video rendering.
Prerequisites
Before you begin, make sure you have:
- csi-plugin and csi-provisioner at version v1.22.11-abbb810e-aliyun or later. To upgrade, see Upgrade CSI components.
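To confirm the requirement is met, you can read the installed version from the csi-plugin image tag (for example with `kubectl -n kube-system get daemonset csi-plugin -o jsonpath='{.spec.template.spec.containers[*].image}'`) and compare it against the minimum. A minimal sketch, using a hypothetical installed version string:

```shell
# Hypothetical installed version, e.g. taken from the csi-plugin image tag
INSTALLED="v1.24.3-aliyun"
REQUIRED="v1.22.11-abbb810e-aliyun"

# sort -V orders version strings; if REQUIRED sorts first (or equal),
# the installed version meets the requirement
if [ "$(printf '%s\n' "$REQUIRED" "$INSTALLED" | sort -V | head -n1)" = "$REQUIRED" ]; then
  echo "version ok"
else
  echo "upgrade required"
fi
```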
Limits
| Limit | Details |
|---|---|
| Region availability | CPFS General-purpose Edition is available only in specific regions. See Available regions. |
| Mount protocol | Only NFS is supported. The POSIX protocol is not supported. |
| Node architecture | x86 only. ContainerOS nodes are not supported. |
| Network | Volumes can only be mounted to clusters in the same VPC. Cross-VPC mounting is not supported. |
| Provisioning | Static provisioning only. Dynamic provisioning is not supported. |
Configure storage components
The configuration method depends on your csi-plugin version.
csi-plugin 1.33 or later
Install the cnfs-nas-daemon component and enable the AlinasMountProxy=true FeatureGate on csi-plugin. This lets the CSI component delegate mounting to cnfs-nas-daemon. For details, see Manage the cnfs-nas-daemon component.
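FeatureGates of this kind are typically passed to the component as a command-line argument. The fragment below is only an assumption about what the relevant part of the csi-plugin container spec might look like; the actual args layout in your cluster may differ, so prefer the procedure in Manage the cnfs-nas-daemon component:

```yaml
# Hypothetical fragment of the csi-plugin DaemonSet container spec;
# the exact args list in your cluster may differ.
containers:
  - name: csi-plugin
    args:
      - --feature-gates=AlinasMountProxy=true
```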
csi-plugin earlier than 1.33
- Apply a ConfigMap to enable NFS mounting for CPFS General-purpose Edition.

  ```shell
  cat << EOF | kubectl apply -f -
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: csi-plugin
    namespace: kube-system
  data:
    cpfs-nas-enable: "true"
  EOF
  ```

- Restart csi-plugin to install the required dependencies. This does not affect running services.

  ```shell
  kubectl -n kube-system rollout restart daemonset csi-plugin
  ```

  Expected output:

  ```
  daemonset.apps/csi-plugin restarted
  ```
Step 1: Create a CPFS file system and protocol service
Before creating a persistent volume (PV), you need a CPFS file system, a protocol service, and the mount address from that protocol service.
- Create a CPFS General-purpose Edition file system.
  - Create the file system in the same region as your cluster.
  - If you have an existing file system, open the NAS console, go to File System List, click the file system, and check its version on the Basic Information page. The version must be 2.3.0 or later. If it is earlier, create a new file system.
- Create a protocol service for the file system.
  - Use the same VPC and vSwitch as your cluster.
  - If you have an existing protocol service, verify that it is in the same VPC as the cluster. If not, create a new one.
- Get the mount address of the protocol service. On the Protocol Service page, click Export Directory. In the Mount Address column, copy the address. The address combines the mount target domain name and the exported directory path, for example: cpfs-****.<region-id>.cpfs.aliyuncs.com:/share.
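The PV manifest in the next step takes the two halves of this address separately: the domain name goes into `csi.volumeAttributes.server` and the exported directory into `csi.volumeAttributes.path`. A small shell sketch of the split, using a hypothetical placeholder address:

```shell
# Placeholder mount address copied from the protocol service (hypothetical values)
MOUNT_ADDRESS="cpfs-0123456789-abcdef.cn-shanghai.cpfs.aliyuncs.com:/share"

# Everything before the first colon is the server domain name
# (csi.volumeAttributes.server in the PV)
SERVER="${MOUNT_ADDRESS%%:*}"

# Everything after the first colon is the exported directory
# (csi.volumeAttributes.path in the PV)
EXPORT_PATH="${MOUNT_ADDRESS#*:}"

echo "server: $SERVER"
echo "path:   $EXPORT_PATH"
```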
Step 2: Create a PV and a PVC
A persistent volume (PV) represents the CPFS storage you provisioned. A persistent volume claim (PVC) binds to that PV so pods can use it.
- Create the PV and PVC. Save the following content as cpfs-pv-pvc.yaml.

  ```yaml
  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: cpfs-pv
    labels:
      alicloud-pvname: cpfs-pv # Label used by the PVC selector to bind to this PV
  spec:
    accessModes:
      - ReadWriteMany # Allows multiple nodes to read and write simultaneously
    capacity:
      storage: 20Gi
    csi:
      driver: nasplugin.csi.alibabacloud.com
      volumeAttributes:
        mountProtocol: cpfs-nfs # Mount over the NFS protocol
        path: "/share" # Exported directory path; can be a subdirectory, e.g. /share/dir
        volumeAs: subpath # Creates a subdirectory-type PV
        server: "cpfs-******-******.cn-shanghai.cpfs.aliyuncs.com" # Domain name from the mount address
      volumeHandle: cpfs-pv # Must match the PV name above
    mountOptions:
      - rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport
      - vers=3
  ---
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: cpfs-pvc
  spec:
    accessModes:
      - ReadWriteMany
    resources:
      requests:
        storage: 20Gi # Must not exceed the PV capacity
    selector:
      matchLabels:
        alicloud-pvname: cpfs-pv # Binds to the PV with this label
  ```

PV parameters
| Parameter | Description | Required | Default |
|---|---|---|---|
| labels | Label that the PVC selector uses to match and bind to this PV. | Yes | — |
| accessModes | Access mode for the PV. | Yes | — |
| capacity.storage | Storage capacity of the volume. | Yes | — |
| csi.driver | CSI driver type. Set to nasplugin.csi.alibabacloud.com. | Yes | — |
| csi.volumeAttributes.mountProtocol | Mount protocol. Set to cpfs-nfs to mount over NFS. | Yes | — |
| csi.volumeAttributes.path | Exported directory of the CPFS protocol service, such as /share. Can also be a subdirectory, such as /share/dir. | Yes | — |
| csi.volumeAttributes.volumeAs | PV type. Set to subpath for a subdirectory-type PV. | Yes | — |
| csi.volumeAttributes.server | Domain name from the mount address of the CPFS protocol service. | Yes | — |
| csi.volumeHandle | Unique identifier for the volume. Must match the PV name. | Yes | — |
PVC parameters
| Parameter | Description | Required | Default |
|---|---|---|---|
| accessModes | Access mode that the PVC requests from the PV. | Yes | — |
| selector.matchLabels | Label selector used to bind to a specific PV. | Yes | — |
| resources.requests.storage | Storage capacity requested by the pod. Must not exceed the PV capacity. | Yes | — |
- Apply the manifest.

  ```shell
  kubectl apply -f cpfs-pv-pvc.yaml
  ```
- Verify that the PVC is bound to the PV.

  ```shell
  kubectl get pvc cpfs-pvc
  ```

  The output should show STATUS as Bound:

  ```
  NAME       STATUS   VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
  cpfs-pvc   Bound    cpfs-pv   20Gi       RWX                           <unset>                 18m
  ```
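If you want to script this check rather than inspect the output by eye, one way is to read the STATUS column with awk. The sketch below runs against a placeholder copy of the output; in a live cluster you would set the variable from `kubectl get pvc cpfs-pvc` instead:

```shell
# Placeholder kubectl output; in practice: OUTPUT="$(kubectl get pvc cpfs-pvc)"
OUTPUT='NAME       STATUS   VOLUME    CAPACITY   ACCESS MODES
cpfs-pvc   Bound    cpfs-pv   20Gi       RWX'

# The STATUS column is the second field of the second line
STATUS="$(echo "$OUTPUT" | awk 'NR==2 {print $2}')"

if [ "$STATUS" = "Bound" ]; then
  echo "PVC is bound"
fi
```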
Step 3: Deploy an application and verify the mount
- Create a StatefulSet that uses the CPFS volume. Save the following content as cpfs-test.yaml.

  ```yaml
  apiVersion: apps/v1
  kind: StatefulSet
  metadata:
    name: cpfs-sts
  spec:
    selector:
      matchLabels:
        app: nginx
    serviceName: "nginx"
    replicas: 1
    template:
      metadata:
        labels:
          app: nginx
      spec:
        containers:
          - name: nginx
            image: anolis-registry.cn-zhangjiakou.cr.aliyuncs.com/openanolis/nginx:1.14.1-8.6
            volumeMounts:
              - name: cpfs-pvc
                mountPath: /data
        volumes:
          - name: cpfs-pvc
            persistentVolumeClaim:
              claimName: cpfs-pvc
  ```

- Apply the manifest.

  ```shell
  kubectl apply -f cpfs-test.yaml
  ```
- Verify that the CPFS volume is mounted.

  ```shell
  kubectl exec cpfs-sts-0 -- mount | grep /data
  ```

  A successful mount shows an NFS entry on /data with vers=3:

  ```
  cpfs-******-******.cn-shanghai.cpfs.aliyuncs.com:/share on /data type nfs (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,nolock,noresvport,proto=tcp,port=30000,timeo=600,retrans=2,sec=sys,mountaddr=127.0.1.255,mountvers=3,mountport=30000,mountproto=tcp,local_lock=all,addr=127.0.1.255)
  ```
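This check can also be scripted by grepping the mount table for an NFSv3 entry on /data. A sketch against a shortened placeholder mount line; in a live cluster, set the variable from the `kubectl exec` command instead:

```shell
# Placeholder mount line (hypothetical, abbreviated); in practice:
# MOUNT_LINE="$(kubectl exec cpfs-sts-0 -- mount | grep /data)"
MOUNT_LINE='cpfs-0123456789-abcdef.cn-shanghai.cpfs.aliyuncs.com:/share on /data type nfs (rw,relatime,vers=3,rsize=1048576,wsize=1048576)'

# The entry must be an NFS mount on /data negotiated at protocol version 3
if printf '%s' "$MOUNT_LINE" | grep -q 'on /data type nfs' \
   && printf '%s' "$MOUNT_LINE" | grep -q 'vers=3'; then
  echo "CPFS volume mounted over NFSv3"
fi
```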
/datawithvers=3:cpfs-******-******.cn-shanghai.cpfs.aliyuncs.com:/share on /data type nfs (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,nolock,noresvport,proto=tcp,port=30000,timeo=600,retrans=2,sec=sys,mountaddr=127.0.1.255,mountvers=3,mountport=30000,mountproto=tcp,local_lock=all,addr=127.0.1.255)