You can mount Cloud Paralleled File System (CPFS) volumes to Container Service for
Kubernetes (ACK) clusters. This topic describes how to install the FlexVolume plug-in
and use CPFS volumes in pods.
Background information
CPFS is a parallel file system. CPFS stores data across multiple data nodes in a cluster
and allows the data to be accessed by multiple clients simultaneously. Therefore, CPFS
can provide data storage services with high input/output operations per second (IOPS),
high throughput, and low latency for large-scale, high-performance computing clusters.
As a shared storage service, CPFS meets the requirements for resource sharing and high
performance and applies to scenarios such as big data, artificial intelligence (AI),
and genetic computing. For more information, see What is CPFS?
Step 1: Install drivers
To use CPFS in an ACK cluster, you must install the following drivers:
- CPFS container driver: the flexvolume-cpfs plug-in that is compatible with all CentOS
versions. You can deploy flexvolume-cpfs to install the CPFS container driver.
- CPFS client driver: the driver that mounts CPFS file systems on a node. This driver
depends on the operating system kernel of the node. You can install the CPFS client
driver in one of the following ways:
- Manually install the driver. For more information, see Mount a file system.
- Deploy flexvolume-cpfs, which automatically installs the CPFS client driver. Automatic
installation does not support all operating system kernels.
You can run the uname -a command on a node to query the kernel version of the operating
system that runs on the node. The CPFS client driver can be automatically installed
on nodes that run one of the following kernel versions:
- 3.10.0-957.5.1
- 3.10.0-957.21.3
- 3.10.0-1062.9.1
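As a quick sketch, the kernel check can be scripted. The helper below is hypothetical (it is not part of the FlexVolume tooling) and simply matches the kernel release string against the supported versions:

```shell
#!/bin/sh
# Hypothetical helper: succeed if the given kernel release starts with one
# of the versions that support automatic CPFS client driver installation.
is_supported_kernel() {
  case "$1" in
    3.10.0-957.5.1*|3.10.0-957.21.3*|3.10.0-1062.9.1*) return 0 ;;
    *) return 1 ;;
  esac
}

# On a node, check the running kernel:
#   is_supported_kernel "$(uname -r)" && echo "automatic installation supported"
```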
Note
- FlexVolume can only install the CPFS client driver on a node. It cannot upgrade the
driver after installation, and each node can run only one version of the CPFS client
driver.
- When you upgrade FlexVolume, only flexvolume-cpfs is upgraded. The CPFS client driver
is not upgraded.
- If cpfs-client and lustre are already deployed on a node, installing flexvolume-cpfs
does not install the CPFS client driver again.
- You can only manually upgrade the CPFS client driver. For more information, see Mount a file system.
- Deploy a YAML template on a node.
- Use kubectl to connect to an ACK cluster from a client computer.
- Create a flexvolume-cpfs.yaml file.
- Copy the following content to the file.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: flexvolume-cpfs
  namespace: kube-system
  labels:
    k8s-volume: flexvolume-cpfs
spec:
  selector:
    matchLabels:
      name: flexvolume-cpfs
  template:
    metadata:
      labels:
        name: flexvolume-cpfs
    spec:
      hostPID: true
      hostNetwork: true
      tolerations:
      - operator: "Exists"
      priorityClassName: system-node-critical
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: type
                operator: NotIn
                values:
                - virtual-kubelet
      containers:
      - name: flexvolume-cpfs
        image: registry.cn-hangzhou.aliyuncs.com/acs/flexvolume:v1.14.8.96-0d85fd1-aliyun
        imagePullPolicy: Always
        securityContext:
          privileged: true
        env:
        - name: ACS_CPFS
          value: "true"
        - name: FIX_ISSUES
          value: "false"
        livenessProbe:
          exec:
            command:
            - sh
            - -c
            - ls /acs/flexvolume
          failureThreshold: 8
          initialDelaySeconds: 15
          periodSeconds: 60
          successThreshold: 1
          timeoutSeconds: 15
        volumeMounts:
        - name: usrdir
          mountPath: /host/usr/
        - name: etcdir
          mountPath: /host/etc/
        - name: logdir
          mountPath: /var/log/alicloud/
        - name: kubeletdir
          mountPath: /var/lib/kubelet
          mountPropagation: Bidirectional
      volumes:
      - name: usrdir
        hostPath:
          path: /usr/
      - name: etcdir
        hostPath:
          path: /etc/
      - name: logdir
        hostPath:
          path: /var/log/alicloud/
      - name: kubeletdir
        hostPath:
          path: /var/lib/kubelet
          type: Directory
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 10%
    type: RollingUpdate
- Run the following command to deploy the YAML file on a node:
kubectl create -f flexvolume-cpfs.yaml
- Check the deployment result.
- Run the following command to query the plug-in status:
kubectl get pod -nkube-system | grep flex
The following output is returned:
flexvolume-97psk 1/1 Running 0 27m
flexvolume-cpfs-dgxfq 1/1 Running 0 98s
flexvolume-cpfs-qpbcb 1/1 Running 0 98s
flexvolume-cpfs-vlrf9 1/1 Running 0 98s
flexvolume-cpfs-wklls 1/1 Running 0 98s
flexvolume-cpfs-xtl9b 1/1 Running 0 98s
flexvolume-j8zjr 1/1 Running 0 27m
flexvolume-pcg4l 1/1 Running 0 27m
flexvolume-tjxxn 1/1 Running 0 27m
flexvolume-x7ljw 1/1 Running 0 27m
Note Pods whose names are prefixed with flexvolume-cpfs run the flexvolume-cpfs plug-in.
Pods whose names do not contain cpfs run the standard FlexVolume plug-in, which mounts
cloud disks, NAS file systems, and OSS buckets. Both plug-ins can be deployed on the
same node.
- Run the following command to check whether the CPFS client driver is installed:
rpm -qa | grep cpfs
The following output is returned:
kmod-cpfs-client-2.10.8-202.el7.x86_64
cpfs-client-2.10.8-202.el7.x86_64
- Run the following command to check whether mount.lustre is installed:
which mount.lustre
The following output is returned:
/usr/sbin/mount.lustre
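The package check above can also be scripted. The function below is hypothetical; it inspects a package list piped from rpm -qa and succeeds only if both CPFS client packages are present:

```shell
#!/bin/sh
# Hypothetical check: succeed only if both CPFS client packages appear in
# the package list read from stdin (for example, piped from `rpm -qa`).
cpfs_client_installed() {
  rpm_list="$(cat)"
  echo "$rpm_list" | grep -q '^kmod-cpfs-client' &&
    echo "$rpm_list" | grep -q '^cpfs-client'
}

# Usage on a node:
#   rpm -qa | cpfs_client_installed && echo "CPFS client driver installed"
```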
Step 2: Use a CPFS volume
To use a CPFS volume in ACK, you must create a CPFS file system and a mount target
in the CPFS console. For more information, see
Create a file system.
Notice When you create the CPFS mount target, select the virtual private cloud (VPC)
in which the ACK cluster is deployed.
In the following example, the mount target is cpfs-*-alup.cn-shenzhen.cpfs.nas.aliyuncs.com@tcp:cpfs--ws5v.cn-shenzhen.cpfs.nas.aliyuncs.com@tcp. The file system ID is 0237ef41.
- Create a PV.
- Create a pv-cpfs.yaml file.
- Copy the following content to the file.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-cpfs
  labels:
    alicloud-pvname: pv-cpfs
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  flexVolume:
    driver: "alicloud/cpfs"
    options:
      server: "cpfs-****-alup.cn-shenzhen.cpfs.nas.aliyuncs.com@tcp:cpfs-***-ws5v.cn-shenzhen.cpfs.nas.aliyuncs.com@tcp"
      fileSystem: "0237ef41"
      subPath: "/k8s"
      options: "ro"
Parameter descriptions:
- server: Set the value to the CPFS mount target.
- fileSystem: Set the value to the CPFS file system ID.
- subPath: Set the value to the CPFS subdirectory that you want to mount to the cluster.
- options: Other mount options. This parameter is optional.
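The options parameter passes extra mount options to the Lustre client; multiple options are typically comma-separated, as with mount(8). As a hypothetical example, the following fragment mounts the subdirectory read-only and enables file locking with flock (verify that your CPFS version accepts each option before you use it):

```yaml
flexVolume:
  driver: "alicloud/cpfs"
  options:
    server: "cpfs-****-alup.cn-shenzhen.cpfs.nas.aliyuncs.com@tcp:cpfs-***-ws5v.cn-shenzhen.cpfs.nas.aliyuncs.com@tcp"
    fileSystem: "0237ef41"
    subPath: "/k8s"
    options: "ro,flock"
```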
- Run the following command to create a PV:
kubectl create -f pv-cpfs.yaml
- Create a PVC.
- Create a pvc-cpfs file and copy the following content to the file.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-cpfs
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  selector:
    matchLabels:
      alicloud-pvname: pv-cpfs
- Run the following command to create a PVC:
kubectl create -f pvc-cpfs
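You can confirm that the PVC has bound to pv-cpfs by running kubectl get pvc pvc-cpfs. As a sketch, the hypothetical helper below parses one line of kubectl get pvc --no-headers output, whose second column is the status:

```shell
#!/bin/sh
# Hypothetical helper: succeed if a `kubectl get pvc <name> --no-headers`
# output line reports the Bound status in its second column.
pvc_is_bound() {
  [ "$(echo "$1" | awk '{print $2}')" = "Bound" ]
}

# Usage against the cluster:
#   pvc_is_bound "$(kubectl get pvc pvc-cpfs --no-headers)" && echo "pvc-cpfs is bound"
```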
- Create a deployment.
- Create a nas-cpfs file and copy the following content to the file.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nas-cpfs
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - name: pvc-cpfs
          mountPath: "/data"
      volumes:
      - name: pvc-cpfs
        persistentVolumeClaim:
          claimName: pvc-cpfs
- Run the following command to create a deployment:
kubectl create -f nas-cpfs
Result
Run the following command to query the mounting status of the pods that run on the
node:
kubectl get pod
The following output is returned:
NAME READY STATUS RESTARTS AGE
nas-cpfs-79964997f5-kzrtp 1/1 Running 0 45s
Run the following commands to log on to the pod and query the directory that is mounted
to the pod:
kubectl exec -ti nas-cpfs-79964997f5-kzrtp -- sh
mount | grep k8s
The following output is returned:
192.168.1.12@tcp:192.168.1.10@tcp:/0237ef41/k8s on /data type lustre (ro,lazystatfs)
Query the mounted directories on the node.
mount | grep cpfs
The following output is returned:
192.168.1.12@tcp:192.168.1.10@tcp:/0237ef41/k8s on /var/lib/kubelet/pods/c4684de2-26ce-11ea-abbd-00163e12e203/volumes/alicloud~cpfs/pv-cpfs type lustre (ro,lazystatfs)