You can mount CPFS volumes to Kubernetes clusters in Container Service for Kubernetes. This topic describes how to install the Flexvolume plug-in and use CPFS volumes in pods.

Prerequisites

Background information

Cloud Paralleled File System (CPFS) is a parallel file system. CPFS stores data across multiple data nodes in a cluster and allows data to be simultaneously accessed by multiple clients. Therefore, it can provide data storage services with high IOPS, high throughput, and low latency for large-scale, high-performance computing clusters. As a shared storage service, CPFS meets the resource sharing and high performance requirements of scenarios such as big data, AI, and genetic computing. For more information about CPFS, see Introduction to CPFS.

Step 1: Install drivers

To use CPFS in Container Service for Kubernetes, you must install the following two drivers:
  • CPFS container driver: the Flexvolume-cpfs plug-in, which is compatible with all CentOS versions. You install this driver by deploying Flexvolume-cpfs.
  • CPFS client driver: the driver that mounts CPFS file systems on a node. This driver is strongly dependent on the kernel version of the node's operating system. You can install the CPFS client driver in one of the following ways:
    • Manually install the driver. For more information, see Mount a file system.
    • Have the driver installed automatically when you deploy Flexvolume-cpfs. Automatic installation does not support all operating system kernels.
      You can run the uname -a command on a node to query the kernel version of its operating system (see the example after the following note). Currently, automatic installation supports the following kernel versions.
      3.10.0-957.5.1
      3.10.0-957.21.3
      3.10.0-1062.9.1
    Note
    • Currently, Flexvolume can only install the CPFS client driver. It cannot upgrade the client driver.
    • When you upgrade Flexvolume, only Flexvolume-cpfs is upgraded. The CPFS client driver is not upgraded.
    • If you install Flexvolume-cpfs on a node where cpfs-client and lustre are already deployed, the CPFS client driver is not installed again.
    • You can only manually upgrade the CPFS client driver. For more information, see Mount a file system.
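    For example, you can run the following command on a node to print only the kernel release. The output below is illustrative and shows one of the supported CentOS 7 kernels; your output may differ.
      # uname -r
      3.10.0-957.21.3.el7.x86_64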
  1. Deploy the Flexvolume-cpfs DaemonSet by using a YAML template.
    1. Connect to the Kubernetes cluster by using kubectl from a client computer.
    2. Create a flexvolume-cpfs.yaml file.
    3. Copy the following contents to the file.
      apiVersion: apps/v1
      kind: DaemonSet
      metadata:
        name: flexvolume-cpfs
        namespace: kube-system
        labels:
          k8s-volume: flexvolume-cpfs
      spec:
        selector:
          matchLabels:
            name: flexvolume-cpfs
        template:
          metadata:
            labels:
              name: flexvolume-cpfs
          spec:
            hostPID: true
            hostNetwork: true
            tolerations:
            - operator: "Exists"
            priorityClassName: system-node-critical
            affinity:
              nodeAffinity:
                requiredDuringSchedulingIgnoredDuringExecution:
                  nodeSelectorTerms:
                  - matchExpressions:
                    - key: type
                      operator: NotIn
                      values:
                      - virtual-kubelet
            containers:
            - name: flexvolume-cpfs
              image: registry.cn-hangzhou.aliyuncs.com/acs/flexvolume:v1.14.8.96-0d85fd1-aliyun
              imagePullPolicy: Always
              securityContext:
                privileged: true
              env:
              - name: ACS_CPFS
                value: "true"
              - name: FIX_ISSUES
                value: "false"
              livenessProbe:
                exec:
                  command:
                  - sh
                  - -c
                  - ls /acs/flexvolume
                failureThreshold: 8
                initialDelaySeconds: 15
                periodSeconds: 60
                successThreshold: 1
                timeoutSeconds: 15
              volumeMounts:
              - name: usrdir
                mountPath: /host/usr/
              - name: etcdir
                mountPath: /host/etc/
              - name: logdir
                mountPath: /var/log/alicloud/
              - mountPath: /var/lib/kubelet
                mountPropagation: Bidirectional
                name: kubeletdir
            volumes:
            - name: usrdir
              hostPath:
                path: /usr/
            - name: etcdir
              hostPath:
                path: /etc/
            - name: logdir
              hostPath:
                path: /var/log/alicloud/
            - hostPath:
                path: /var/lib/kubelet
                type: Directory
              name: kubeletdir
        updateStrategy:
          rollingUpdate:
            maxUnavailable: 10%
          type: RollingUpdate
    4. Run the following command to deploy the YAML file to the cluster.
      kubectl create -f flexvolume-cpfs.yaml
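      Optionally, you can wait until the DaemonSet finishes rolling out before you check individual pods. This is a minimal check that uses the DaemonSet name from the preceding YAML file; the output is illustrative.
      # kubectl -n kube-system rollout status daemonset/flexvolume-cpfs
      daemon set "flexvolume-cpfs" successfully rolled out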
  2. Check the deployment result.
    1. Run the following command to query the plug-in status.
      # kubectl get pod -nkube-system | grep flex
      flexvolume-97psk                                  1/1     Running   0          27m
      flexvolume-cpfs-dgxfq                             1/1     Running   0          98s
      flexvolume-cpfs-qpbcb                             1/1     Running   0          98s
      flexvolume-cpfs-vlrf9                             1/1     Running   0          98s
      flexvolume-cpfs-wklls                             1/1     Running   0          98s
      flexvolume-cpfs-xtl9b                             1/1     Running   0          98s
      flexvolume-j8zjr                                  1/1     Running   0          27m
      flexvolume-pcg4l                                  1/1     Running   0          27m
      flexvolume-tjxxn                                  1/1     Running   0          27m
      flexvolume-x7ljw                                  1/1     Running   0          27m
      Note The pods whose names are prefixed with flexvolume-cpfs run the Flexvolume-cpfs plug-in. The pods whose names do not contain cpfs run the Flexvolume plug-in, which is used to mount disks, NAS file systems, and OSS buckets. The two plug-ins can be deployed on the same node.
    2. Run the following command to see whether the CPFS client driver is installed.
      # rpm -qa | grep cpfs
      kmod-cpfs-client-2.10.8-202.el7.x86_64
      cpfs-client-2.10.8-202.el7.x86_64
    3. Run the following command to see whether mount.lustre is installed.
      # which mount.lustre
      /usr/sbin/mount.lustre
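      You can also confirm that the Lustre kernel module installed by the client driver is available. This check is optional and the output is illustrative; the reported version depends on the installed cpfs-client packages.
      # modinfo lustre | grep "^version"
      version:        2.10.8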

Step 2: Use a CPFS volume

To use a CPFS volume in Container Service for Kubernetes, you need to create a CPFS file system and mount target. For more information, see Create a file system.
Notice When you create the CPFS mount target, select the VPC where the Kubernetes cluster is deployed.

In the following example, the mount target is cpfs-*-alup.cn-shenzhen.cpfs.nas.aliyuncs.com@tcp:cpfs--ws5v.cn-shenzhen.cpfs.nas.aliyuncs.com@tcp. The ID of the CPFS file system is 0237ef41.

  1. Create a PV.
    1. Create a pv-cpfs.yaml file.
    2. Copy the following contents to the file.
      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: pv-cpfs
        labels:
          alicloud-pvname: pv-cpfs
      spec:
        capacity:
          storage: 5Gi
        accessModes:
          - ReadWriteMany
        flexVolume:
          driver: "alicloud/cpfs"
          options:
            server: "cpfs-****-alup.cn-shenzhen.cpfs.nas.aliyuncs.com@tcp:cpfs-***-ws5v.cn-shenzhen.cpfs.nas.aliyuncs.com@tcp"
            fileSystem: "0237ef41"
            subPath: "/k8s"
            options: "ro"
      Parameter    Description
      server       Set the value to the CPFS mount target.
      fileSystem   Set the value to the ID of the CPFS file system.
      subPath      Set the value to the CPFS subdirectory that you want to mount to the cluster.
      options      Optional. Additional mount options, such as ro.
    3. Run the following command to create a PV:
      kubectl create -f pv-cpfs.yaml
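      To verify that the PV is created, you can run the following command. The output is illustrative; the PV remains in the Available state until a PVC binds to it.
      # kubectl get pv pv-cpfs
      NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
      pv-cpfs   5Gi        RWX            Retain           Available                                   16s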
  2. Create a PVC.
    1. Create a pvc-cpfs.yaml file with the following content.
      kind: PersistentVolumeClaim
      apiVersion: v1
      metadata:
        name: pvc-cpfs
      spec:
        accessModes:
          - ReadWriteMany
        resources:
          requests:
            storage: 5Gi
        selector:
          matchLabels:
            alicloud-pvname: pv-cpfs
    2. Run the kubectl create -f pvc-cpfs.yaml command to create a PVC.
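      To verify that the PVC is bound to the PV, you can run the following command. The output is illustrative.
      # kubectl get pvc pvc-cpfs
      NAME       STATUS   VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
      pvc-cpfs   Bound    pv-cpfs   5Gi        RWX                           8s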
  3. Create a deployment.
    1. Create a nas-cpfs.yaml file with the following content.
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: nas-cpfs
        labels:
          app: nginx
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: nginx
        template:
          metadata:
            labels:
              app: nginx
          spec:
            containers:
            - name: nginx
              image: nginx
              ports:
              - containerPort: 80
              volumeMounts:
                - name: pvc-cpfs
                  mountPath: "/data"
            volumes:
              - name: pvc-cpfs
                persistentVolumeClaim:
                  claimName: pvc-cpfs
    2. Run the kubectl create -f nas-cpfs.yaml command to create a deployment.
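      Optionally, you can wait for the deployment to become ready before you check the mount result. This is a minimal check; the output is illustrative.
      # kubectl rollout status deployment/nas-cpfs
      deployment "nas-cpfs" successfully rolled out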

Result

Run the following commands to verify that the CPFS volume is mounted.
Query the status of the pod.
# kubectl get pod
NAME                        READY   STATUS    RESTARTS   AGE
nas-cpfs-79964997f5-kzrtp   1/1     Running   0          45s

Query the mounted directory on a pod.
# kubectl exec -ti nas-cpfs-79964997f5-kzrtp sh
# mount | grep k8s
192.168.1.12@tcp:192.168.1.10@tcp:/0237ef41/k8s on /data type lustre (ro,lazystatfs)
Query the mounted directory on the node that hosts the pod.
# mount | grep cpfs
192.168.1.12@tcp:192.168.1.10@tcp:/0237ef41/k8s on /var/lib/kubelet/pods/c4684de2-26ce-11ea-abbd-00163e12e203/volumes/alicloud~cpfs/pv-cpfs type lustre (ro,lazystatfs)
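
Because the PV in this example sets the ro mount option, the volume is mounted read-only. The following check, whose output is illustrative, confirms that writes from the pod are rejected.
# kubectl exec -ti nas-cpfs-79964997f5-kzrtp sh
# touch /data/test
touch: cannot touch '/data/test': Read-only file system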