
Container Service for Kubernetes:Use static persistent volumes of CPFS for Lingjun

Last Updated:Jan 22, 2026

CPFS for Lingjun provides high throughput and high input/output operations per second (IOPS). It supports end-to-end RDMA network acceleration and is suitable for intelligent computing scenarios such as AIGC and autonomous driving. ACK lets you attach CPFS for Lingjun file systems to workloads as static persistent volumes (PVs).

Important

CPFS for Lingjun is in invitational preview and is supported only in specific regions and zones. To use this feature, contact your account manager to request access.

How it works

Based on the Container Storage Interface (CSI) component, ACK can attach CPFS for Lingjun file systems to workloads as statically provisioned volumes through PVs and PersistentVolumeClaims (PVCs). CSI automatically selects the optimal mount method based on the type of node to which the pod is scheduled:

  • VSC mount: This mount method is supported only on Lingjun nodes. To enable this feature, submit a ticket to the CPFS and Lingjun product teams to be added to the whitelist.

  • VPC mount: This mount method is supported on non-Lingjun nodes. It is achieved by creating a VPC mount target. All nodes within the same VPC can then mount and access the file system.

Prerequisites

  • You are familiar with the limits of CPFS for Lingjun.

  • The cluster meets the following requirements:

    • Cluster version: 1.26 or later. To upgrade the cluster, see Manually upgrade an ACK cluster.

    • Node operating system: Alibaba Cloud Linux 3.

    • The following storage components are installed and meet the version requirements.

      On the Component Management page of the cluster, you can check component versions and install or upgrade components.
      • CSI components (csi-plugin and csi-provisioner): v1.33.1 or later. For more information about how to upgrade, see Manage CSI components.

      • cnfs-nas-daemon component: 0.1.2 or later.


        cnfs-nas-daemon manages EFC processes. It has high resource consumption and directly affects storage performance. You can adjust its resource configuration on the Component Management page. The recommended strategy is as follows:

        • CPU: The CPU request is related to the total bandwidth of the node. The calculation rule is to allocate 0.5 cores for every 1 Gb/s of bandwidth and add an extra 1 core for metadata management. You can adjust the CPU configuration based on this rule.

          For example, for a node with a 100 Gb/s network interface controller (NIC), the recommended CPU request is 100 × 0.5 + 1 = 51 cores.
        • Memory: CPFS for Lingjun is accessed through Filesystem in Userspace (FUSE). Its data read/write cache and file metadata both consume memory. You can set the memory request to 15% of the total memory of the node.

        After you adjust the configuration, you can dynamically scale resources based on the actual workload.
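The sizing rules above can be sketched as a quick calculation. The NIC bandwidth and node memory values below are example assumptions; substitute the figures for your own nodes.

```shell
# Sketch: compute recommended cnfs-nas-daemon resource requests from the
# rules above (0.5 cores per Gb/s plus 1 core; 15% of node memory).
nic_bandwidth_gbps=100   # example: node NIC bandwidth in Gb/s
node_memory_gib=256      # example: total node memory in GiB

# CPU: 0.5 cores per Gb/s of bandwidth, plus 1 core for metadata management.
cpu_request=$(awk -v bw="$nic_bandwidth_gbps" 'BEGIN { print bw * 0.5 + 1 }')

# Memory: 15% of the node's total memory.
mem_request=$(awk -v m="$node_memory_gib" 'BEGIN { print m * 0.15 }')

echo "Recommended CPU request: ${cpu_request} cores"   # 51 cores for 100 Gb/s
echo "Recommended memory request: ${mem_request} GiB"
```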

        Important
        • How updates take effect: The default update policy for the cnfs-nas-daemon DaemonSet is OnDelete. After you adjust its CPU or memory configuration on the Component Management page, you must manually delete the existing cnfs-nas-daemon pod on the node to trigger a rebuild and apply the new configuration.

          To ensure business stability, we recommend that you perform this operation during off-peak hours.

        • Operation risks: Deleting or restarting the cnfs-nas-daemon pod temporarily interrupts the CPFS mount service on the node.

          • Nodes that do not support hot upgrades for mount targets①: On these nodes, the mount is hard-interrupted, which causes the application pod to run abnormally. You must manually delete the application pod and wait for it to restart to recover.

          • Nodes that support hot upgrades①: The application pod can automatically recover after the cnfs-nas-daemon pod restarts.

          ①: Nodes that meet the following conditions support hot upgrades:

          • The node system kernel is 5.10.134-18 or later.

          • The versions of bmcpfs-csi-controller and bmcpfs-csi-plugin are 1.35.1 or later.

          • The version of cnfs-nas-daemon is 0.1.9-compatible.1 or later.

      • bmcpfs-csi component: This component includes bmcpfs-csi-controller, a control plane component managed by ACK, and bmcpfs-csi-node, a node-side component deployed as a DaemonSet in the cluster.
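Because the update policy is OnDelete, a concrete way to apply new resource settings is to delete the daemon pod on one node at a time. The kube-system namespace and app label below are assumptions; confirm them with `kubectl -n kube-system get ds` in your cluster before running this.

```shell
# Sketch: trigger a rebuild of the cnfs-nas-daemon pod on one node so the new
# CPU/memory settings take effect. Namespace and label are assumptions.
NODE_NAME=cn-example-node-1   # replace with the target node name

kubectl -n kube-system delete pod \
  -l app=cnfs-nas-daemon \
  --field-selector spec.nodeName="${NODE_NAME}"
```

Deleting this pod briefly interrupts the CPFS mount service on that node, so process one node at a time during off-peak hours.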

Notes

  • When you use a VSC mount, the node where the pod runs must be in the same hpn-zone as the CPFS for Lingjun file system instance.

  • During initialization, a Lingjun node must be associated with a CPFS for Lingjun instance. Otherwise, the instance cannot be mounted using CSI.

  • Before you take a faulty Lingjun node offline, you must first drain its pods. Otherwise, cluster metadata becomes inconsistent, and residual pod resources cannot be reclaimed.

  • Mounting different subdirectories from the same CPFS instance using multiple PVs in the same pod is not supported. Because of underlying driver limitations, this configuration causes the pod to fail to mount and start.

    You can create only one PV/PVC for the CPFS instance. Then, you can use the volumeMounts configuration with the subPath field in the pod to mount the required subdirectories separately.

    subPath is implemented based on a lightweight bind mount mechanism and does not introduce additional performance overhead.

Step 1: Create a CPFS file system

  1. Create a CPFS for Lingjun file system and record the file system ID. For more information, see Create a CPFS for Lingjun file system.

  2. (Optional) To mount from a non-Lingjun node, create a VPC mount target in the same VPC as the cluster nodes and record the mount target domain name. The domain name uses the format cpfs-***-vpc-***.<Region>.cpfs.aliyuncs.com.

    If the pod is scheduled to a Lingjun node, it uses a VSC mount by default. In this case, this step is not required.

Step 2: Create a PV and a PVC

  1. Create a PV and a PVC based on the existing CPFS file system.

    1. Modify the following YAML sample as needed and save it as bmcpfs-pv-pvc.yaml.

      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: bmcpfs
      spec:
        accessModes:
        - ReadWriteMany
        capacity:
          storage: 10Ti
        claimRef:
          name: bmcpfs
          namespace: default
        csi:
          driver: bmcpfsplugin.csi.alibabacloud.com
          volumeAttributes:
            # This parameter is required if the pod is scheduled to a non-Lingjun node or if cross-zone automatic VPC switchover is enabled. Otherwise, the mount fails.
            vpcMountTarget: cpfs-***-vpc-***.<Region>.cpfs.aliyuncs.com
            # If the pod is scheduled to a node in a different zone from the current bmcpfs, the vpcMountTarget mount target is automatically used to access the CPFS storage.
            mountpointAutoSwitch: "true"
          # Replace volumeHandle with the ID of the CPFS for Lingjun file system.
          volumeHandle: bmcpfs-*****
        mountOptions: []
      
      ---
      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: bmcpfs
        namespace: default
      spec:
        accessModes:
        - ReadWriteMany
        resources:
          requests:
            storage: 10Ti
        volumeMode: Filesystem
        volumeName: bmcpfs
      • PV parameters

        • accessModes: The access mode of the PV.

        • capacity.storage: The declared storage capacity of the volume. This is only a declaration and does not affect the actual capacity.

        • csi.driver: The driver type. For CPFS for Lingjun volumes, this must be set to bmcpfsplugin.csi.alibabacloud.com.

        • csi.volumeAttributes.vpcMountTarget: The domain name of the VPC mount target for CPFS. If this parameter is empty, mounting fails on non-Lingjun nodes. If the pod is scheduled to a Lingjun node, you do not need to set this parameter.

        • csi.volumeAttributes.mountpointAutoSwitch: Specifies whether bmcpfs is allowed to automatically switch between the VSC mount target (created and obtained by default) and the VPC mount target (must be specified). Use this parameter together with csi.volumeAttributes.vpcMountTarget.

        • csi.volumeHandle: The ID of the CPFS file system.

        • mountOptions: The mount options.

      • PVC parameters

        • accessModes: The access mode that the PVC requests for the PV. It must match the PV.

        • resources.requests.storage: The storage capacity allocated to the pod. It cannot exceed the PV capacity.

        • volumeMode: The mount mode. Set this to Filesystem.

        • volumeName: The name of the PV to which the PVC is bound.

    2. Create the PV and the PVC.

      kubectl apply -f bmcpfs-pv-pvc.yaml
  2. Confirm that the PVC is bound to the PV.

    kubectl get pvc bmcpfs

    Expected output:

    NAME     STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
    bmcpfs   Bound    bmcpfs   10Ti       RWX                           <unset>                 51s

    The STATUS is Bound. This indicates that the PV and the PVC are successfully bound.
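If STATUS stays Pending instead of Bound, the PV and PVC specs usually disagree. The following are standard Kubernetes inspection commands, not CPFS-specific:

```shell
# Inspect binding events and spec fields (accessModes, capacity, claimRef)
# when the PVC does not reach the Bound state.
kubectl describe pvc bmcpfs
kubectl describe pv bmcpfs
```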

Step 3: Create an application and mount CPFS

Scenario 1: Mount the entire CPFS file system

In this scenario, the entire CPFS file system is mounted into the container.

  1. Use the following YAML content to create a file named cpfs-test.yaml. The Deployment mounts the static PV of CPFS for Lingjun through the PVC bmcpfs.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: cpfs-test
      labels:
        app: cpfs-test
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: cpfs-test
      template:
        metadata:
          labels:
            app: cpfs-test
        spec:
          containers:
          - name: nginx
            image: anolis-registry.cn-zhangjiakou.cr.aliyuncs.com/openanolis/nginx:1.14.1-8.6
            ports:
            - containerPort: 80
            volumeMounts:
              - name: pvc-cpfs
                mountPath: /data
          volumes:
            - name: pvc-cpfs
              persistentVolumeClaim:
                claimName: bmcpfs
  2. Create the Deployment.

    kubectl create -f cpfs-test.yaml
  3. Check the pod deployment status.

    kubectl get pod -l app=cpfs-test

    Expected output:

    NAME                         READY   STATUS    RESTARTS   AGE
    cpfs-test-76b77d64b5-2hw96   1/1     Running   0          42s
    cpfs-test-76b77d64b5-dnwdx   1/1     Running   0          42s
  4. In either pod, verify that the static PV of CPFS for Lingjun is mounted. Replace <pod-name> with a pod name from the previous output.

    kubectl exec -it <pod-name> -- mount | grep /data

    The following output indicates that the static PV of CPFS for Lingjun is mounted.

    bindroot-f0a5c-******:cpfs-*******-vpc-****.cn-shanghai.cpfs.aliyuncs.com:/ on /data type fuse.aliyun-alinas-efc (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=1048576)

Scenario 2: Mount a subdirectory of the CPFS file system

In a shared storage scenario, you can use volumeMounts.subPath to achieve data isolation for multiple tenants or tasks. This allows multiple application pods to share the same CPFS volume while each has its own independent directory.

  1. Create a pod.yaml file with the following content. The pod contains two containers that mount different subdirectories of the same PVC (bmcpfs) using subPath.

    When the pod is mounted, if the subdirectory specified by subPath (for example, workspace/alpha) does not exist in the CPFS file system, it is automatically created.
    apiVersion: v1
    kind: Pod
    metadata:
      name: cpfs-subpath-demo-pod
    spec:
      containers:
        - name: task-alpha-container
          image: busybox:1.35
          command: ["/bin/sh", "-c", "sleep 3600"]
          volumeMounts:
            - name: cpfs-storage
              mountPath: /data/workspace # Mount path inside the container
              subPath: workspace/alpha   # Mount the workspace/alpha subdirectory within the volume, not the entire volume
    
        - name: task-beta-container
          image: busybox:1.35
          command: ["/bin/sh", "-c", "sleep 3600"]
          volumeMounts:
            - name: cpfs-storage
              mountPath: /data/workspace # The mount path inside the container can be the same
              subPath: workspace/beta    # Mount the workspace/beta subdirectory within the volume, not the entire volume
      volumes:
        - name: cpfs-storage
          persistentVolumeClaim:
            claimName: bmcpfs # Reference the previously created PVC
  2. Deploy the pod.

    kubectl apply -f pod.yaml
  3. Verify the mount and write operations for the task-alpha container.

    1. Connect to the task-alpha container.

      kubectl exec -it cpfs-subpath-demo-pod -c task-alpha-container -- /bin/sh
    2. View the mounted file systems to confirm that the CPFS volume exists.

      df -h

      The following output indicates that the shared directory (/share) is mounted to the /data/workspace path inside the container.

      Filesystem                Size      Used Available Use% Mounted on
      ...
      192.XX.XX.0:/share          10.0T     1.0G     10.0T   0% /data/workspace
      ...
    3. Check the parent directory structure of the mount point.

      ls -l /data/

      The following output indicates that a subdirectory named workspace exists in the /data directory.

      total 4
      drwxr-xr-x    2 root     root          4096 Aug 15 10:00 workspace
    4. Create a file in the mounted directory to verify write permissions.

      echo "hello from alpha" > /data/workspace/alpha.log
      exit
  4. Verify the mount and data isolation for the task-beta container.

    1. Connect to the task-beta container.

      kubectl exec -it cpfs-subpath-demo-pod -c task-beta-container -- /bin/sh
    2. Create a file at the mount point /data/workspace in the container.

      echo "hello from beta" > /data/workspace/beta.log
    3. Check the files in the /data/workspace/ directory.

      ls -l /data/workspace/

      The following output indicates that beta.log was written and alpha.log does not exist. This means the data between the two containers is isolated.

      total 4
      -rw-r--r--    1 root     root            16 Aug 15 10:05 beta.log
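After verification, you can remove the demo resources created in this topic. For statically provisioned PVs, the reclaim policy defaults to Retain, so deleting the PV and PVC does not delete the data stored in the CPFS file system.

```shell
# Clean up the demo workloads and the static volume objects.
kubectl delete pod cpfs-subpath-demo-pod
kubectl delete deployment cpfs-test
kubectl delete pvc bmcpfs -n default
kubectl delete pv bmcpfs
```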