Cloud Parallel File Storage: Mount a CPFS for Lingjun file system in ACS

Last Updated: Jan 13, 2026

Learn how to use a statically provisioned CPFS for Lingjun file system in Container Compute Service (ACS) and verify its shared and persistent storage capabilities.

Background information

ACS uses the Kubernetes Container Storage Interface (CSI) to integrate with Alibaba Cloud storage like Cloud Disk, NAS, and OSS, and to support native Kubernetes volumes such as EmptyDir and ConfigMap.

For high-throughput, low-latency scenarios like AIGC and autonomous driving, CPFS for Lingjun is the recommended Persistent Volume (PV) solution. It is an advanced, serverless storage system engineered for AI workloads, achieving high performance through a distributed parallel architecture, a proprietary RoCE RDMA network protocol, and multi-level caching.

Prerequisites

  • You have created a CPFS for Lingjun file system. For details, see Create a file system.

  • Verify the VPC mount target requirement for your specific scenario:

    • For mounting on CPU pods (via VPC), a VPC mount target must be created for the file system.

    • For mounting on GPU pods (via RDMA), a VPC mount target is not required.

  • The managed-csiprovisioner component is installed in the ACS cluster.

    Note

    Go to the ACS cluster management page in the ACS console. In the left-side navigation pane of the cluster management page, choose Operations > Add-ons. On the Storage tab, you can check whether managed-csiprovisioner is installed.

Usage notes

  • CPFS provides shared storage. A single CPFS volume can be mounted to multiple pods.

  • CPFS for Lingjun supports two network access methods. Choose one based on your pod type:

    • VPC network (standard): This method is highly compatible and supports mounting to any type of CPU or GPU pod in ACS.

    • RDMA network (high-performance): This method provides maximum storage throughput and low latency. It is only supported for mounting to specific GPU models in ACS pods. For a list of GPU models that support the RDMA protocol, see Supported GPU cards in ACS.

  • When you mount a CPFS for Lingjun file system for use by a Lingjun GPU, make sure that the file system's zone and cluster ID match those of the Lingjun GPU.

Mount a statically provisioned CPFS volume

Step 1: Create a PV and a PVC

kubectl

  1. Save this YAML content as cpfs-pv-pvc.yaml.

    Choose the YAML that corresponds to the compute type of the pod you are mounting to.

    Mount to a GPU pod

    Important

    This method is only supported for mounting to pods with specific GPU models. For details, see Supported GPU models in ACS.

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: cpfs-test
      labels:
        alicloud-pvname: cpfs-test
    spec:
      accessModes:
      - ReadWriteMany
      capacity:
        storage: 10Ti
      csi:
        driver: povplugin.csi.alibabacloud.com
        volumeAttributes:
          filesystemId: bmcpfs-*****
          path: /
        volumeHandle: bmcpfs-*****
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: cpfs-test
    spec:
      accessModes:
      - ReadWriteMany
      selector:
        matchLabels:
          alicloud-pvname: cpfs-test
      resources:
        requests:
          storage: 10Ti
    • PV parameters

      • labels: A label for the PV. The PVC uses this label in its selector to bind to the PV.

      • accessModes: The access mode of the PV.

      • capacity.storage: The capacity of the volume.

      • csi.driver: The driver type. Set it to povplugin.csi.alibabacloud.com.

      • csi.volumeAttributes: The attributes of the CPFS volume.

        • filesystemId: The ID of the CPFS for Lingjun file system.

        • path: The subdirectory to mount. The default is /, which mounts the root directory of the CPFS file system. You can specify a subdirectory, such as /dir. If the specified subdirectory does not exist, it is created automatically during mounting.

      • csi.volumeHandle: The ID of the CPFS for Lingjun file system.

    • PVC parameters

      • accessModes: The access mode requested by the PVC.

      • selector: Binds the PVC to a PV with a matching label.

      • resources.requests.storage: The storage capacity allocated to the pod. This value must not exceed the PV capacity.

    Mount to a CPU pod

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: cpfs-test
      labels:
        alicloud-pvname: cpfs-test
    spec:
      accessModes:
      - ReadWriteMany
      capacity:
        storage: 10Ti
      csi:
        driver: nasplugin.csi.alibabacloud.com
        volumeAttributes:
          mountProtocol: efc
          server: cpfs-***-vpc-***.cn-wulanchabu.cpfs.aliyuncs.com
          path: /
        volumeHandle: bmcpfs-*****
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: cpfs-test
    spec:
      accessModes:
      - ReadWriteMany
      selector:
        matchLabels:
          alicloud-pvname: cpfs-test
      resources:
        requests:
          storage: 10Ti
    • PV parameters

      • labels: A label for the PV. The PVC uses this label in its selector to bind to the PV.

      • accessModes: The access mode of the PV.

      • capacity.storage: The capacity of the volume.

      • csi.driver: The driver type. Set it to nasplugin.csi.alibabacloud.com.

      • csi.volumeAttributes: The attributes of the CPFS volume.

        • mountProtocol: The mount protocol. Set it to efc.

        • server: The domain name of the VPC mount target for the CPFS file system.

        • path: The subdirectory to mount. The default is /, which mounts the root directory of the CPFS file system. You can specify a subdirectory, such as /dir.

      • csi.volumeHandle: The ID of the CPFS for Lingjun file system.

    • PVC parameters

      • accessModes: The access mode requested by the PVC.

      • selector: Binds the PVC to a PV with a matching label.

      • resources.requests.storage: The storage capacity allocated to the pod. This value must not exceed the PV capacity.

  2. Create the PV and PVC.

    kubectl create -f cpfs-pv-pvc.yaml
  3. Verify that the PVC is bound to the PV.

    kubectl get pvc cpfs-test

    Example output:

    NAME        STATUS   VOLUME           CAPACITY   ACCESS MODES   STORAGECLASS    VOLUMEATTRIBUTESCLASS   AGE
    cpfs-test   Bound    cpfs-test        10Ti       RWX            <unset>         <unset>                 10s
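Static binding succeeds here because the PVC's selector matches the PV's label and the requested storage does not exceed the PV capacity. The capacity rule can be sketched as follows (a minimal sketch that handles only the binary suffixes Ki through Pi; decimal suffixes are omitted for brevity):

```python
# Minimal sketch: parse Kubernetes binary-suffix storage quantities and
# check the static-binding rule that resources.requests.storage must not
# exceed the PV's capacity.storage.
BINARY_UNITS = {"Ki": 2**10, "Mi": 2**20, "Gi": 2**30, "Ti": 2**40, "Pi": 2**50}

def parse_quantity(q: str) -> int:
    """Convert a quantity string such as '10Ti' to a byte count."""
    for suffix, factor in BINARY_UNITS.items():
        if q.endswith(suffix):
            return int(float(q[:-2]) * factor)
    return int(q)  # plain bytes

def request_fits(request: str, capacity: str) -> bool:
    return parse_quantity(request) <= parse_quantity(capacity)

print(request_fits("10Ti", "10Ti"))  # True: the PVC can bind to the PV
print(request_fits("11Ti", "10Ti"))  # False: the request exceeds capacity
```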

Console

  1. Log on to the ACS console.

  2. On the Clusters page, click the name of the cluster to go to the cluster management page.

  3. In the left-side navigation pane of the cluster management page, choose Volumes > Persistent Volume Claims.

  4. On the Persistent Volume Claims page, click Create.

  5. In the dialog box that appears, configure the parameters and then click Create.

    The following parameters create a PV and a PVC at the same time. You can also create them separately.

    Note

    The ACS console wizard currently does not support creating a CPFS for Lingjun PVC for use with CPU pods. Use the kubectl method for this scenario.

    • PVC Type: Select CPFS. Example: CPFS.

    • Name: Enter a name for the PVC. Follow the format requirements displayed on the screen. Example: cpfs-test.

    • Allocation Mode: Select Existing Volume or Create Volume. Example: Create Volume.

    • CPFS Type: Select CPFS for Lingjun. Example: CPFS for Lingjun.

    • Access Mode: Supported modes are ReadWriteMany and ReadWriteOnce. Example: ReadWriteMany.

    • File System ID: Enter the ID of the CPFS for Lingjun file system that you want to mount. Example: bmcpfs-0115******13q5.

  6. Check the created PVC and PV.

    On the Persistent Volume Claims and Persistent Volumes pages, find the newly created PVC and PV and verify that they are bound.

Step 2: Create an application and mount the CPFS volume

kubectl

  1. Create a file named cpfs-test.yaml using the following YAML content.

    GPU-accelerated application

    The following YAML example creates a Deployment with two pods. Both pods request GPU compute through the alibabacloud.com/compute-class: gpu label and storage through a PVC named cpfs-test, mounted at /data in each pod.

    Note

    For information about specific GPU models, see Specify GPU models and driver versions for ACS GPU-accelerated pods.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: cpfs-test
      labels:
        app: cpfs-test
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: cpfs-test
      template:
        metadata:
          labels:
            app: cpfs-test
            # Set the compute class to GPU
            alibabacloud.com/compute-class: gpu
            # Specify the GPU model as needed, for example, T4
            alibabacloud.com/gpu-model-series: T4
            alibabacloud.com/compute-qos: default
        spec:
          containers:
          - name: nginx
            image: registry.cn-hangzhou.aliyuncs.com/acs-sample/nginx:latest
            ports:
            - containerPort: 80
            volumeMounts:
              - name: pvc-cpfs
                mountPath: /data
          volumes:
            - name: pvc-cpfs
              persistentVolumeClaim:
                claimName: cpfs-test
    CPU-accelerated application

    The following YAML example creates a Deployment that consists of two pods. It requests storage resources through a PVC named cpfs-test. The mount path for both is /data.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: cpfs-test
      labels:
        app: cpfs-test
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: cpfs-test
      template:
        metadata:
          labels:
            app: cpfs-test
        spec:
          containers:
          - name: nginx
            image: registry.cn-hangzhou.aliyuncs.com/acs-sample/nginx:latest
            ports:
            - containerPort: 80
            volumeMounts:
              - name: pvc-cpfs
                mountPath: /data
          volumes:
            - name: pvc-cpfs
              persistentVolumeClaim:
                claimName: cpfs-test
  2. Create the Deployment and mount the CPFS volume.

    kubectl create -f cpfs-test.yaml
  3. View the deployment status of the pods in the Deployment.

    kubectl get pod | grep cpfs-test

    Expected output shows that two pods have been created:

    cpfs-test-****-***a   1/1     Running   0          45s
    cpfs-test-****-***b   1/1     Running   0          45s
  4. View the mount path.

    The following command lists the contents of the mounted directory of the CPFS for Lingjun file system. The directory is initially empty.

    kubectl exec cpfs-test-****-***a -- ls /data
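Before applying the YAML, you can sanity-check that each volumeMount in the pod template references a declared volume backed by the expected PVC. This is a minimal sketch with a hypothetical helper; the pod spec dict mirrors the Deployment examples above:

```python
# Minimal sketch: verify that every volumeMount in a pod spec refers to a
# declared volume, and that the volume's PVC claimName is the expected one.
def check_pod_volumes(pod_spec: dict, expected_claim: str) -> bool:
    volumes = {v["name"]: v for v in pod_spec.get("volumes", [])}
    for container in pod_spec["containers"]:
        for mount in container.get("volumeMounts", []):
            vol = volumes.get(mount["name"])
            if vol is None:
                return False  # mount refers to an undeclared volume
            claim = vol.get("persistentVolumeClaim", {}).get("claimName")
            if claim != expected_claim:
                return False  # volume is not backed by the expected PVC
    return True

# Mirrors the pod template in cpfs-test.yaml above.
pod_spec = {
    "containers": [{
        "name": "nginx",
        "volumeMounts": [{"name": "pvc-cpfs", "mountPath": "/data"}],
    }],
    "volumes": [{
        "name": "pvc-cpfs",
        "persistentVolumeClaim": {"claimName": "cpfs-test"},
    }],
}
print(check_pod_volumes(pod_spec, "cpfs-test"))  # True
```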

Console

  1. In the left-side navigation pane of the cluster management page, choose Workloads > Deployments.

  2. On the Deployments page, click Create from Image.

  3. Configure the Deployment parameters and click Create.

    Take note of the following parameters. Keep the default settings for other parameters. For more information, see Create a stateless application using a Deployment.

    GPU-accelerated application

    • Basic Information

      • Name: The name of the Deployment. Enter a custom name. The name must follow the format requirements displayed on the interface. Example: cpfs-test.

      • Replicas: Configure the number of replicas for the Deployment. Example: 2.

      • Type: Select the compute type for the pod. Example: GPU, T4.

        Note

        For information about specific GPU models, see Specify GPU models and driver versions for ACS GPU-accelerated pods.

    • Container

      • Image Name: Enter the image address used to deploy the application. Example: registry.cn-hangzhou.aliyuncs.com/acs-sample/nginx:latest.

      • Required Resources: Set the required GPU, vCPU, and memory resources. Example: GPU: 1, CPU: 2 vCPUs, Memory: 2 GiB.

      • Volume: Click Add PVC, and then configure the parameters.

        • Mount Source: Select the PVC you created earlier. Example: pvc-cpfs.

        • Container Path: Enter the container path to which the CPFS file system will be mounted. Example: /data.

    CPU-accelerated application

    • Basic Information

      • Application Name: The name of the Deployment. Enter a custom name. The name must follow the format requirements displayed on the interface. Example: cpfs-test.

      • Replicas: Configure the number of replicas for the Deployment. Example: 2.

      • Type: Select the compute type for the pod. Example: CPU, general-purpose.

    • Container

      • Image Name: Enter the image address used to deploy the application. Example: registry.cn-hangzhou.aliyuncs.com/acs-sample/nginx:latest.

      • Required Resources: Set the required vCPU and memory resources. Example: CPU: 0.25 vCPUs, Memory: 0.5 GiB.

      • Volume: Click Add PVC, and then configure the parameters.

        • Mount Source: Select the PVC you created earlier. Example: pvc-cpfs.

        • Container Path: Enter the container path to which the CPFS file system will be mounted. Example: /data.

  4. View the application deployment status.

    1. On the Deployments page, click the application name.

    2. On the Pods tab, confirm that the pods are running normally (status is Running).

Verify shared storage and persistent storage

The Deployment created in the preceding example contains two pods, both of which mount the same CPFS file system. You can verify its shared and persistent storage in the following ways:

  • Create a file in one pod, and then view the file in the other pod to verify shared storage.

  • Recreate the Deployment, and then check whether the data in the file system still exists in the newly created pods to verify persistent storage.

  1. View the pod information.

    kubectl get pod | grep cpfs-test

    Expected output:

    cpfs-test-****-***a   1/1     Running   0          45s
    cpfs-test-****-***b   1/1     Running   0          45s
  2. Verify shared storage.

    1. Create a file in one pod.

      Using the pod named cpfs-test-****-***a as an example:

      kubectl exec cpfs-test-****-***a -- touch /data/test.txt
    2. View the file in the other pod.

      Using the pod named cpfs-test-****-***b as an example:

      kubectl exec cpfs-test-****-***b -- ls /data

      The expected output is as follows, showing that the newly created file test.txt is shared:

      test.txt
  3. Verify persistent storage.

    1. Recreate the Deployment.

      kubectl rollout restart deploy cpfs-test
    2. View the pods and wait for the new pods to be created successfully.

      kubectl get pod | grep cpfs-test

      Expected output:

      cpfs-test-****-***c   1/1     Running   0          78s
      cpfs-test-****-***d   1/1     Running   0          52s
    3. Check whether the data in the file system still exists in the new pod.

      Using the pod named cpfs-test-****-***c as an example:

      kubectl exec cpfs-test-****-***c -- ls /data

      The expected output is as follows, showing that the data in the CPFS file system still exists and can be retrieved from the mount directory of the new pod:

      test.txt