
Container Compute Service: Mount a statically provisioned CPFS volume

Last Updated: Nov 26, 2025

Cloud Parallel File Storage (CPFS) is a fully managed, scalable parallel file system provided by Alibaba Cloud for high-performance computing scenarios. CPFS allows concurrent access from thousands of servers and delivers throughput of tens of GB/s and millions of IOPS with sub-millisecond latency. This topic describes how to mount a statically provisioned CPFS volume to an application and how to verify that the volume can be used to share and persist data.

Introduction

CPFS for LINGJUN (invitational preview) is suitable for intelligent computing scenarios, such as AIGC and autonomous driving. Before mounting, note the following items:

  • CPFS for LINGJUN supports end-to-end RDMA networks and is currently in invitational preview. Only certain regions and zones support CPFS for LINGJUN.

  • When you access CPFS for LINGJUN over an RDMA network, the pod must reside in the same hpn-zone as the CPFS for LINGJUN file system.

  • CPFS is a shared storage file system. You can mount a CPFS volume to multiple pods.

  • You can mount a CPFS volume to any ACS pod that uses CPU compute. However, only ACS pods that use specific GPU models support CPFS volumes. For more information, submit a ticket.

Prerequisites

The managed-csiprovisioner component is installed in the ACS cluster.

Note

Go to the ACS cluster management page in the ACS console. In the left-side navigation pane of the cluster management page, choose Operations > Add-ons. On the Storage tab, you can check whether managed-csiprovisioner is installed.
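Alternatively, if you have kubectl access to the cluster, the following command is a quick sketch for checking that the CSI drivers used in this topic are registered. It assumes that the managed component registers standard CSIDriver objects named povplugin.csi.alibabacloud.com and nasplugin.csi.alibabacloud.com; the console check described above remains the authoritative method.

    kubectl get csidriver | grep -E 'povplugin|nasplugin'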

Create a CPFS file system

CPFS for LINGJUN

  1. Create a CPFS for LINGJUN file system.

    Record the file system ID.

  2. (Optional) Create a VPC mount target.

    • For pods that do not support the RDMA protocol (such as CPU pods and some GPU pods), you need to create a VPC mount target to access CPFS through the VPC network.

    Use the VPC and vSwitch of the ACS cluster to create a VPC mount target and generate a mount address. Record the mount target domain name in the format of cpfs-***-vpc-***.<Region>.cpfs.aliyuncs.com.

    (Figure: CPFS VPC mount target)

Mount CPFS volumes

Step 1: Create a PV and a PVC

kubectl

  1. Save the following YAML content as cpfs-pv-pvc.yaml.

    Select the YAML template that matches the compute type of the pod to which you want to mount the volume.

    CPFS for LINGJUN (RDMA network)

    Important

    Only pods with specific GPU models are supported. For information about GPU models that support the RDMA protocol, see GPU models supported by ACS.

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: cpfs-test
      labels:
        alicloud-pvname: cpfs-test
    spec:
      accessModes:
      - ReadWriteMany
      capacity:
        storage: 10Ti
      csi:
        driver: povplugin.csi.alibabacloud.com
        volumeAttributes:
          filesystemId: bmcpfs-*****
          path: /
        volumeHandle: bmcpfs-*****
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: cpfs-test
    spec:
      accessModes:
      - ReadWriteMany
      selector:
        matchLabels:
          alicloud-pvname: cpfs-test
      resources:
        requests:
          storage: 10Ti
    • PV parameters

      • labels: The labels added to the PV. The PVC uses a selector that matches these labels to bind to the PV.

      • accessModes: The access mode of the PV.

      • capacity.storage: The declared volume capacity.

      • csi.driver: The driver type. Set the value to povplugin.csi.alibabacloud.com.

      • csi.volumeAttributes: The attributes of the CPFS volume.

        • filesystemId: The ID of the CPFS for LINGJUN file system.

        • path: The directory to mount. The default value is /, which indicates the root directory of the CPFS file system. You can also specify a subdirectory, such as /dir. If the subdirectory does not exist, it is automatically created when the volume is mounted. For an example, see the fragment after these parameter descriptions.

      • csi.volumeHandle: The ID of the CPFS for LINGJUN file system.

    • PVC parameters

      • accessModes: The access mode that the PVC requests for the PV.

      • selector: The selector that matches the labels of the PV to bind the PVC to the PV.

      • resources.requests.storage: The storage capacity allocated to the pod. The value cannot exceed the capacity of the PV.
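    For reference, the following fragment is a minimal, hypothetical variation of the preceding PV spec that mounts the /dir subdirectory instead of the root directory. Replace /dir and the bmcpfs-***** placeholders with your own values.

    # Fragment of the PV spec, modified only in the path attribute
    spec:
      csi:
        driver: povplugin.csi.alibabacloud.com
        volumeAttributes:
          filesystemId: bmcpfs-*****
          # /dir is automatically created during the mount if it does not exist
          path: /dir
        volumeHandle: bmcpfs-*****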

    CPFS for LINGJUN (VPC network)

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: cpfs-test
      labels:
        alicloud-pvname: cpfs-test
    spec:
      accessModes:
      - ReadWriteMany
      capacity:
        storage: 10Ti
      csi:
        driver: nasplugin.csi.alibabacloud.com
        volumeAttributes:
          mountProtocol: efc
          server: cpfs-***-vpc-***.cn-wulanchabu.cpfs.aliyuncs.com
          path: /
        volumeHandle: bmcpfs-*****
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: cpfs-test
    spec:
      accessModes:
      - ReadWriteMany
      selector:
        matchLabels:
          alicloud-pvname: cpfs-test
      resources:
        requests:
          storage: 10Ti
    • PV parameters

      • labels: The labels added to the PV. The PVC uses a selector that matches these labels to bind to the PV.

      • accessModes: The access mode of the PV.

      • capacity.storage: The declared volume capacity.

      • csi.driver: The driver type. Set the value to nasplugin.csi.alibabacloud.com.

      • csi.volumeAttributes: The attributes of the CPFS volume.

        • mountProtocol: The mount protocol. Set the value to efc.

        • server: The domain name of the VPC mount target of the CPFS file system.

        • path: The directory to mount. The default value is /, which indicates the root directory of the CPFS file system. You can also specify a subdirectory, such as /dir.

      • csi.volumeHandle: The ID of the CPFS for LINGJUN file system.

    • PVC parameters

      • accessModes: The access mode that the PVC requests for the PV.

      • selector: The selector that matches the labels of the PV to bind the PVC to the PV.

      • resources.requests.storage: The storage capacity allocated to the pod. The value cannot exceed the capacity of the PV.

  2. Create a PV and a PVC.

    kubectl create -f cpfs-pv-pvc.yaml
  3. Confirm that the PVC is bound to the PV.

    kubectl get pvc cpfs-test

    Expected output:

    NAME        STATUS   VOLUME           CAPACITY   ACCESS MODES   STORAGECLASS    VOLUMEATTRIBUTESCLASS   AGE
    cpfs-test   Bound    cpfs-test        10Ti       RWX            <unset>         <unset>                 10s
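    Optionally, describe the PV to confirm the CSI driver and volume attributes, such as the file system ID and the path, before you mount the volume. The exact output format depends on your kubectl version.

    kubectl describe pv cpfs-test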

Console

  1. Log on to the ACS console.

  2. On the Clusters page, click the name of the cluster that you want to manage to go to the cluster management page.

  3. In the left-side navigation pane of the cluster management page, choose Volumes > Persistent Volume Claims.

  4. On the Persistent Volume Claims page, click Create.

  5. In the dialog box that appears, configure the parameters and click Create.

    The following parameters are configured to create the PVC and PV at the same time. You can also create the PV first and then create the PVC.

    Note

    The console does not currently support mounting CPFS for LINGJUN to CPU applications through a VPC mount target.

    • PVC Type: Select CPFS. Example: CPFS.

    • Name: Enter a custom name for the PVC. The name must follow the format requirements displayed on the interface. Example: cpfs-test.

    • Allocation Mode: Select Existing Volumes or Create Volume as needed. Example: Create Volume.

    • CPFS Type: Select CPFS for LINGJUN. Example: CPFS for LINGJUN.

    • Access Mode: ReadWriteMany and ReadWriteOnce are supported. Example: ReadWriteMany.

    • File System ID: Specify the ID of the CPFS for LINGJUN file system to mount. Example: bmcpfs-0115******13q5.

  6. View the created PVC and PV.

    On the Persistent Volume Claims page and Persistent Volumes page, you can see the newly created PVC and PV, and confirm that they are bound to each other.

Step 2: Create an application and mount the CPFS volume

kubectl

  1. Create a file named cpfs-test.yaml using the following YAML content.

    GPU-accelerated application

    The following YAML example creates a Deployment that consists of two pods. Both pods request GPU compute power through the alibabacloud.com/compute-class: gpu label, request storage resources through a PVC named cpfs-test, and mount the volume at /data.

    Note

    For information about specific GPU models, see Specify GPU models and driver versions for ACS GPU-accelerated pods.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: cpfs-test
      labels:
        app: cpfs-test
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: cpfs-test
      template:
        metadata:
          labels:
            app: cpfs-test
            # Set the compute class to GPU
            alibabacloud.com/compute-class: gpu
            # Specify the required GPU model, for example, T4
            alibabacloud.com/gpu-model-series: T4
            alibabacloud.com/compute-qos: default
        spec:
          containers:
          - name: nginx
            image: registry.cn-hangzhou.aliyuncs.com/acs-sample/nginx:latest
            ports:
            - containerPort: 80
            volumeMounts:
              - name: pvc-cpfs
                mountPath: /data
          volumes:
            - name: pvc-cpfs
              persistentVolumeClaim:
                claimName: cpfs-test

    CPU-accelerated application

    The following YAML example creates a Deployment that consists of two pods. Both pods request storage resources through a PVC named cpfs-test and mount the volume at /data.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: cpfs-test
      labels:
        app: cpfs-test
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: cpfs-test
      template:
        metadata:
          labels:
            app: cpfs-test
        spec:
          containers:
          - name: nginx
            image: registry.cn-hangzhou.aliyuncs.com/acs-sample/nginx:latest
            ports:
            - containerPort: 80
            volumeMounts:
              - name: pvc-cpfs
                mountPath: /data
          volumes:
            - name: pvc-cpfs
              persistentVolumeClaim:
                claimName: cpfs-test
  2. Create the Deployment and mount the CPFS volume.

    kubectl create -f cpfs-test.yaml
  3. View the deployment status of the pods in the Deployment.

    kubectl get pod | grep cpfs-test

    Expected output shows that two pods have been created:

    cpfs-test-****-***a   1/1     Running   0          45s
    cpfs-test-****-***b   1/1     Running   0          45s
  4. View the mount path.

    The following example command lists the contents of the mounted directory of the CPFS for LINGJUN file system. The directory is empty by default, so the command returns no output.

    kubectl exec cpfs-test-****-***a -- ls /data
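    You can also check the mount inside a pod to confirm that /data is served by the CPFS file system rather than the container's local disk. This is a sketch that assumes the container image provides the df utility; the file system source shown in the output depends on the mount protocol.

    kubectl exec cpfs-test-****-***a -- df -h /data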

Console

  1. In the left-side navigation pane of the cluster management page, choose Workloads > Deployments.

  2. On the Deployments page, click Create from Image.

  3. Configure the Deployment parameters and click Create.

    Take note of the following parameters. Keep the default settings for other parameters. For more information, see Create a stateless application using a Deployment.

    GPU-accelerated application

    • Basic Information

      • Name: The name of the Deployment. Enter a custom name. The name must follow the format requirements displayed on the interface. Example: cpfs-test.

      • Replicas: The number of replicas for the Deployment. Example: 2.

      • Type: The compute type of the pod. Example: GPU, T4.

        Note: For information about specific GPU models, see Specify GPU models and driver versions for ACS GPU-accelerated pods.

    • Container

      • Image Name: The address of the image used to deploy the application. Example: registry.cn-hangzhou.aliyuncs.com/acs-sample/nginx:latest.

      • Required Resources: The required GPU, vCPU, and memory resources. Example: 1 GPU, 2 vCPUs, and 2 GiB of memory.

      • Volume: Click Add PVC, and then configure the parameters.

        • Mount Source: Select the PVC that you created earlier. Example: pvc-cpfs.

        • Container Path: Enter the container path to which the CPFS file system is mounted. Example: /data.

    CPU-accelerated application

    • Basic Information

      • Application Name: The name of the Deployment. Enter a custom name. The name must follow the format requirements displayed on the interface. Example: cpfs-test.

      • Replicas: The number of replicas for the Deployment. Example: 2.

      • Type: The compute type of the pod. Example: CPU, general-purpose.

    • Container

      • Image Name: The address of the image used to deploy the application. Example: registry.cn-hangzhou.aliyuncs.com/acs-sample/nginx:latest.

      • Required Resources: The required vCPU and memory resources. Example: 0.25 vCPUs and 0.5 GiB of memory.

      • Volume: Click Add PVC, and then configure the parameters.

        • Mount Source: Select the PVC that you created earlier. Example: pvc-cpfs.

        • Container Path: Enter the container path to which the CPFS file system is mounted. Example: /data.

  4. View the application deployment status.

    1. On the Deployments page, click the application name.

    2. On the Pods tab, confirm that the pods are running normally (status is Running).

Verify shared storage and persistent storage

The Deployment created in the preceding example contains two pods that mount the same CPFS file system. You can verify shared and persistent storage in the following ways:

  • Create a file in one pod, and then view the file in the other pod to verify shared storage.

  • Recreate the Deployment, and then check whether the data in the file system still exists in the newly created pods to verify persistent storage.

  1. View the pod information.

    kubectl get pod | grep cpfs-test

    Expected output:

    cpfs-test-****-***a   1/1     Running   0          45s
    cpfs-test-****-***b   1/1     Running   0          45s
  2. Verify shared storage.

    1. Create a file in one pod.

      Using the pod named cpfs-test-****-***a as an example:

      kubectl exec cpfs-test-****-***a -- touch /data/test.txt
    2. View the file in the other pod.

      Using the pod named cpfs-test-****-***b as an example:

      kubectl exec cpfs-test-****-***b -- ls /data

      The expected output is as follows, showing that the newly created file test.txt is shared:

      test.txt
  3. Verify persistent storage.

    1. Recreate the Deployment.

      kubectl rollout restart deploy cpfs-test
    2. View the pods and wait for the new pods to be created successfully.

      kubectl get pod | grep cpfs-test

      Expected output:

      cpfs-test-****-***c   1/1     Running   0          78s
      cpfs-test-****-***d   1/1     Running   0          52s
    3. Check whether the data in the file system still exists in the new pod.

      Using the pod named cpfs-test-****-***c as an example:

      kubectl exec cpfs-test-****-***c -- ls /data

      The expected output is as follows, showing that the data in the CPFS file system still exists and can be retrieved from the mount directory of the new pod:

      test.txt
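(Optional) If you no longer need the test resources, you can delete the Deployment, PVC, and PV created from the example files. Because the volume is statically provisioned, deleting the PVC and PV does not delete the data stored in the CPFS file system.

    kubectl delete -f cpfs-test.yaml
    kubectl delete -f cpfs-pv-pvc.yaml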