
Cloud Parallel File Storage: Mount a CPFS for Lingjun in ACS

Last Updated: Mar 25, 2026

This topic describes how to mount a CPFS for Lingjun file system in Container Service for Kubernetes (ACS) so that multiple pods can share data on the same file system.

Prerequisites

  • You have created a CPFS for Lingjun file system. For more information, see Create a file system.

  • The managed-csiprovisioner component is installed in the ACS cluster.

    Note

    Go to the ACS cluster management page in the ACS console. In the left-side navigation pane of the cluster management page, choose Operations > Add-ons. On the Storage tab, you can check whether managed-csiprovisioner is installed.

Step 1: Determine the mount method

Determine the network type and complete the prerequisites based on your compute resource type.

  • GPU pod (RDMA-supported: GU8TF, GU8TEF, L20X, P16EN, etc.)

    Network type: RDMA. Prerequisite: Ensure that the CPFS for Lingjun file system and the Lingjun GPU are in the same zone and cluster.

  • GPU pod (RDMA-unsupported: L20, G49E, T4, A10, G59, etc.)

    Network type: VPC. Prerequisite: Create a VPC mount target.

  • CPU pod

    Network type: VPC. Prerequisite: Create a VPC mount target.

If you are unsure about your GPU type, see Supported GPU models in ACS.

Step 2: Create a PV and a PVC

CPFS for Lingjun uses static provisioning, which requires you to manually create a persistent volume (PV) and a persistent volume claim (PVC).

  • PV (persistent volume): A piece of storage in the cluster. It defines the connection details for the CPFS file system, such as the file system ID, network type, and mount path.

  • PVC (persistent volume claim): A request for storage made by a pod. Using a PVC decouples the application from the underlying storage infrastructure.

Choose the kubectl or console method based on your preference.

Kubectl

Choose the configuration that matches the network type you determined in Step 1.

RDMA network

This method applies to Lingjun GPUs, such as GU8TF, GU8TEF, L20X, and P16EN.

  1. Create a file named cpfs-pv-pvc.yaml to define the PV and PVC.

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: cpfs-test
      labels:
        alicloud-pvname: cpfs-test
    spec:
      accessModes:
      - ReadWriteMany
      capacity:
        storage: 10Ti
      csi:
        driver: bmcpfsplugin.csi.alibabacloud.com
        volumeAttributes:
          filesystemId: bmcpfs-*****
          path: /
        volumeHandle: bmcpfs-*****
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: cpfs-test
    spec:
      accessModes:
      - ReadWriteMany
      selector:
        matchLabels:
          alicloud-pvname: cpfs-test
      resources:
        requests:
          storage: 10Ti

    Parameter description

    • PV parameters

      • labels: Set a label so that the PVC can use a selector to match and bind to the PV.

      • accessModes: The access mode of the PV.

      • capacity.storage: The capacity of the volume.

      • csi.driver: Set the driver type to bmcpfsplugin.csi.alibabacloud.com.

      • csi.volumeAttributes: The attributes of the CPFS volume.

        • filesystemId: The ID of the CPFS for Lingjun file system.

        • path: The default value is /, which indicates the root directory of the CPFS file system. You can also specify a subdirectory, such as /dir. If the subdirectory does not exist, it is automatically created during the mount.

      • csi.volumeHandle: The ID of the CPFS for Lingjun file system.

    • PVC parameters

      • accessModes: The access mode that the PVC requests from the PV.

      • selector: Uses the labels on the PV to find and bind to a matching PV.

      • resources.requests.storage: The amount of storage requested for the pod. This value cannot exceed the PV capacity.

  2. Run the following command to create the resources:

    kubectl create -f cpfs-pv-pvc.yaml
  3. Verify that the PVC is bound to the PV.

    kubectl get pvc cpfs-test

    The Bound status in the output indicates that the PVC has been successfully bound:

    NAME        STATUS   VOLUME           CAPACITY   ACCESS MODES   STORAGECLASS    VOLUMEATTRIBUTESCLASS   AGE
    cpfs-test   Bound    cpfs-test        10Ti       RWX            <unset>         <unset>                 10s
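If pods should see only part of the file system, point the path attribute at a subdirectory instead of the root. The following is a minimal sketch, not taken from the product documentation: the /train-data directory name is hypothetical, and the file system ID is a placeholder you must replace.

```yaml
# Hypothetical PV that exposes only the /train-data subdirectory.
# Replace bmcpfs-***** with your file system ID.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: cpfs-train-data
  labels:
    alicloud-pvname: cpfs-train-data
spec:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 10Ti
  csi:
    driver: bmcpfsplugin.csi.alibabacloud.com
    volumeAttributes:
      filesystemId: bmcpfs-*****
      path: /train-data   # subdirectory; created automatically if it does not exist
    volumeHandle: bmcpfs-*****
```

A PVC that selects this PV through the alicloud-pvname label mounts /train-data as its root, so pods see only that subtree of the file system.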

VPC network

This method applies to CPU pods and general-purpose GPU pods, such as T4 and A10.

  1. Create a file named cpfs-pv-pvc.yaml to define the PV and PVC.

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: cpfs-test
      labels:
        alicloud-pvname: cpfs-test
    spec:
      accessModes:
      - ReadWriteMany
      capacity:
        storage: 10Ti
      csi:
        driver: nasplugin.csi.alibabacloud.com
        volumeAttributes:
          mountProtocol: efc
          server: cpfs-***-vpc-***.cn-wulanchabu.cpfs.aliyuncs.com
          path: /
        volumeHandle: bmcpfs-*****
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: cpfs-test
    spec:
      accessModes:
      - ReadWriteMany
      selector:
        matchLabels:
          alicloud-pvname: cpfs-test
      resources:
        requests:
          storage: 10Ti

    Parameter description

    • PV parameters

      • labels: Set a label so that the PVC can use a selector to match and bind to the PV.

      • accessModes: The access mode of the PV.

      • capacity.storage: The capacity of the volume.

      • csi.driver: Set the driver type to nasplugin.csi.alibabacloud.com.

      • csi.volumeAttributes: The attributes of the CPFS volume.

        • mountProtocol: The mount protocol. Set the value to efc.

        • server: The domain name of the VPC mount target of the CPFS file system.

        • path: The default value is /, which indicates the root directory of the mounted CPFS file system. You can also specify a subdirectory, such as /dir.

      • csi.volumeHandle: The ID of the CPFS for Lingjun file system.

    • PVC parameters

      • accessModes: The access mode that the PVC requests from the PV.

      • selector: Uses the labels on the PV to find and bind to a matching PV.

      • resources.requests.storage: The amount of storage requested for the pod. This value cannot exceed the PV capacity.

  2. Run the following command to create the resources:

    kubectl create -f cpfs-pv-pvc.yaml
  3. Verify that the PVC is bound to the PV.

    kubectl get pvc cpfs-test

    The Bound status in the output indicates that the PVC has been successfully bound:

    NAME        STATUS   VOLUME           CAPACITY   ACCESS MODES   STORAGECLASS    VOLUMEATTRIBUTESCLASS   AGE
    cpfs-test   Bound    cpfs-test        10Ti       RWX            <unset>         <unset>                 10s
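The path attribute works the same way over a VPC: it can expose a subdirectory rather than the root. The following sketch shows only the csi stanza of the PV; the /train-data directory name is hypothetical, and the server domain and file system ID are placeholders you must replace.

```yaml
# Hypothetical csi stanza for a PV that mounts only the /train-data
# subdirectory over the VPC mount target.
csi:
  driver: nasplugin.csi.alibabacloud.com
  volumeAttributes:
    mountProtocol: efc
    server: cpfs-***-vpc-***.cn-wulanchabu.cpfs.aliyuncs.com  # your VPC mount target
    path: /train-data  # subdirectory of the file system
  volumeHandle: bmcpfs-*****
```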

Console

The console method supports mounting over an RDMA network only. To use a VPC network, you must use kubectl.
  1. Log on to the ACS console.

  2. On the Clusters page, click the name of the cluster to go to the cluster management page.

  3. In the left-side navigation pane, choose Volumes > Persistent Volume Claims, and then click Create. Configure the following parameters.

    • PVC type: Select CPFS. Example: CPFS.

    • Name: Enter a custom name for the PVC. Follow the format requirements shown on the screen. Example: cpfs-pvc.

    • Allocation mode: Select either Use an existing PV or Create a PV. Example: Create a PV.

    • CPFS type: Select CPFS for Lingjun. Example: CPFS for Lingjun.

    • Access mode: Supports ReadWriteMany and ReadWriteOnce. Example: ReadWriteMany.

    • File system ID: The ID of the CPFS for Lingjun file system to mount. Example: bmcpfs-0115******13q5.

  4. View the created PV and PVC.

    The new resources appear on the Persistent Volume Claims and Persistent Volumes pages. Verify that they are bound.

Step 3: Create an app and mount CPFS

Create a Deployment that references the PVC to mount the storage to a specified directory in the container. Follow the instructions for either the kubectl or the console method.

Kubectl

Choose the configuration that matches your application type.

GPU application

This method applies to both Lingjun GPUs, such as GU8TF, and general-purpose GPUs, such as T4 and A10.

  1. Create a file named cpfs-test.yaml to define the Deployment and reference the PVC.

    This YAML manifest creates a two-pod Deployment. Both pods use the alibabacloud.com/compute-class: gpu label to request GPU resources. They also claim storage from the PVC named cpfs-test and mount the volume to the /data path.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: cpfs-test
      labels:
        app: cpfs-test
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: cpfs-test
      template:
        metadata:
          labels:
            app: cpfs-test
            # Specify the compute type as GPU.
            alibabacloud.com/compute-class: gpu
            # Specify the GPU model, such as T4.
            alibabacloud.com/gpu-model-series: T4
            alibabacloud.com/compute-qos: default
        spec:
          containers:
          - name: nginx
            image: registry.cn-hangzhou.aliyuncs.com/acs-sample/nginx:latest
            ports:
            - containerPort: 80
            volumeMounts:
              - name: pvc-cpfs
                mountPath: /data
          volumes:
            - name: pvc-cpfs
              persistentVolumeClaim:
                claimName: cpfs-test    # [Deployment] Must match the PVC name.
  2. Create the application.

    kubectl create -f cpfs-test.yaml
  3. Verify that the pods are running successfully.

    kubectl get pod | grep cpfs-test

    The output shows two pods in the Running state:

    cpfs-test-****-***a   1/1     Running   0          45s
    cpfs-test-****-***b   1/1     Running   0          45s
  4. Verify that the mount was successful.

    List the contents of the mount directory in a pod. The directory is empty by default.

    kubectl exec cpfs-test-****-***a -- ls /data

CPU application

  1. Create a file named cpfs-test.yaml.

    The following YAML manifest creates a Deployment with two pods. Both pods request storage from the PVC named cpfs-test and mount it to the /data path.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: cpfs-test
      labels:
        app: cpfs-test
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: cpfs-test
      template:
        metadata:
          labels:
            app: cpfs-test
        spec:
          containers:
          - name: nginx
            image: registry.cn-hangzhou.aliyuncs.com/acs-sample/nginx:latest
            ports:
            - containerPort: 80
            volumeMounts:
              - name: pvc-cpfs
                mountPath: /data
          volumes:
            - name: pvc-cpfs
              persistentVolumeClaim:
                claimName: cpfs-test     # [Deployment] Must match the PVC name.
  2. Create the Deployment.

    kubectl create -f cpfs-test.yaml
  3. Check the status of the pods in the Deployment.

    kubectl get pod | grep cpfs-test

    The output shows two pods in the Running state:

    cpfs-test-****-***a   1/1     Running   0          45s
    cpfs-test-****-***b   1/1     Running   0          45s
  4. Check the mount path.

    List the contents of the mount directory in a pod. The directory is empty by default.

    kubectl exec cpfs-test-****-***a -- ls /data
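Because every pod that mounts the volume shares the same file system, a consumer that should not modify shared data can mount it read-only. This is the standard Kubernetes readOnly flag on the volume mount, not a CPFS-specific feature; the names below follow the example above.

```yaml
# Fragment of a pod template: mount the shared CPFS volume read-only.
containers:
- name: nginx
  image: registry.cn-hangzhou.aliyuncs.com/acs-sample/nginx:latest
  volumeMounts:
  - name: pvc-cpfs
    mountPath: /data
    readOnly: true   # write attempts from this container fail
volumes:
- name: pvc-cpfs
  persistentVolumeClaim:
    claimName: cpfs-test   # must match the PVC name
```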

Console

  1. In the left-side navigation pane of the cluster management page, choose Workloads > Deployments.

  2. On the Deployments page, click Create from Image.

  3. Configure the parameters for the Deployment and click Create.

    Note the following parameters and keep the default values for others. For more information, see Create a stateless application from a Deployment.

    GPU application

    • Basic Information

      • Application name: A custom name for the Deployment. For naming conventions, see the console prompt. Example: cpfs-test.

      • Replicas: The number of pods for the Deployment. Example: 2.

      • Type: The compute type of the pod. Example: GPU, T4.

        Note: For information about specific GPU models, see Specify GPU models and driver versions for ACS pods.

    • Container

      • Image name: The address of the image used to deploy the application. Example: registry.cn-hangzhou.aliyuncs.com/acs-sample/nginx:latest.

      • Required resources: The amount of GPU, vCPU, and memory resources to request. Example: GPU: 1, CPU: 2 vCPU, Memory: 2 GiB.

      • Volume: Click Add PVC, then select the PVC that you created as the mount source and set the mount path inside the container. Example: Mount Source: pvc-cpfs, Container Path: /data.

    CPU application

    • Basic Information

      • Application name: A custom name for the Deployment. For naming conventions, see the console prompt. Example: cpfs-test.

      • Replicas: The number of pods for the Deployment. Example: 2.

      • Type: The compute type of the pod. Example: CPU, General Purpose.

    • Container

      • Image name: The address of the image used to deploy the application. Example: registry.cn-hangzhou.aliyuncs.com/acs-sample/nginx:latest.

      • Required resources: The amount of vCPU and memory resources to request. Example: CPU: 0.25 vCPU, Memory: 0.5 GiB.

      • Volume: Click Add PVC, then select the PVC that you created as the mount source and set the mount path inside the container. Example: Mount Source: pvc-cpfs, Container Path: /data.

  4. Check the status of the application.

    1. On the Deployments page, click the name of the application.

    2. Click the Pods tab and verify that the pods are in the Running state.

Step 4: Verify the mount results

The Deployment created in the example contains two pods, and both pods mount the same CPFS file system. Verify the setup in two stages: first confirm that a file created in one pod is visible in the other (shared storage), then restart the Deployment and confirm that the file still exists in the new pods (persistent storage).

  1. Check the pod information.

    kubectl get pod | grep cpfs-test

    The following output is an example:

    cpfs-test-****-***a   1/1     Running   0          45s
    cpfs-test-****-***b   1/1     Running   0          45s
  2. Verify shared storage.

    1. Create a file in one of the pods.

      This example uses the pod named cpfs-test-****-***a:

      kubectl exec cpfs-test-****-***a -- touch /data/test.txt
    2. View the file in the other pod.

      This example uses the pod named cpfs-test-****-***b:

      kubectl exec cpfs-test-****-***b -- ls /data

      The expected output shows that the new file test.txt is shared.

      test.txt
  3. Verify persistent storage.

    1. Restart the Deployment.

      kubectl rollout restart deploy cpfs-test
    2. Check the pods and wait for the new pods to be created successfully.

      kubectl get pod | grep cpfs-test

      The following output is an example:

      cpfs-test-****-***c   1/1     Running   0          78s
      cpfs-test-****-***d   1/1     Running   0          52s
    3. In a new pod, check whether the data in the file system exists.

      This example uses the pod named cpfs-test-****-***c:

      kubectl exec cpfs-test-****-***c -- ls /data

      The expected output shows that the data in the CPFS file system persists. You can retrieve the data from the mount directory in the new pod.

      test.txt