
Container Service for Kubernetes: Manage dynamically provisioned volumes of General-purpose CPFS

Last Updated: Nov 12, 2025

In Alibaba Cloud Container Service for Kubernetes (ACK), you can use Container Network File System (CNFS) to provide applications with dynamically provisioned volumes backed by a shared General-purpose Cloud Parallel File Storage (CPFS) file system. After you configure a StorageClass, workloads are automatically provisioned with independent persistent volumes (PVs), each based on an isolated CPFS subdirectory. This isolates data between workloads and simplifies storage management.
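
You can quickly confirm that the CNFS API is available in your cluster before you begin. This check is a minimal sketch; it assumes that the CNFS custom resources are served under the storage.alibabacloud.com API group, which matches the apiVersion used in the manifests later in this topic.

    # List the resources served by the CNFS API group. An empty result means
    # the CSI component has not registered the CNFS custom resources yet.
    kubectl api-resources --api-group=storage.alibabacloud.com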

Prerequisites

  • Your cluster must meet the following requirements based on its version.

    For more information, see Upgrade a cluster.

    1.26 and later

    • The Container Storage Interface (CSI) component is version v1.32.2-757e24b-aliyun or later.

      For more information, see Manage CSI components.
    • The cnfs-nas-daemon component is installed. The AlinasMountProxy=true feature gate is added to the csi-plugin component to enable cnfs-nas-daemon. For more information, see Manage the cnfs-nas-daemon component.

    Earlier than 1.26

    • The CSI component is version v1.24.11-5221f79-aliyun or later.

      For more information, see Manage CSI components.
    • Client dependencies are installed and the csi-plugin component is restarted.

      You can configure the ConfigMap of csi-plugin to automatically install client dependencies when the csi-plugin component starts.

      1. Check whether a ConfigMap named csi-plugin exists.

        kubectl -n kube-system get cm csi-plugin

        If a ConfigMap named csi-plugin exists, update it.

        kubectl edit configmap csi-plugin -n kube-system

        Add the cnfs-client-properties field under data and set cpfs-efc=true to install client dependencies.

        ...
        data:
          cnfs-client-properties: |
            cpfs-efc=true   # The relevant dependencies are installed when csi-plugin starts.

        If a ConfigMap named csi-plugin does not exist, create one in the kube-system namespace.

        cat <<EOF | kubectl apply -f -
        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: csi-plugin
          namespace: kube-system
        data:
          cnfs-client-properties: |
            cpfs-efc=true   # The relevant dependencies are installed when csi-plugin starts.
        EOF
      2. Restart the csi-plugin component.

        kubectl -n kube-system rollout restart daemonset csi-plugin
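
    Regardless of the cluster version, you can check which CSI version is installed by inspecting the image tag of the csi-plugin DaemonSet. This is a quick sketch; it assumes the default DaemonSet name csi-plugin in the kube-system namespace, consistent with the commands above.

      # Print the container images of the csi-plugin DaemonSet. The image tag
      # carries the component version, for example v1.32.2-757e24b-aliyun.
      kubectl -n kube-system get daemonset csi-plugin \
        -o jsonpath='{.spec.template.spec.containers[*].image}'
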
  • A General-purpose CPFS file system and a corresponding protocol service have been created in the same Virtual Private Cloud (VPC) as the cluster, and the mount target address has been obtained. For more information, see Create a protocol service and obtain a mount target address.

    For optimal performance, create the mount target of the CPFS protocol service in the same vSwitch as the cluster.
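
    You can also verify from a machine in the VPC that the mount target resolves before you mount it. The following is a sketch only: the domain below is a placeholder to replace with your actual mount target address, and it assumes nslookup is available on the host.

      # Check that the protocol service mount target resolves inside the VPC.
      nslookup cpfs-xxxx.xxxx.cpfs.aliyuncs.com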

Step 1: Use an NFS client to mount the CPFS file system

  1. Create a CNFS-managed CPFS file system.

    The following command creates a CNFS object and a StorageClass. With this configuration, CNFS mounts the CPFS file system over the NFS client and provisions isolated subdirectory volumes from it.

    cat << EOF | kubectl apply -f -
    apiVersion: storage.alibabacloud.com/v1beta1
    kind: ContainerNetworkFileSystem
    metadata:
      name: cnfs-nfs-cpfs
    spec:
      type: cpfs
      reclaimPolicy: Retain
      parameters:
        protocolServer: cpfs-xxxx.xxxx.cpfs.aliyuncs.com # Enter the domain name of the mount target for the CPFS protocol service.
        useClient: NFSClient  # Use the NFS client to mount.
    ---
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: cnfs-nfs-cpfs-sc
    mountOptions:
      - nolock,tcp,noresvport
      - vers=3
    parameters:
      volumeAs: subpath
      containerNetworkFileSystem: "cnfs-nfs-cpfs"  # Reference the created CNFS object cnfs-nfs-cpfs.
      path: "/share"
      archiveOnDelete: "true"
    provisioner: nasplugin.csi.alibabacloud.com
    reclaimPolicy: Retain
    allowVolumeExpansion: true
    EOF
    • CNFS

      • type: The type of volume to create. In this example, it is cpfs.

      • reclaimPolicy: The reclaim policy. Only Retain is supported. The CPFS file system is not deleted when the CNFS object is deleted.

      • parameters.protocolServer: The domain name of the mount target for the protocol service of the General-purpose CPFS file system.

      • parameters.useClient: Set to NFSClient to mount the file system with the NFS client.

    • StorageClass

      • mountOptions: The mount options. You can use the default values in this topic's example.

      • parameters.volumeAs: Set to subpath to create a subdirectory-type PV.

      • parameters.containerNetworkFileSystem: The name of the associated CNFS object.

      • parameters.path: The path that corresponds to the export directory of the General-purpose CPFS protocol service, such as /share. You can also set it to a subdirectory, such as /share/dir. Only regular directories are supported; Filesets are not.

      • parameters.archiveOnDelete: Specifies whether the data in the backend storage is actually deleted when the PersistentVolumeClaim (PVC) is deleted and reclaimPolicy is set to Delete. Because General-purpose CPFS is shared storage, this option serves as a second confirmation.

        • true (default): The directory and its files are not deleted. Instead, the directory is archived by renaming it to archived-{pvName}.{timestamp}.

        • false: The corresponding backend directory and its data are deleted. This deletes only the CPFS subpath directory and the files within it; the CPFS file system itself is not deleted. To delete the CPFS file system, see Delete a file system.

      • reclaimPolicy: The reclaim policy of the PV.

        • Delete: When the PVC is deleted, the backend storage data is handled according to the archiveOnDelete setting.

        • Retain: When the PVC is deleted, the PV and its data are retained and must be deleted manually. This policy is suitable for scenarios that require high data security because it prevents accidental data deletion.

      • allowVolumeExpansion: Optional. Specifies whether the volume can be expanded.
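
    After you apply the manifests, you can confirm that the StorageClass was created before moving on. The CNFS object itself is checked in step 3 below.

    # The output should list cnfs-nfs-cpfs-sc with the provisioner
    # nasplugin.csi.alibabacloud.com.
    kubectl get storageclass cnfs-nfs-cpfs-sc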

  2. Create a StatefulSet named cnfs-nfs-cpfs-sts and mount the isolated CPFS volume.

    cat << EOF | kubectl apply -f -
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: cnfs-nfs-cpfs-sts
    spec:
      selector:
        matchLabels:
          app: nginx
      serviceName: "nginx"
      replicas: 2
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: anolis-registry.cn-zhangjiakou.cr.aliyuncs.com/openanolis/nginx:1.14.1-8.6
            volumeMounts:
            - name: pvc
              mountPath: /data
      volumeClaimTemplates:
      - metadata:
          name: pvc
        spec:
          accessModes: [ "ReadWriteOnce" ]
          storageClassName: "cnfs-nfs-cpfs-sc"     # The name of the attached StorageClass. 
          resources:
            requests:
              storage: 50Gi
    EOF
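
    Before running the checks below, you can wait until both replicas are ready, using the StatefulSet name from the manifest above.

    # Block until the StatefulSet rollout completes.
    kubectl rollout status statefulset cnfs-nfs-cpfs-sts
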
  3. Check the CNFS status.

    kubectl get cnfs cnfs-nfs-cpfs -o jsonpath='{.status.status}' 

    The following output is returned. The Available status indicates that the CNFS object is active.

    Available
  4. Check the status of the persistent volume claims (PVCs).

    kubectl get pvc -o wide | grep cnfs-nfs-cpfs-sc

    The following output is returned. Two PVCs are automatically created and bound to automatically provisioned PVs.

    pvc-cnfs-nfs-cpfs-sts-0   Bound    nas-804e8cb1-2355-4026-87fc-ee061e14f5f9   50Gi       RWO            cnfs-nfs-cpfs-sc   <unset>                 5m36s   Filesystem
    pvc-cnfs-nfs-cpfs-sts-1   Bound    nas-00baf7ff-75dc-440d-bab1-ea8872f1adea   50Gi       RWO            cnfs-nfs-cpfs-sc   <unset>                 5m25s   Filesystem
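
    To see which CPFS subdirectory backs a PV, you can inspect the PV's CSI volume attributes. Treat this as a sketch: the exact attribute keys are set by the CSI driver and may vary across versions. The PV name below is taken from the output above.

    # Print the CSI volume attributes recorded on the dynamically created PV.
    kubectl get pv nas-804e8cb1-2355-4026-87fc-ee061e14f5f9 \
      -o jsonpath='{.spec.csi.volumeAttributes}'
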
  5. Check the pod status.

    kubectl get pod | grep cnfs-nfs-cpfs-sts

    Expected output:

    NAME                  READY   STATUS    RESTARTS   AGE
    cnfs-nfs-cpfs-sts-0   1/1     Running   0          9m22s
    cnfs-nfs-cpfs-sts-1   1/1     Running   0          9m11s
  6. Confirm that the pod has mounted the CPFS volume.

    kubectl exec cnfs-nfs-cpfs-sts-0 -- mount | grep nfs

    The following output is returned. It indicates that CNFS successfully mounted the CPFS file system using the NFS client.

    cpfs-********-********.cn-shanghai.cpfs.aliyuncs.com:/share/nas-804e8cb1-2355-4026-87fc-ee061e14f5f9 on /data type nfs (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,nolock,noresvport,proto=tcp,port=30000,timeo=600,retrans=2,sec=sys,mountaddr=127.0.1.255,mountvers=3,mountport=30000,mountproto=tcp,local_lock=all,addr=127.0.1.255)
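
    Because the StorageClass sets allowVolumeExpansion: true, you can later grow one of these volumes by raising the storage request on its PVC. The following is a minimal sketch; the 100Gi target size is an arbitrary example.

    # Raise the requested size on the PVC to trigger volume expansion.
    kubectl patch pvc pvc-cnfs-nfs-cpfs-sts-0 \
      -p '{"spec":{"resources":{"requests":{"storage":"100Gi"}}}}'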

Step 2: Verify that the volumes are isolated

  1. Write a 1 GB temporary file to one of the pods, such as cnfs-nfs-cpfs-sts-0, and check whether the write operation is successful.

    Replace cnfs-nfs-cpfs-sts-0 with the actual pod name if it differs.

    kubectl exec cnfs-nfs-cpfs-sts-0 -- sh -c "dd if=/dev/zero of=/data/1G.tmpfile bs=1G count=1; ls -alrth /data"

    Expected output:

    1+0 records in
    1+0 records out
    1073741824 bytes (1.1 GB, 1.0 GiB) copied, 2.2487 s, 477 MB/s
    total 1.1G
    drwxr-xr-x 1 root root 4.0K Aug  5 08:46 ..
    drwxr-xr-x 2 root root 4.0K Aug  5 09:10 .
    -rw-r--r-- 1 root root 1.0G Aug  5 09:10 1G.tmpfile
  2. In the other pod, cnfs-nfs-cpfs-sts-1, confirm that the 1 GB temporary file does not exist.

    Replace cnfs-nfs-cpfs-sts-1 with the actual pod name if it differs.

    kubectl exec cnfs-nfs-cpfs-sts-1 -- sh -c "ls -alrth /data"

    The following output is returned. It shows that the 1 GB temporary file does not exist in cnfs-nfs-cpfs-sts-1. This indicates that the volumes used by the two pods in the StatefulSet are isolated from each other.

    total 4.5K
    drwxr-xr-x 2 root root 4.0K Aug  5 08:46 .
    drwxr-xr-x 1 root root 4.0K Aug  5 08:46 ..
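
When you finish the verification, you can remove the test workload. With reclaimPolicy: Retain in the StorageClass above, the PVs and their backend subdirectories are kept after the PVCs are deleted, so clean them up manually if they are no longer needed. A sketch:

    # Delete the StatefulSet and its PVCs. The PVs remain because the
    # reclaim policy is Retain.
    kubectl delete statefulset cnfs-nfs-cpfs-sts
    kubectl delete pvc pvc-cnfs-nfs-cpfs-sts-0 pvc-cnfs-nfs-cpfs-sts-1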