Container Service for Kubernetes: Mount NAS via EFC client with CNFS

Last Updated: Mar 26, 2026

The Elastic File Client (EFC) accelerates access to File Storage NAS by using distributed caching. It supports high-concurrency and large-scale parallel data access, making it a strong fit for data-intensive containerized workloads such as big data analytics, AI training, and inference. Compared to the traditional Network File System (NFS) protocol, EFC delivers significantly improved I/O performance and lower latency. This topic explains how to mount a NAS file system using EFC through the Container Network File System (CNFS).

Important

EFC requires nodes running Alibaba Cloud Linux 3 or ContainerOS with kernel version 5.10.134-17.2 or later. Nodes that do not meet these requirements automatically fall back to the NFS protocol for mounting. Verify node compatibility before proceeding.
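
A quick way to check each node's OS image and kernel version, using only standard kubectl:

kubectl get nodes -o custom-columns=NAME:.metadata.name,OS:.status.nodeInfo.osImage,KERNEL:.status.nodeInfo.kernelVersion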

How EFC works

EFC is a user-space, POSIX-compliant client built on FUSE (Filesystem in Userspace). It replaces kernel-mode NFS clients and provides four core capabilities:

  • Multi-connection access

  • Metadata and data caching

  • Distributed read-only caching

  • Integrated monitoring via Managed Service for Prometheus

Compared to kernel-mode NFS clients:

  • Strong data consistency: Files and directories stay consistent through a distributed locking mechanism. Written files are immediately visible to all other clients, and newly created files sync across nodes instantly.

  • Single-node read/write caching: EFC caches data in a portion of compute node memory, improving small-file throughput and delivering over 50% I/O performance gains versus traditional NFS clients.

  • Distributed read-only caching: Builds an auto-scaling, O&M-free distributed cache pool using memory across multiple nodes.

  • Small file prefetching: Automatically identifies and prefetches hot directories and files to reduce data-pull overhead.

  • Hot upgrade and failover: Client updates apply without restarting applications. If the client fails, automatic failover prevents service interruption.

Prerequisites

Before you begin, ensure that you have:

  • The cnfs-nas-daemon add-on installed in your cluster, with the AlinasMountProxy=true flag set in the FeatureGate of the csi-plugin component. See Manage cnfs-nas-daemon for details; a quick verification sketch follows this list.

  • Nodes running Alibaba Cloud Linux 3 or ContainerOS, with kernel version 5.10.134-17.2 or later.
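
To double-check the feature gate, you can inspect the csi-plugin DaemonSet spec. This is a minimal sketch assuming the flag string appears in the rendered YAML (for example, in container args or env):

kubectl -n kube-system get ds csi-plugin -o yaml | grep AlinasMountProxy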

(Optional) Deploy the CNFS-EFC distributed cache

The distributed cache is optional, but it enables EFC's highest-performance capabilities: distributed caching and small-file prefetching. If you deploy it, activate these capabilities by adding mountOptions to your PersistentVolume (PV). If you skip this section, omit the mountOptions block from the PV configuration in the next section.

Deploy the cache DaemonSet

  1. Save the following YAML as csi-configmap.yaml. This ConfigMap instructs csi-plugin to automatically provision a cache DaemonSet and associated Services. The DaemonSet schedules pods on nodes labeled cache=true (see the labeling note after these steps). Each pod runs three containers and mounts a 15 GiB tmpfs volume (memory-backed). Adjust the parameters to match your cluster's resources.

    • enable: Enables distributed caching. Accepted values: true, false.

    • container-number: Number of containers per cache pod. Increase this value when you hit cache performance bottlenecks. Accepted values: a positive integer. Default: 3.

    • volume-type: Storage medium for the emptyDir volume mounted to each cache pod. Make sure resource usage does not impact production workloads. Accepted values: disk, memory. Default: memory.

    • volume-size: Volume size allocated per cache pod. Accepted values: a size in GiB (for example, 15Gi). Default: 15Gi.

    • node-selector: Node labels used to schedule the cache DaemonSet. If not set, the DaemonSet runs on all nodes. Accepted values: a key-value label (for example, cache=true).
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: csi-plugin
      namespace: kube-system
    data:
      nas-efc-cache: |
        enable=true
        container-number=3
        volume-type=memory
        volume-size=15Gi
      node-selector: |
        cache=true
  2. Apply the ConfigMap:

    kubectl apply -f csi-configmap.yaml
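
The DaemonSet schedules pods only onto nodes that carry the cache=true label defined in node-selector. If the target nodes are not labeled yet, add the label first; <node-name> is a placeholder for an actual node name:

kubectl label node <node-name> cache=true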

Verify the distributed cache

  1. Check the cache DaemonSet status:

    kubectl get ds/cnfs-cache-ds -n kube-system -o wide

    The expected output looks similar to:

    NAME            DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE   CONTAINERS                                                            IMAGES                                                                                                                                                                                   SELECTOR
    cnfs-cache-ds   2         2         2       2            2           cache=true      13m   alinas-dadi-container,alinas-dadi-container1,alinas-dadi-container2   registry-cn-hangzhou.ack.aliyuncs.com/acs/nas-cache:20220420,registry-cn-hangzhou.ack.aliyuncs.com/acs/nas-cache:20220420,registry-cn-hangzhou.ack.aliyuncs.com/acs/nas-cache:20220420   app=cnfs-cache-ds

    In this example, two nodes carry the cache=true label, and both DaemonSet pods are ready. If pods are not reaching the Ready state, run kubectl describe pod -n kube-system <pod-name> to inspect pod events for troubleshooting details.

  2. Verify service endpoint discovery:

    kubectl get ep cnfs-cache-ds-service -n kube-system -o wide

    The expected output looks similar to:

    NAME                    ENDPOINTS                                                              AGE
    cnfs-cache-ds-service   192.168.3.217:6500,192.168.5.247:6500,192.168.3.217:6502 + 3 more...   2d3h

    If no endpoints appear, confirm that pods in the cnfs-cache-ds DaemonSet are running and that the Service selector matches pod labels.
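
    You can also list the cache pods directly by their selector; the app=cnfs-cache-ds label comes from the SELECTOR column in the DaemonSet output above:

    kubectl get pods -n kube-system -l app=cnfs-cache-ds -o wide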

Configure CNFS to use the EFC client

Step 1: Create a CNFS resource

This section shows how to configure an existing NAS file system to use the EFC client. To create a new NAS file system through CNFS with EFC, add useClient: EFCClient in the parameters section when creating the ContainerNetworkFileSystem resource. See Use CNFS to manage NAS file systems (recommended) for details.

  1. Save the following YAML as cnfs-efc.yaml.

    • description: A description for the file system.

    • type: The volume type. Set to nas for NAS file systems.

    • reclaimPolicy: Only Retain is supported. Deleting the CNFS resource does not delete the underlying NAS file system.

    • server: The mount target address. The mount target must be in the same VPC as your pods, and its status must be Ready. For best performance, use the same vSwitch for both the mount target and pods. If no existing mount target meets these requirements, create a new one.

    • useClient: Set to EFCClient to enable the EFC client.
    apiVersion: storage.alibabacloud.com/v1beta1
    kind: ContainerNetworkFileSystem
    metadata:
      name: cnfs-efc-test
    spec:
      description: "cnfs"
      type: nas
      reclaimPolicy: Retain
      parameters:
        server: 17f7e4****-h****.cn-beijing.nas.aliyuncs.com
        useClient: EFCClient
  2. Create the CNFS resource:

    kubectl create -f cnfs-efc.yaml
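
    To confirm the resource exists, you can query it by kind. This assumes the CRD exposes the lowercase kind as a resource name; the exact name may differ in your cluster:

    kubectl get containernetworkfilesystem cnfs-efc-test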

Step 2: Create a PersistentVolume and PersistentVolumeClaim

Save the following YAML as cnfs-pv-pvc.yaml.

  • If you deployed the CNFS-EFC distributed cache: keep the mountOptions block as shown. g_tier_EnableClusterCache=true enables distributed caching, and g_tier_EnableClusterCachePrefetch=true enables prefetching of hot files and directories.

  • If you skipped the distributed cache: remove the mountOptions block entirely before applying.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: efc-pv
spec:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 50Gi
  claimRef:
    name: efc-pvc
    namespace: default
  csi:
    driver: nasplugin.csi.alibabacloud.com
    volumeAttributes:
      containerNetworkFileSystem: cnfs-efc-test
      path: /
    volumeHandle: efc-pv
  mountOptions:
  - g_tier_EnableClusterCache=true         # Enable distributed caching (requires deployed cache DaemonSet)
  - g_tier_EnableClusterCachePrefetch=true # Enable prefetching of hot files and directories
  volumeMode: Filesystem
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efc-pvc
  namespace: default
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 50Gi
  storageClassName: ""
  volumeMode: Filesystem
  volumeName: efc-pv

Apply the configuration:

kubectl create -f cnfs-pv-pvc.yaml
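
Then confirm that the claim has bound to the statically provisioned volume. Once binding completes, the STATUS column should read Bound:

kubectl get pvc efc-pvc -n default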

Mount NAS to a workload

Step 1: Create a Deployment

Save the following YAML as cnfs-deployment.yaml.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: efc-test
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: efc-test
  template:
    metadata:
      labels:
        app: efc-test
    spec:
      containers:
      - command:
        - sh
        - -c
        - |
          sleep infinity
        image: alibaba-cloud-linux-3-registry.cn-hangzhou.cr.aliyuncs.com/alinux3/alinux3:latest
        name: test
        volumeMounts:
        - mountPath: /mnt
          name: pvc
      volumes:
      - name: pvc
        persistentVolumeClaim:
          claimName: efc-pvc

Apply the configuration:

kubectl create -f cnfs-deployment.yaml

Step 2: Verify the EFC mount

  1. Check that the pod is running:

    kubectl get pod -l app=efc-test

    Expected output:

    NAME                       READY   STATUS    RESTARTS   AGE
    efc-test-f545b86d6-spr7p   1/1     Running   0          29m
  2. Once the pod reaches the Running state, confirm the EFC mount point inside the pod:

    kubectl exec <pod-name> -- mount -t fuse.aliyun-alinas-efc

    Expected output:

    bindroot-3889a-8TzEY5mc:3d2804****-w****.cn-shanghai.nas.aliyuncs.com:/ on /mnt type fuse.aliyun-alinas-efc (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=1048576)

    The mount type fuse.aliyun-alinas-efc confirms EFC is active. If the output shows nfs instead, the node does not meet the OS or kernel requirements and has fallen back to the NFS protocol. Check the node OS version and kernel version against the prerequisites.
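
    To check the OS and kernel of the specific node hosting the pod, you can chain two standard kubectl queries; <pod-name> is the pod from the previous step:

    NODE=$(kubectl get pod <pod-name> -o jsonpath='{.spec.nodeName}')
    kubectl get node "$NODE" -o custom-columns=OS:.status.nodeInfo.osImage,KERNEL:.status.nodeInfo.kernelVersion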
