The FlexVolume plug-in is deprecated and no longer supported in new Container Service for Kubernetes (ACK) clusters. Migrate your statically provisioned File Storage NAS (NAS) persistent volumes (PVs) and persistent volume claims (PVCs) from FlexVolume to Container Storage Interface (CSI) to maintain plug-in support and access new storage features.
If your cluster also has disk volumes managed by FlexVolume, use csi-compatible-controller instead.
Impacts
PVCs are recreated during migration, which causes pods to restart.
Pod restarts interrupt running workloads. Perform the migration during off-peak hours.
FlexVolume and CSI comparison
| Attribute | CSI | FlexVolume |
|---|---|---|
| Components | CSI-Provisioner (Deployment) -- automatic volume creation, snapshot creation, CNFS storage, data restoration after accidental deletion. CSI-Plugin (DaemonSet) -- automatic volume mounting and unmounting. Supports disk, NAS, and OSS volumes by default. | Disk-Controller (Deployment) -- automatic volume creation. FlexVolume (DaemonSet) -- volume mounting and unmounting. Supports disk, NAS, and OSS volumes by default. |
| kubelet parameter | enable-controller-attach-detach must be set to true on each node. | enable-controller-attach-detach must be set to false on each node. |
| References | Storage | FlexVolume overview |
Prerequisites
Install the CSI plug-in in your cluster before you begin.
1. Create files named `csi-plugin.yaml` and `csi-provisioner.yaml`.
2. Deploy csi-plugin and csi-provisioner in the cluster:
   ```
   kubectl apply -f csi-plugin.yaml -f csi-provisioner.yaml
   ```
3. Verify that CSI is running:
   ```
   kubectl get pods -n kube-system | grep csi
   ```
   Expected output:
   ```
   csi-plugin-577mm                   4/4   Running   0   3d20h
   csi-plugin-k9mzt                   4/4   Running   0   41d
   csi-provisioner-6b58f46989-8wwl5   9/9   Running   0   41d
   csi-provisioner-6b58f46989-qzh8l   9/9   Running   0   6d20h
   ```
   If all pods show `Running`, CSI is installed.
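The same health check can be scripted by counting CSI pods whose STATUS column is not `Running`. A minimal sketch, fed here with the sample output above so it runs standalone; in practice pipe in live `kubectl get pods -n kube-system | grep csi` output:

```shell
# Sample output from the verification step above; replace with live
# output of: kubectl get pods -n kube-system | grep csi
sample='csi-plugin-577mm                   4/4   Running   0   3d20h
csi-plugin-k9mzt                   4/4   Running   0   41d
csi-provisioner-6b58f46989-8wwl5   9/9   Running   0   41d
csi-provisioner-6b58f46989-qzh8l   9/9   Running   0   6d20h'

# Column 3 is STATUS; count any pod that is not Running.
not_running=$(printf '%s\n' "$sample" | awk '$3 != "Running"' | wc -l)
echo "CSI pods not Running: $not_running"
```

A result of 0 means CSI is healthy; anything else warrants `kubectl describe` on the affected pod before you continue.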
Procedure
The following example demonstrates migration for a StatefulSet workload.
Step 1: Check the volume status
1. Query the pod status:
   ```
   kubectl get pod
   ```
   Expected output:
   ```
   NAME           READY   STATUS    RESTARTS   AGE
   nas-static-1   1/1     Running   0          11m
   ```
2. Find the name of the PVC used by the pod:
   ```
   kubectl describe pod nas-static-1 | grep ClaimName
   ```
   Expected output:
   ```
   ClaimName:  nas-pvc
   ```
3. Check the PVC status:
   ```
   kubectl get pvc
   ```
   Expected output:
   ```
   NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
   nas-pvc   Bound    nas-pv   512Gi      RWX                           7m23s
   ```
Step 2: Create a CSI-managed PV and PVC
Create a new PV and PVC managed by CSI that point to the same NAS file system. Choose one of the following options.
Option A: Automated conversion (FlexVolume2CSI CLI)
Use the FlexVolume2CSI CLI to convert the FlexVolume-managed PV and PVC definitions to CSI-managed equivalents. The tool generates a file named `nas-pv-pvc-csi.yaml`.
1. Apply the generated file:
   ```
   kubectl apply -f nas-pv-pvc-csi.yaml
   ```
2. Verify the PVC status:
   ```
   kubectl get pvc
   ```
   Expected output:
   ```
   NAME          STATUS   VOLUME       CAPACITY   ACCESS MODES   STORAGECLASS   AGE
   nas-pvc       Bound    nas-pv       512Gi      RWX            nas            30m
   nas-pvc-csi   Bound    nas-pv-csi   512Gi      RWX            nas            2s
   ```
Option B: Manual conversion
1. Back up the existing FlexVolume PV and PVC definitions.
   Save the PVC:
   ```
   kubectl get pvc nas-pvc -oyaml > nas-pvc-flexvolume.yaml
   cat nas-pvc-flexvolume.yaml
   ```
   Expected output:
   ```yaml
   apiVersion: v1
   kind: PersistentVolumeClaim
   metadata:
     name: nas-pvc
     namespace: default
   spec:
     accessModes:
     - ReadWriteMany
     resources:
       requests:
         storage: 512Gi
     selector:
       matchLabels:
         alicloud-pvname: nas-pv
     storageClassName: nas
   ```
   Save the PV:
   ```
   kubectl get pv nas-pv -oyaml > nas-pv-flexvolume.yaml
   cat nas-pv-flexvolume.yaml
   ```
   Expected output:
   ```yaml
   apiVersion: v1
   kind: PersistentVolume
   metadata:
     labels:
       alicloud-pvname: nas-pv
     name: nas-pv
   spec:
     accessModes:
     - ReadWriteMany
     capacity:
       storage: 512Gi
     flexVolume:
       driver: alicloud/nas
       options:
         path: /aliyun
         server: ***.***.nas.aliyuncs.com
         vers: "3"
     persistentVolumeReclaimPolicy: Retain
     storageClassName: nas
   ```
2. Create a CSI-managed PV and PVC. Create a file named `nas-pv-pvc-csi.yaml` with the following content:
   ```yaml
   apiVersion: v1
   kind: PersistentVolumeClaim
   metadata:
     name: nas-pvc-csi
     namespace: default
   spec:
     accessModes:
     - ReadWriteMany
     resources:
       requests:
         storage: 512Gi
     selector:
       matchLabels:
         alicloud-pvname: nas-pv-csi
     storageClassName: nas
   ---
   apiVersion: v1
   kind: PersistentVolume
   metadata:
     labels:
       alicloud-pvname: nas-pv-csi
     name: nas-pv-csi
   spec:
     accessModes:
     - ReadWriteMany
     capacity:
       storage: 512Gi
     csi:
       driver: nasplugin.csi.alibabacloud.com
       volumeHandle: nas-pv-csi
       volumeAttributes:
         server: "***.***.nas.aliyuncs.com"
         path: "/aliyun"
     mountOptions:
     - nolock,tcp,noresvport
     - vers=3
     persistentVolumeReclaimPolicy: Retain
     storageClassName: nas
   ```
3. Apply the file:
   ```
   kubectl apply -f nas-pv-pvc-csi.yaml
   ```
4. Verify the PVC status:
   ```
   kubectl get pvc
   ```
   Expected output:
   ```
   NAME          STATUS   VOLUME       CAPACITY   ACCESS MODES   STORAGECLASS   AGE
   nas-pvc       Bound    nas-pv       512Gi      RWX            nas            7m23s
   nas-pvc-csi   Bound    nas-pv-csi   512Gi      RWX            nas            2s
   ```
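A quick sanity check for the manual conversion: the CSI PV must point at exactly the same NAS endpoint (`server`) and export path (`path`) as the FlexVolume PV it replaces, even though the two formats nest and quote these fields differently. A self-contained sketch; the stub files here stand in for the `nas-pv-flexvolume.yaml` backup and the new `nas-pv-pvc-csi.yaml`:

```shell
# norm extracts the server and path fields from a PV definition,
# ignoring indentation and quoting differences between the two formats.
norm() { grep -E '(server|path):' "$1" | tr -d ' "' | sort; }

# Stub files so the sketch runs standalone; in practice point norm at
# nas-pv-flexvolume.yaml and nas-pv-pvc-csi.yaml instead.
printf '    server: ***.***.nas.aliyuncs.com\n    path: /aliyun\n' > demo-flex.yaml
printf '      server: "***.***.nas.aliyuncs.com"\n      path: "/aliyun"\n' > demo-csi.yaml

if [ "$(norm demo-flex.yaml)" = "$(norm demo-csi.yaml)" ]; then match=yes; else match=no; fi
echo "endpoint and path match: $match"
rm -f demo-flex.yaml demo-csi.yaml
```

If the fields do not match, the new PVC still binds, but pods mount a different NAS location and appear to have lost their data.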
Step 3: Update the application
1. Edit the StatefulSet configuration:
   ```
   kubectl edit sts nas-static
   ```
2. Update the volume claim to reference the CSI-managed PVC:
   ```yaml
   volumes:
   - name: pvc-nas
     persistentVolumeClaim:
       claimName: nas-pvc-csi
   ```
3. Confirm that the pod restarts:
   ```
   kubectl get pod
   ```
   Expected output:
   ```
   NAME           READY   STATUS    RESTARTS   AGE
   nas-static-1   1/1     Running   0          70s
   ```
4. Verify that the NAS volume is mounted through CSI:
   ```
   kubectl exec nas-static-1 -- mount | grep nas
   ```
   If the migration succeeded, the output shows `kubernetes.io~csi` in the mount path:
   ```
   ***.***.nas.aliyuncs.com:/aliyun on /var/lib/kubelet/pods/ac02ea3f-125f-4b38-9bcf-9b117f62***/volumes/kubernetes.io~csi/nas-pv-csi/mount type nfs (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,nolock,noresvport,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.XX.XX,mountvers=3,mountport=2049,mountproto=tcp,local_lock=all,addr=192.168.XX.XX)
   ```
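When migrating many workloads, the mount check can be scripted: a CSI-managed mount path contains `kubernetes.io~csi`, anything else means the pod is not yet on CSI. A minimal sketch, fed here with a sample mount line so it runs standalone; in practice use the output of `kubectl exec <pod> -- mount | grep nas`:

```shell
# Sample line from the expected output above (abridged); substitute the
# live mount line for the pod you are checking.
mount_line='***.***.nas.aliyuncs.com:/aliyun on /var/lib/kubelet/pods/ac02ea3f-125f-4b38-9bcf-9b117f62***/volumes/kubernetes.io~csi/nas-pv-csi/mount type nfs (rw,relatime,vers=3)'

# Classify by the plug-in directory embedded in the kubelet pod path.
case "$mount_line" in
  *kubernetes.io~csi*) result="CSI mount" ;;
  *)                   result="not a CSI mount" ;;
esac
echo "$result"
```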
Step 4: Uninstall FlexVolume
1. Log on to the OpenAPI Explorer console and call the UnInstallClusterAddons operation to uninstall the FlexVolume plug-in. For more information, see Uninstall components from a cluster.
   - ClusterId: the ID of your cluster. You can find it on the Basic Information tab of the cluster details page.
   - name: set to `flexvolume`.
2. Delete the alicloud-disk-controller and alicloud-nas-controller components:
   ```
   kubectl delete deploy -n kube-system alicloud-disk-controller alicloud-nas-controller
   ```
3. Verify that FlexVolume is fully uninstalled:
   ```
   kubectl get pods -n kube-system | grep 'flexvolume\|alicloud-disk-controller\|alicloud-nas-controller'
   ```
   If no output is returned, FlexVolume has been completely removed.
4. Delete the FlexVolume StorageClasses (provisioner `alicloud/disk`):
   ```
   kubectl delete storageclass alicloud-disk-available alicloud-disk-efficiency alicloud-disk-essd alicloud-disk-ssd
   ```
   Expected output:
   ```
   storageclass.storage.k8s.io "alicloud-disk-available" deleted
   storageclass.storage.k8s.io "alicloud-disk-efficiency" deleted
   storageclass.storage.k8s.io "alicloud-disk-essd" deleted
   storageclass.storage.k8s.io "alicloud-disk-ssd" deleted
   ```
Step 5: Install CSI through the API
1. Log on to the OpenAPI Explorer console and call the InstallClusterAddons operation to install CSI. For more information, see Install a component in an ACK cluster.
   - ClusterId: the ID of your cluster.
   - name: set to `csi-provisioner`.
   - version: the latest version is automatically specified. For version details, see csi-provisioner.
2. Verify that CSI is running:
   ```
   kubectl get pods -n kube-system | grep csi
   ```
   Expected output:
   ```
   csi-plugin-577mm                   4/4   Running   0   3d20h
   csi-plugin-k9mzt                   4/4   Running   0   41d
   csi-provisioner-6b58f46989-8wwl5   9/9   Running   0   41d
   csi-provisioner-6b58f46989-qzh8l   9/9   Running   0   6d20h
   ```
   If all pods show `Running`, CSI is installed.
Step 6: Update node configurations
Deploy a DaemonSet to set the kubelet parameter --enable-controller-attach-detach to true on all existing nodes. Delete the DaemonSet after the update completes.
This DaemonSet restarts kubelet on each node. Evaluate the impact on running applications before you proceed.
Create a file with the following content and apply it with kubectl apply -f <filename>.yaml:
```yaml
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: kubelet-set
spec:
  selector:
    matchLabels:
      app: kubelet-set
  template:
    metadata:
      labels:
        app: kubelet-set
    spec:
      tolerations:
      - operator: "Exists"
      hostNetwork: true
      hostPID: true
      containers:
      - name: kubelet-set
        securityContext:
          privileged: true
          capabilities:
            add: ["SYS_ADMIN"]
          allowPrivilegeEscalation: true
        image: registry.cn-hangzhou.aliyuncs.com/acs/csi-plugin:v1.26.5-56d1e30-aliyun
        imagePullPolicy: "Always"
        env:
        - name: enableADController
          value: "true"
        command: ["sh", "-c"]
        args:
        - echo "Starting kubelet flag set to $enableADController";
          ifFlagTrueNum=`cat /host/etc/systemd/system/kubelet.service.d/10-kubeadm.conf | grep enable-controller-attach-detach=$enableADController | grep -v grep | wc -l`;
          echo "ifFlagTrueNum is $ifFlagTrueNum";
          if [ "$ifFlagTrueNum" = "0" ]; then
              curValue="true";
              if [ "$enableADController" = "true" ]; then
                  curValue="false";
              fi;
              sed -i "s/enable-controller-attach-detach=$curValue/enable-controller-attach-detach=$enableADController/" /host/etc/systemd/system/kubelet.service.d/10-kubeadm.conf;
              restartKubelet="true";
              echo "current value is $curValue, change to expect "$enableADController;
          fi;
          if [ "$restartKubelet" = "true" ]; then
              /nsenter --mount=/proc/1/ns/mnt systemctl daemon-reload;
              /nsenter --mount=/proc/1/ns/mnt service kubelet restart;
              echo "restart kubelet";
          fi;
          while true;
          do
              sleep 5;
          done;
        volumeMounts:
        - name: etc
          mountPath: /host/etc
      volumes:
      - name: etc
        hostPath:
          path: /etc
```
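The flag-flip logic inside the DaemonSet can be tried standalone before rolling it out. The sketch below runs the same substitution against a local sample of the kubelet drop-in (on a real node the file is /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, and the restart is performed through nsenter as in the args above):

```shell
# Sample drop-in with the flag disabled, as on a node still on FlexVolume.
conf=$(mktemp)
echo 'Environment="KUBELET_EXTRA_ARGS=--enable-controller-attach-detach=false"' > "$conf"

# Same substitution the DaemonSet performs, minus the kubelet restart.
restart_needed=false
if ! grep -q 'enable-controller-attach-detach=true' "$conf"; then
  tmp=$(mktemp)
  sed 's/enable-controller-attach-detach=false/enable-controller-attach-detach=true/' "$conf" > "$tmp"
  mv "$tmp" "$conf"
  restart_needed=true
fi
flag=$(grep -o 'enable-controller-attach-detach=[a-z]*' "$conf")
echo "$flag (kubelet restart needed: $restart_needed)"
rm -f "$conf"
```

If the flag is already `true` on a node, the DaemonSet makes no change and leaves kubelet running, which is why it is safe to roll out cluster-wide.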