
Container Service for Kubernetes: Upgrade from FlexVolume to CSI for clusters where no data is stored

Last Updated: Jul 08, 2025

The FlexVolume plug-in is deprecated. New Container Service for Kubernetes (ACK) clusters no longer support FlexVolume. For existing clusters, we recommend that you upgrade from FlexVolume to Container Storage Interface (CSI). To perform the upgrade, uninstall FlexVolume, install the CSI plug-in, modify the node pool configurations, and then modify the configurations of existing nodes. This topic describes how to upgrade from FlexVolume to CSI for clusters where no data is stored.

Differences between FlexVolume and CSI

The following comparison describes the differences between CSI and FlexVolume. Note that the two plug-ins require different kubelet parameters.

CSI

  • Components:

    • CSI-Provisioner (deployed as a Deployment): enables automatic volume creation and automatic snapshot creation. This component also supports data restoration after accidental deletion, CNFS storage, and other features.

    • CSI-Plugin (deployed as a DaemonSet): enables automatic volume mounting and unmounting. Multiple storage types are supported.

  • kubelet parameter: To run the CSI plug-in, you must set the kubelet parameter enable-controller-attach-detach to true.

  • References: Storage

FlexVolume

  • Components:

    • Disk-Controller (deployed as a Deployment): enables automatic volume creation.

    • FlexVolume (deployed as a DaemonSet): enables volume mounting and unmounting.

  • kubelet parameter: To run the FlexVolume plug-in, you must set the kubelet parameter enable-controller-attach-detach to false.

  • References: FlexVolume overview

Scenarios

The CSI plug-in is more stable and efficient than the FlexVolume plug-in. We recommend that you upgrade from FlexVolume to CSI in the following scenarios:

  • No volume has been mounted to the cluster by using FlexVolume and no data is stored in the cluster by using FlexVolume.

  • Volumes were previously mounted to the cluster by using FlexVolume, but the data in those volumes has been deleted. No data is stored in the cluster by using FlexVolume.

Step 1: Uninstall FlexVolume

  1. Log on to the OpenAPI platform and call UnInstallClusterAddons to uninstall the FlexVolume plug-in.

    • ClusterId: Set the value to the ID of your cluster. You can view the cluster ID on the Basic Information page of your cluster.

    • name: Set the value to Flexvolume.

    For more information, see Uninstall components from a cluster.

  2. Run the following command to delete the alicloud-disk-controller and alicloud-nas-controller components:

    kubectl delete deploy -n kube-system alicloud-disk-controller alicloud-nas-controller
  3. Run the following command to check whether the FlexVolume plug-in is uninstalled from your cluster:

    kubectl get pods -n kube-system | grep 'flexvolume\|alicloud-disk-controller\|alicloud-nas-controller'

    If the output is empty, the FlexVolume plug-in is uninstalled from your cluster.
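If you want to script this check, note that grep exits with a non-zero status when it finds no matches. The following sketch applies the same filter to sample output instead of a live cluster; in practice, pipe the real `kubectl get pods -n kube-system` output through the same pattern:

```shell
# Sketch: the same filter applied to sample "kubectl get pods -n kube-system"
# output. No match (grep exit status 1) means no FlexVolume-related pods remain.
sample='csi-plugin-577mm                 4/4   Running   0   3d
coredns-5f4b6bd6c7-abcde         1/1   Running   0   9d'
if echo "$sample" | grep -q 'flexvolume\|alicloud-disk-controller\|alicloud-nas-controller'; then
  echo "FlexVolume components still present"
else
  echo "FlexVolume components removed"
fi
```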

  4. Run the following command to delete the StorageClasses that use FlexVolume from the cluster. A StorageClass uses FlexVolume if its provisioner is alicloud/disk.

    kubectl delete storageclass alicloud-disk-available alicloud-disk-efficiency alicloud-disk-essd alicloud-disk-ssd

    Expected output:

    storageclass.storage.k8s.io "alicloud-disk-available" deleted
    storageclass.storage.k8s.io "alicloud-disk-efficiency" deleted
    storageclass.storage.k8s.io "alicloud-disk-essd" deleted
    storageclass.storage.k8s.io "alicloud-disk-ssd" deleted

    If the preceding output is displayed, the StorageClasses are deleted from your cluster.

Step 2: Install the CSI plug-in

  • If you use the csi-compatible-controller plug-in in a cluster, a CSI plug-in is already installed in the cluster. However, this is a customized CSI plug-in instead of a standard CSI plug-in. Run the following commands to delete the customized CSI plug-in before you install the standard CSI plug-in:

    kubectl delete deploy csi-provisioner -n kube-system
    kubectl delete ds csi-plugin -n kube-system
    kubectl delete csidriver diskplugin.csi.alibabacloud.com nasplugin.csi.alibabacloud.com ossplugin.csi.alibabacloud.com
    Note

    When you delete the customized CSI plug-in from the cluster, the existing pods are not affected. However, after the deletion, no changes can be made to pods in the cluster until the standard CSI plug-in is installed.

  • If you do not use the csi-compatible-controller plug-in in the cluster, you can call an API operation to install the standard CSI plug-in.

  1. Log on to the OpenAPI platform and call InstallClusterAddons to install the CSI plug-in.

    • ClusterId: Set the value to the ID of your cluster.

    • name: Set the value to csi-provisioner.

    • version: The latest version is automatically specified. For more information about CSI versions, see csi-provisioner.

    For more information, see Install a component in an ACK cluster.

  2. Run the following command to check whether the CSI plug-in runs as expected in your cluster:

    kubectl get pods -n kube-system | grep csi

    Expected output:

    csi-plugin-577mm                              4/4     Running   0          3d20h
    csi-plugin-k9mzt                              4/4     Running   0          41d
    csi-provisioner-6b58f46989-8wwl5              9/9     Running   0          41d
    csi-provisioner-6b58f46989-qzh8l              9/9     Running   0          6d20h

    If the preceding output is returned, the CSI plug-in runs as expected in the cluster.

Step 3: Modify the configurations of all node pools in the cluster

The configurations of a node pool change when the volume plug-in of the cluster changes. After you install the new standard CSI plug-in, the configurations of existing node pools are not automatically updated. You must manually modify the node pool configurations to trigger an update. If the update is successful, the kubelet parameter --enable-controller-attach-detach for newly created nodes in the node pool is changed from false to true.

Important

Manually modifying the node pool configuration triggers the kubelet to restart. We recommend that you perform this operation during off-peak hours and make sure that the update on one node pool is correct before you update the other node pools.

To trigger the update of the volume plug-in, modify the configuration of each node pool, for example, by adding an instance type or changing the logon password. The system then automatically updates the node initialization script in the background so that newly added nodes use the new configuration.

Note

Alternatively, you can create a new node pool and scale in all nodes in the original node pools until the old node pools are deleted. Then, directly use the new node pool. If you use this method, you do not need to perform the following steps.

  1. Log on to the ACK console. In the navigation pane on the left, click Clusters.

  2. On the Clusters page, find the cluster to manage and click its name. In the left-side navigation pane, choose Nodes > Node Pools.

  3. On the Node Pools page, find the node pool that you want to modify and click Edit in the Actions column.

  4. In the dialog box that appears, modify the instance type configuration of the node pool, and then click Confirm.

    Note

    This modification is only to trigger the update of the volume plug-in type in the background. After the node pool configuration takes effect, you can change it back to the original configuration.

Step 4: Modify the configurations of existing nodes

Deploy the following YAML template to make the kubelet configuration of existing nodes compatible with the CSI plug-in. The DaemonSet changes the kubelet parameter --enable-controller-attach-detach on existing nodes to true. After the change is complete on all nodes, you can delete the DaemonSet.

Important

When you deploy the YAML file, kubelet is restarted. We recommend that you evaluate the impact on the applications before you deploy the YAML file.

kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: kubelet-set
spec:
  selector:
    matchLabels:
      app: kubelet-set
  template:
    metadata:
      labels:
        app: kubelet-set
    spec:
      tolerations:
        - operator: "Exists"
      hostNetwork: true
      hostPID: true
      containers:
        - name: kubelet-set
          securityContext:
            privileged: true
            capabilities:
              add: ["SYS_ADMIN"]
            allowPrivilegeEscalation: true
          image: registry.cn-hangzhou.aliyuncs.com/acs/csi-plugin:v1.26.5-56d1e30-aliyun
          imagePullPolicy: "Always"
          env:
          - name: enableADController
            value: "true"
          command: ["sh", "-c"]
          args:
          - echo "Starting kubelet flag set to $enableADController";
            ifFlagTrueNum=`cat /host/etc/systemd/system/kubelet.service.d/10-kubeadm.conf | grep enable-controller-attach-detach=$enableADController | grep -v grep | wc -l`;
            echo "ifFlagTrueNum is $ifFlagTrueNum";
            if [ "$ifFlagTrueNum" = "0" ]; then
                curValue="true";
                if [ "$enableADController" = "true" ]; then
                    curValue="false";
                fi;
                sed -i "s/enable-controller-attach-detach=$curValue/enable-controller-attach-detach=$enableADController/" /host/etc/systemd/system/kubelet.service.d/10-kubeadm.conf;
                restartKubelet="true";
                echo "current value is $curValue, change to expect "$enableADController;
            fi;
            if [ "$restartKubelet" = "true" ]; then
                /nsenter --mount=/proc/1/ns/mnt systemctl daemon-reload;
                /nsenter --mount=/proc/1/ns/mnt service kubelet restart;
                echo "restart kubelet";
            fi;
            while true;
            do
                sleep 5;
            done;
          volumeMounts:
          - name: etc
            mountPath: /host/etc
      volumes:
        - name: etc
          hostPath:
            path: /etc
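
To apply the template, save it to a file (kubelet-set.yaml is a hypothetical name) and deploy it with kubectl apply -f kubelet-set.yaml; once all DaemonSet pods have run, you can delete the DaemonSet with kubectl delete ds kubelet-set. The sed substitution at the core of the script can be replayed locally against a scratch file, as a sketch of what it changes on each node:

```shell
# Hedged sketch: replays the DaemonSet's sed substitution against a scratch
# file instead of /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.
enableADController=true
conf=$(mktemp)
printf 'ExecStart=/usr/bin/kubelet --enable-controller-attach-detach=false\n' > "$conf"

# Same logic as the DaemonSet: determine the current value and flip it.
curValue=false
if [ "$enableADController" != "true" ]; then curValue=true; fi
sed -i "s/enable-controller-attach-detach=$curValue/enable-controller-attach-detach=$enableADController/" "$conf"

cat "$conf"   # the flag now reads --enable-controller-attach-detach=true
```

On a live node, kubelet also records this setting in the node annotation volumes.kubernetes.io/controller-managed-attach-detach, which it sets to true when the flag is enabled, so you can inspect node annotations to confirm the result.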

What to do next

After you migrate from FlexVolume to CSI, you can check whether the CSI plug-in runs as expected by using it to create a dynamically provisioned disk volume. For more information, see Use a dynamically provisioned disk volume.
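
As a quick smoke test, you can create a StorageClass backed by the CSI disk driver (whose name, diskplugin.csi.alibabacloud.com, matches the CSIDriver object mentioned earlier) and a PVC that references it. This is a sketch: the StorageClass and PVC names are hypothetical, and the disk category in parameters (cloud_essd) is an assumption; choose a category available in your region.

```yaml
# Sketch: a StorageClass for the CSI disk driver plus a PVC that uses it.
# The "type" parameter (cloud_essd) is an assumption; adjust as needed.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: alicloud-disk-essd-csi
provisioner: diskplugin.csi.alibabacloud.com
parameters:
  type: cloud_essd
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: disk-pvc-test
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: alicloud-disk-essd-csi
  resources:
    requests:
      storage: 20Gi
```

If the PVC is bound (or, with WaitForFirstConsumer, bound after a pod that uses it is scheduled), dynamic provisioning through CSI is working.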