FlexVolume is deprecated in Container Service for Kubernetes (ACK) and is no longer supported in newly created ACK clusters. We recommend that you upgrade from FlexVolume to the Container Storage Interface (CSI) plug-in in existing ACK clusters. To perform the upgrade, uninstall FlexVolume, install the CSI plug-in, modify the node pool configurations, and then modify the configurations of existing nodes. This topic describes how to upgrade from FlexVolume to CSI in clusters where no data is stored.

Differences between CSI and FlexVolume

The following sections describe the differences between CSI and FlexVolume.

CSI

Components:

  • CSI-Provisioner (deployed as a Deployment)

    This component enables automatic volume creation, automatic snapshot creation, Container Network File System (CNFS) storage, and data restoration after accidental deletions.

  • CSI-Plugin (deployed as a DaemonSet)

    This component enables automatic volume mounting and unmounting. By default, this component supports disk volumes, Apsara File Storage NAS (NAS) volumes, and Object Storage Service (OSS) volumes.

Kubelet parameter: The kubelet parameters required by the CSI plug-in are different from those of the FlexVolume plug-in. To run the CSI plug-in, you must set enable-controller-attach-detach to true for the kubelet on each node.

References: CSI overview

FlexVolume

Components:

  • Disk-Controller (deployed as a Deployment)

    This component enables automatic volume creation.

  • FlexVolume (deployed as a DaemonSet)

    This component enables volume mounting and unmounting. By default, this component supports disk volumes, Apsara File Storage NAS (NAS) volumes, and Object Storage Service (OSS) volumes.

Kubelet parameter: The kubelet parameters required by the FlexVolume plug-in are different from those of the CSI plug-in. To run the FlexVolume plug-in, you must set enable-controller-attach-detach to false for the kubelet on each node.

References: FlexVolume overview
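You can check which mode a node is currently configured for by inspecting the kubelet systemd drop-in. The sketch below runs the check against a sample drop-in line; on a real node, run the grep in the first comment instead (the file path is the one used by the upgrade DaemonSet in Step 4 and may differ on custom images).

```shell
# On a node: grep enable-controller-attach-detach /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# Simulated here with a sample drop-in line. CSI requires the flag to be true; FlexVolume requires false.
line='Environment="KUBELET_EXTRA_ARGS=--enable-controller-attach-detach=true"'
echo "$line" | grep -o 'enable-controller-attach-detach=[a-z]*'
```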

Scenarios

The CSI plug-in is more stable and efficient than the FlexVolume plug-in. In the following scenarios, we recommend that you upgrade from FlexVolume to CSI:

  • No volume has been mounted to the cluster by using FlexVolume and no data is stored in the cluster by using FlexVolume.
  • Volumes were mounted to the cluster by using FlexVolume but the relevant data in the volumes is deleted. No data is stored in the cluster by using FlexVolume.
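Before you upgrade, you can confirm that no PersistentVolumes still reference the FlexVolume driver: a FlexVolume-backed PV carries a spec.flexVolume field in its manifest. The sketch below applies a simple text filter to a sample PV list; on a live cluster, pipe `kubectl get pv -o json` through the same filter (the sample PV name is illustrative).

```shell
# Real command: kubectl get pv -o json | grep -c '"flexVolume"'
# A count of 0 means no FlexVolume-backed PVs remain. Sample input is used here.
cat <<'EOF' | grep -c '"flexVolume"' || true
{"items":[{"metadata":{"name":"example-pv"},"spec":{"capacity":{"storage":"20Gi"}}}]}
EOF
```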

Step 1: Uninstall FlexVolume

  1. Log on to the OpenAPI Explorer platform and call the UnInstallClusterAddons operation to uninstall the FlexVolume plug-in.
    • ClusterId: Set the value to the ID of your cluster. You can view the ID of your cluster on the Basic Information page.
    • name: Set the value to Flexvolume.
    For more information, see UnInstallClusterAddons.
  2. Run the following command to delete the alicloud-disk-controller and alicloud-nas-controller components:
    kubectl delete deploy -n kube-system alicloud-disk-controller alicloud-nas-controller
  3. Run the following command to check whether the FlexVolume plug-in is uninstalled from your cluster:
    kubectl get pods -n kube-system | grep 'flexvolume\|alicloud-disk-controller\|alicloud-nas-controller'

    If the output is empty, the FlexVolume plug-in is uninstalled from your cluster.

Step 2: Install the CSI plug-in

  1. Log on to the OpenAPI Explorer platform and call the InstallClusterAddons operation to install the CSI plug-in.
    • ClusterId: Set the value to the ID of your cluster.
    • name: Set the value to csi-provisioner.
    • version: The latest version is automatically specified. For more information about CSI versions, see csi-provisioner.
    For more information, see Install a component in an ACK cluster.
  2. Run the following command to check whether the CSI plug-in runs as expected in the cluster:
    kubectl get pods -n kube-system | grep csi

    Expected output:

    csi-plugin-577mm                              4/4     Running   0          3d20h
    csi-plugin-k9mzt                              4/4     Running   0          41d
    csi-provisioner-6b58f46989-8wwl5              9/9     Running   0          41d
    csi-provisioner-6b58f46989-qzh8l              9/9     Running   0          6d20h

    If the preceding output is returned, the CSI plug-in runs as expected in the cluster.
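The "all pods Running" check in the preceding output can be scripted. The sketch below applies an awk filter to a sample pod listing and prints the number of CSI pods that are not in the Running state (0 indicates a healthy installation); on a live cluster, pipe `kubectl get pods -n kube-system` through the same filter.

```shell
# Real command: kubectl get pods -n kube-system | awk '$1 ~ /^csi-/ && $3 != "Running" {bad++} END {print bad+0}'
# Simulated here with a sample listing.
cat <<'EOF' | awk '$1 ~ /^csi-/ && $3 != "Running" {bad++} END {print bad+0}'
csi-plugin-577mm                 4/4   Running   0   3d20h
csi-plugin-k9mzt                 4/4   Running   0   41d
csi-provisioner-6b58f46989-8wwl5 9/9   Running   0   41d
EOF
```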

Step 3: Modify the configurations of all node pools in the cluster

To update the volume plug-in used by newly added nodes, modify the configuration of each node pool, for example, by adding an instance type or changing the logon password. After you modify a node pool, the system automatically updates the node initialization script in the background. This ensures that newly added nodes use the new configurations.

  1. Log on to the ACK console and click Clusters in the left-side navigation pane.
  2. On the Clusters page, click the name of a cluster and choose Nodes > Node Pools in the left-side navigation pane.
  3. On the Node Pools page, find the node pool that you want to manage and click Edit in the Actions column.
  4. On the node pool details page, use one of the following methods to modify the configurations of the node pool:
    • Add a new instance type.

      After you add a node, the system automatically updates the volume plug-in of the cluster in the background. You can delete the node after the new node pool configurations take effect.

    • Change the logon password of the node pool.
  5. Click Confirm.

Step 4: Modify the configurations of existing nodes

Create a YAML file based on the following code block. Then, deploy the YAML file to modify the kubelet parameter on which the CSI plug-in relies.

Important When you deploy the YAML file, kubelet is restarted. We recommend that you evaluate the impact on the applications before you deploy the YAML file.
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: kubelet-set
spec:
  selector:
    matchLabels:
      app: kubelet-set
  template:
    metadata:
      labels:
        app: kubelet-set
    spec:
      tolerations:
        - operator: "Exists"
      hostNetwork: true
      hostPID: true
      containers:
        - name: kubelet-set
          securityContext:
            privileged: true
            capabilities:
              add: ["SYS_ADMIN"]
            allowPrivilegeEscalation: true
          image: registry.cn-hangzhou.aliyuncs.com/acs/csi-plugin:v1.16.9.43-f36bb540-aliyun
          imagePullPolicy: "Always"
          env:
          - name: enableADController
            value: "true"
          command: ["sh", "-c"]
          args:
          - echo "Starting kubelet flag set to $enableADController";
            ifFlagTrueNum=`cat /host/etc/systemd/system/kubelet.service.d/10-kubeadm.conf | grep enable-controller-attach-detach=$enableADController | grep -v grep | wc -l`;
            echo "ifFlagTrueNum is $ifFlagTrueNum";
            if [ "$ifFlagTrueNum" = "0" ]; then
                curValue="true";
                if [ "$enableADController" = "true" ]; then
                    curValue="false";
                fi;
                sed -i "s/enable-controller-attach-detach=$curValue/enable-controller-attach-detach=$enableADController/" /host/etc/systemd/system/kubelet.service.d/10-kubeadm.conf;
                restartKubelet="true";
                echo "current value is $curValue, change to expect "$enableADController;
            fi;
            if [ "$restartKubelet" = "true" ]; then
                /nsenter --mount=/proc/1/ns/mnt systemctl daemon-reload;
                /nsenter --mount=/proc/1/ns/mnt service kubelet restart;
                echo "restart kubelet";
            fi;
            while true;
            do
                sleep 5;
            done;
          volumeMounts:
          - name: etc
            mountPath: /host/etc
      volumes:
        - name: etc
          hostPath:
            path: /etc
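The core of the DaemonSet is the sed substitution that flips the kubelet flag in the systemd drop-in. As a standalone sanity check, the sketch below performs the same substitution on a scratch copy of a sample drop-in line (inside the pod, the real file is /host/etc/systemd/system/kubelet.service.d/10-kubeadm.conf).

```shell
# Same substitution the DaemonSet runs, applied to a temporary copy of a sample drop-in.
conf=$(mktemp)
echo 'Environment="KUBELET_EXTRA_ARGS=--enable-controller-attach-detach=false"' > "$conf"
sed -i 's/enable-controller-attach-detach=false/enable-controller-attach-detach=true/' "$conf"
grep -o 'enable-controller-attach-detach=[a-z]*' "$conf"
rm -f "$conf"
```

On the cluster itself, save the DaemonSet above to a file and deploy it with kubectl apply -f followed by the file name.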

What to do next

After you upgrade from FlexVolume to CSI, you can check whether the CSI plug-in runs as expected by using it to create a dynamically provisioned disk volume. For more information, see Use a dynamically provisioned disk volume by using kubectl.
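As a quick smoke test, a minimal PersistentVolumeClaim such as the following can exercise dynamic disk provisioning. The storage class name alicloud-disk-essd and the claim name are assumptions for illustration; list the storage classes available in your cluster with kubectl get sc, and note that cloud disks enforce a minimum capacity (20 GiB for most disk categories).

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: disk-pvc-smoke-test              # hypothetical name for this test
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi                      # cloud disks enforce a minimum size
  storageClassName: alicloud-disk-essd   # assumed; check with kubectl get sc
```

If the claim reaches the Bound state, dynamic provisioning through CSI works; you can then delete the claim to release the test disk.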