Logical Volume Manager (LVM) virtualizes disks by creating volume groups (VGs) and logical volumes (LVs). You can mount LVs of local disks to pods. This topic describes how to use LVs.

Background information

You can mount hostPath volumes and local volumes to pods so that the pods can access the storage of host nodes. However, hostPath volumes and local volumes have the following restrictions:
  • Kubernetes does not manage the lifecycle of hostPath volumes and local volumes. You must manually manage their lifecycles.
  • When a local storage is mounted to multiple pods, these pods share the storage capacity under the same directory or corresponding subdirectories. Storage isolation is not supported among these pods.
  • When a local storage is mounted to multiple pods, the input/output operations per second (IOPS) and throughput of each pod equal those of the entire storage medium. You cannot configure different IOPS and throughput settings for each pod.
  • When a local storage is mounted to a pod, the available storage space of each node is unknown. As a result, the pod may be scheduled to a node whose local storage has insufficient space.
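For example, the following hypothetical pod mounts a host directory directly through a hostPath volume. Any other pod that mounts the same path shares the directory's capacity and IOPS, with no isolation between the pods:

```yaml
# Hypothetical example: a pod that mounts a host directory directly.
# Every pod that mounts /mnt/data shares the same capacity and IOPS.
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-example
spec:
  containers:
    - name: app
      image: nginx:1.7.9
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      hostPath:
        path: /mnt/data
        type: DirectoryOrCreate
```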

Alibaba Cloud Container Service for Kubernetes (ACK) uses LVM to provide a solution to fix the preceding issues.

Features

  • Lifecycle management: automatic creation, deletion, isolation, mounting, and unmounting of LVs.
  • Expansions of LVs.
  • Monitoring of LVs.
  • Restrictions on the IOPS of LVs.
  • Automatic operations and maintenance (O&M) of LVs.
  • Monitoring of the storage space of LVs.

Considerations

  • LVs are not applicable to scenarios where the high availability of data must be guaranteed.
  • The O&M of VGs and the monitoring of the storage space of LVs are currently not supported and will soon be available.

Deploy the CSI plug-in for LVs

The CSI plug-in for LVs consists of Plugin and Provisioner. Plugin is used to mount and delete LVs. Provisioner is used to create LVs and persistent volumes (PVs).
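Before you deploy the plug-in, a VG must exist on each node that provides local storage. Assuming a spare block device named /dev/vdb (a hypothetical device name) and the VG name volumegroup1 that is used in the examples in this topic, the VG can be created with the standard LVM commands:

```shell
# Create a physical volume on the spare disk (hypothetical device /dev/vdb).
pvcreate /dev/vdb
# Create the VG that the StorageClass references.
vgcreate volumegroup1 /dev/vdb
# Verify the VG and its free space.
vgs volumegroup1
```

These commands must be run as root on the node, and the device name varies by environment.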

apiVersion: storage.k8s.io/v1beta1
kind: CSIDriver
metadata:
  name: localplugin.csi.alibabacloud.com
spec:
  attachRequired: false
  podInfoOnMount: true
---
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: csi-local-plugin
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: csi-local-plugin
  template:
    metadata:
      labels:
        app: csi-local-plugin
    spec:
      tolerations:
        - operator: Exists
      serviceAccount: admin
      priorityClassName: system-node-critical
      hostNetwork: true
      hostPID: true
      containers:
        - name: driver-registrar
          image: registry.cn-hangzhou.aliyuncs.com/acs/csi-node-driver-registrar:v1.1.0
          imagePullPolicy: Always
          args:
            - "--v=5"
            - "--csi-address=/csi/csi.sock"
            - "--kubelet-registration-path=/var/lib/kubelet/csi-plugins/localplugin.csi.alibabacloud.com/csi.sock"
          env:
            - name: KUBE_NODE_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: spec.nodeName
          volumeMounts:
            - name: plugin-dir
              mountPath: /csi
            - name: registration-dir
              mountPath: /registration

        - name: csi-localplugin
          securityContext:
            privileged: true
            capabilities:
              add: ["SYS_ADMIN"]
            allowPrivilegeEscalation: true
          image: registry.cn-hangzhou.aliyuncs.com/acs/csi-plugin:v1.14.8.41-bce68b74-aliyun
          imagePullPolicy: "Always"
          args:
            - "--endpoint=$(CSI_ENDPOINT)"
            - "--v=5"
            - "--nodeid=$(KUBE_NODE_NAME)"
            - "--driver=localplugin.csi.alibabacloud.com"
          env:
            - name: KUBE_NODE_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: spec.nodeName
            - name: DRIVER_VENDOR
              value: localplugin.csi.alibabacloud.com
            - name: CSI_ENDPOINT
              value: unix://var/lib/kubelet/csi-plugins/localplugin.csi.alibabacloud.com/csi.sock
          volumeMounts:
            - name: pods-mount-dir
              mountPath: /var/lib/kubelet
              mountPropagation: "Bidirectional"
            - mountPath: /dev
              mountPropagation: "HostToContainer"
              name: host-dev
            - mountPath: /var/log/
              name: host-log
      volumes:
        - name: plugin-dir
          hostPath:
            path: /var/lib/kubelet/csi-plugins/localplugin.csi.alibabacloud.com
            type: DirectoryOrCreate
        - name: registration-dir
          hostPath:
            path: /var/lib/kubelet/plugins_registry
            type: DirectoryOrCreate
        - name: pods-mount-dir
          hostPath:
            path: /var/lib/kubelet
            type: Directory
        - name: host-dev
          hostPath:
            path: /dev
        - name: host-log
          hostPath:
            path: /var/log/
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 10%
    type: RollingUpdate

---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: csi-local-provisioner
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: csi-local-provisioner
  replicas: 2
  template:
    metadata:
      labels:
        app: csi-local-provisioner
    spec:
      tolerations:
      - operator: "Exists"
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            preference:
              matchExpressions:
              - key: node-role.kubernetes.io/master
                operator: Exists
      priorityClassName: system-node-critical
      serviceAccount: admin
      hostNetwork: true
      containers:
        - name: external-local-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/acs/csi-provisioner:v1.6.0-b6f763a43-ack
          args:
            - "--csi-address=$(ADDRESS)"
            - "--feature-gates=Topology=True"
            - "--volume-name-prefix=lvm"
            - "--strict-topology=true"
            - "--timeout=150s"
            - "--extra-create-metadata=true"
            - "--enable-leader-election=true"
            - "--leader-election-type=leases"
            - "--retry-interval-start=500ms"
            - "--v=5"
          env:
            - name: ADDRESS
              value: /socketDir/csi.sock
          imagePullPolicy: "Always"
          volumeMounts:
            - name: socket-dir
              mountPath: /socketDir
        - name: external-local-resizer
          image: registry.cn-hangzhou.aliyuncs.com/acs/csi-resizer:v0.3.0
          args:
            - "--v=5"
            - "--csi-address=$(ADDRESS)"
            - "--leader-election"
          env:
            - name: ADDRESS
              value: /socketDir/csi.sock
          imagePullPolicy: "Always"
          volumeMounts:
            - name: socket-dir
              mountPath: /socketDir/
      volumes:
        - name: socket-dir
          hostPath:
            path: /var/lib/kubelet/csi-plugins/localplugin.csi.alibabacloud.com
            type: DirectoryOrCreate

Use LVs

When you use CSI-Provisioner to create PVs, note the following limits:
  • You must specify the name of the VG in a StorageClass.
  • If you create a PV on a specified node, you must add the volume.kubernetes.io/selected-node: nodeName label to the corresponding persistent volume claim (PVC).
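For example, a PVC that requests an LV on a specific node would carry the label described above. The node name cn-hangzhou.192.168.1.1 below is a hypothetical placeholder:

```yaml
# Hypothetical example: a PVC that requests an LV on a specific node.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: lvm-pvc-pinned
  labels:
    volume.kubernetes.io/selected-node: cn-hangzhou.192.168.1.1
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: csi-local
```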
  1. Use the following template to create a StorageClass:
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
        name: csi-local
    provisioner: localplugin.csi.alibabacloud.com
    parameters:
        volumeType: LVM
        vgName: volumegroup1
        fsType: ext4
        lvmType: "striping"
    reclaimPolicy: Delete
    volumeBindingMode: WaitForFirstConsumer
    allowVolumeExpansion: true
    The following parameters are supported:
    • volumeType: the type of the volume. Set the value to LVM. Other volume types will be supported in later versions.
    • vgName: the name of the VG. This parameter is required.
    • lvmType: the type of the LV. Valid values: linear and striping.
    • fsType: the type of the file system, for example, ext4.
  2. Use the following template to create a PVC:
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: lvm-pvc
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 2Gi
      storageClassName: csi-local
  3. Use the following template to create an application:
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: deployment-lvm
      labels:
        app: nginx
    spec:
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx:1.7.9
            volumeMounts:
              - name: lvm-pvc
                mountPath: "/data"
          volumes:
            - name: lvm-pvc
              persistentVolumeClaim:
                claimName: lvm-pvc
  4. Query the status of the application.
    ## Query the status of the pod and PV.
    kubectl get pod
    # Output:
    NAME                             READY   STATUS    RESTARTS   AGE
    deployment-lvm-9f798687c-mqfht   1/1     Running   0          9s
    
    kubectl get pvc
    # Output:
    NAME      STATUS   VOLUME                                      CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    lvm-pvc   Bound    disk-afacf7a9-3d1a-45da-b443-24f8fb3599c1   2Gi        RWO            csi-local      16s
    
    kubectl get pv
    # Output:
    NAME                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM             STORAGECLASS   REASON   AGE
    disk-afacf7a9-3d1a-45da-b443-24f8fb3599c1   2Gi        RWO            Delete           Bound    default/lvm-pvc   csi-local               12s
    
    ## Access the pod and create a file in the LV to verify the mount.
    kubectl exec -ti deployment-lvm-9f798687c-mqfht -- sh
    df /data
    # Output:
    Filesystem                                                              1K-blocks  Used Available Use% Mounted on
    /dev/mapper/volumegroup1-disk--afacf7a9--3d1a--45da--b443--24f8fb3599c1   1998672  6144   1976144   1% /data
    
    ls /data
    # Output:
    lost+found
    
    touch /data/test
    ls /data
    # Output:
    lost+found  test
    
    exit
    
    ## Delete and redeploy the pod to test whether the file still exists.
    kubectl delete pod deployment-lvm-9f798687c-mqfht
    # Output:
    pod "deployment-lvm-9f798687c-mqfht" deleted
    
    kubectl get pod
    # Output:
    NAME                             READY   STATUS    RESTARTS   AGE
    deployment-lvm-9f798687c-jsdnk   1/1     Running   0          2m19s
    
    kubectl exec deployment-lvm-9f798687c-jsdnk -- ls /data
    # Output:
    lost+found
    test
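To confirm the LV on the node side, you can log on to the node that hosts the pod and inspect the VG with the standard LVM commands. The LV name matches the PV name shown in the preceding output:

```shell
# List the LVs in the VG; the LV created for the PVC appears here.
lvs volumegroup1
# Show the details of the specific LV (name taken from the PV above).
lvdisplay /dev/volumegroup1/disk-afacf7a9-3d1a-45da-b443-24f8fb3599c1
```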
  5. Expand the LV.
    ## Modify the PVC to expand the LV from 2 GiB to 4 GiB.
    kubectl get pvc
    # Output:
    NAME      STATUS   VOLUME                                      CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    lvm-pvc   Bound    disk-afacf7a9-3d1a-45da-b443-24f8fb3599c1   2Gi        RWO            csi-local      6m50s
    
    ## Run the expansion command:
    kubectl patch pvc lvm-pvc -p '{"spec":{"resources":{"requests":{"storage":"4Gi"}}}}'
    # Output:
    persistentvolumeclaim/lvm-pvc patched
    
    ## It takes from several seconds to tens of seconds to expand an LV. Run the following command to query the PVC:
    kubectl get pvc
    # Output:
    NAME      STATUS   VOLUME                                      CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    lvm-pvc   Bound    disk-afacf7a9-3d1a-45da-b443-24f8fb3599c1   4Gi        RWO            csi-local      7m26s
    
    ## Check the file system size in the pod. The output indicates that the LV is expanded from 2 GiB to 4 GiB:
    kubectl exec deployment-lvm-9f798687c-jsdnk -- df /data
    # Output:
    Filesystem                                                              1K-blocks  Used Available Use% Mounted on
    /dev/mapper/volumegroup1-disk--afacf7a9--3d1a--45da--b443--24f8fb3599c1   4062912  8184   4038344   1% /data
  6. Monitor the LV.
    Log on to the node where the pod is deployed and run the following command:
    curl -s localhost:10255/metrics | grep lvm-pvc
    kubelet_volume_stats_available_bytes{namespace="default",persistentvolumeclaim="lvm-pvc"} 1.917165568e+09
    kubelet_volume_stats_capacity_bytes{namespace="default",persistentvolumeclaim="lvm-pvc"} 1.939816448e+09
    kubelet_volume_stats_inodes{namespace="default",persistentvolumeclaim="lvm-pvc"} 122400
    kubelet_volume_stats_inodes_free{namespace="default",persistentvolumeclaim="lvm-pvc"} 122389
    kubelet_volume_stats_inodes_used{namespace="default",persistentvolumeclaim="lvm-pvc"} 11
    kubelet_volume_stats_used_bytes{namespace="default",persistentvolumeclaim="lvm-pvc"} 5.873664e+06
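The used and capacity gauges can be combined into a usage percentage. The following is a minimal sketch that plugs in the sample values from the output above:

```shell
# Compute the usage percentage of the PVC from the sample metrics above.
used=5873664          # kubelet_volume_stats_used_bytes
capacity=1939816448   # kubelet_volume_stats_capacity_bytes
awk -v u="$used" -v c="$capacity" 'BEGIN { printf "Usage: %.2f%%\n", u / c * 100 }'
# Prints: Usage: 0.30%
```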

    The preceding monitoring data can be imported to the Prometheus Monitoring service in the Cloud Monitor console. For more information, see Deploy Prometheus to a Kubernetes cluster.