After you use volume groups (VGs) to virtualize disks, you can use Logical Volume Manager (LVM) to divide the VGs into logical volumes (LVs) and mount the LVs to pods. This topic describes how to use LVs.
Background Information
To enable pods to use the storage of a node, you can use hostPath volumes or local volumes. However, these volume types have the following limits. To address them, you can use LVs in Container Service for Kubernetes (ACK) clusters. The following list describes the limits of hostPath volumes and local volumes:
Kubernetes does not manage the lifecycle of hostPath volumes and local volumes. You must manually manage and maintain the volumes.
When multiple pods use the same local storage, these pods share the same directory or each pod uses a subdirectory. As a result, storage isolation cannot be implemented among these pods.
When multiple pods use the same local storage, the input/output operations per second (IOPS) and throughput of each pod equal those of the entire storage. You cannot limit the IOPS and throughput of each pod.
When you create a pod that uses local storage, the storage available on each node is unknown to the scheduler. As a result, pods that use the volumes cannot be scheduled properly.
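For example, the following minimal hostPath sketch (the pod name and host path are hypothetical) illustrates the isolation problem: a second pod with the same hostPath spec would see the same files under /data and compete for the same IOPS and throughput, with no capacity accounting.

apiVersion: v1
kind: Pod
metadata:
  name: writer-a                # hypothetical name
spec:
  containers:
  - name: app
    image: nginx:1.7.9
    volumeMounts:
    - name: shared
      mountPath: /data
  volumes:
  - name: shared
    hostPath:
      path: /mnt/shared         # any pod that mounts this host path shares it
      type: DirectoryOrCreate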
Introduction
LVs in ACK clusters provide the following features:
Lifecycle management of LVs: automatic creation, deletion, mounting, and unmounting.
Expansion of LVs.
Monitoring of LVs.
IOPS limiting of LVs.
Automatic operations and maintenance of VGs. This enables you to manage the local storage of nodes.
Storage usage monitoring for clusters that use LVs.
Usage notes
LVs cannot be migrated. Therefore, LVs are not suitable for high availability scenarios.
Storage usage monitoring for clusters that use LVs is an optional feature. To initialize VGs, you can initialize local storage resources either manually or automatically. Both methods require you to be familiar with local storage resources. If you are not, we recommend that you use cloud storage resources instead, such as disks and file systems managed by Container Network File System (CNFS).
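For reference, manual initialization comes down to standard LVM commands on each node. The following is a minimal sketch, assuming an unused data disk at /dev/vdb (a hypothetical device name; check with lsblk before you run anything):

# Inspect block devices and identify an unused data disk.
lsblk
# Mark the disk as an LVM physical volume.
pvcreate /dev/vdb
# Create the VG that your StorageClass will reference through vgName.
vgcreate volumegroup1 /dev/vdb
# Confirm that the VG exists and has free capacity.
vgs volumegroup1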
Architecture
Basic LVM features, such as LV lifecycle management, expansion, mounting, and formatting, are implemented by CSI-Provisioner and CSI-Plugin. The following table describes the other components in the architecture.
Component | Description
Storage manager | Manages local storage resources on the node and monitors storage usage. You can also use this component to manage and maintain VGs.
Custom resource definition (CRD) | Stores local storage information of the node, such as the storage capacity and VGs.
LV scheduler | Schedules persistent volume claims (PVCs) based on the monitored storage capacity of the cluster.
Step 1: Grant CSI-Plugin and CSI-Provisioner the RBAC permissions to manage Secrets
Kubernetes 1.22 and later
Use the following YAML content to create a ServiceAccount and grant permissions to the ServiceAccount:
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: alibaba-cloud-csi-local
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: alibaba-cloud-csi-local
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "update", "create", "delete", "patch"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims/status"]
    verbs: ["get", "list", "watch", "update", "patch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["csinodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "update", "patch", "list", "watch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["volumeattachments"]
    verbs: ["get", "list", "watch", "update", "patch"]
  - apiGroups: ["coordination.k8s.io"]
    resources: ["leases"]
    verbs: ["get", "create", "list", "watch", "delete", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["volumeattachments/status"]
    verbs: ["patch"]
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: alibaba-cloud-csi-local
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: alibaba-cloud-csi-local
subjects:
  - kind: ServiceAccount
    name: alibaba-cloud-csi-local
    namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: alibaba-cloud-csi-local
  namespace: kube-system
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["csi-local-plugin-cert"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["create"]
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["csi-plugin", "ack-cluster-profile"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: alibaba-cloud-csi-local
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: alibaba-cloud-csi-local
subjects:
  - kind: ServiceAccount
    name: alibaba-cloud-csi-local
    namespace: kube-system

Kubernetes 1.20 and earlier
To share the same ServiceAccount with CSI-Plugin, you must grant clusterrole/alicloud-csi-plugin the role-based access control (RBAC) permissions to manage Secrets.
Run the following command to check whether clusterrole/alicloud-csi-plugin has the permissions to create Secrets:

echo `JSONPATH='{range .rules[*]}{@.resources}:{@.verbs} \r\n {end}' \
&& kubectl get clusterrole alicloud-csi-plugin -o jsonpath="$JSONPATH";` | grep secrets

Expected output:

["secrets"]:["get","list"]

If clusterrole/alicloud-csi-plugin does not have the permissions to create Secrets, run the following command to grant the permissions:

kubectl patch clusterrole alicloud-csi-plugin --type='json' -p='[{"op": "add", "path": "/rules/0", "value":{ "apiGroups": [""], "resources": ["secrets"], "verbs": ["create"]}}]'

Expected output:

clusterrole.rbac.authorization.k8s.io/alicloud-csi-plugin patched

Run the following command again to check whether clusterrole/alicloud-csi-plugin has the permissions to create Secrets:

echo `JSONPATH='{range .rules[*]}{@.resources}:{@.verbs} \r\n {end}' \
&& kubectl get clusterrole alicloud-csi-plugin -o jsonpath="$JSONPATH";` | grep secrets

Expected output:

["secrets"]:["create"] ["secrets"]:["get","list"]

The output indicates that clusterrole/alicloud-csi-plugin has the permissions to create Secrets.
Step 2: Deploy CSI-Plugin and CSI-Provisioner
CSI components for LVs consist of CSI-Plugin and CSI-Provisioner. CSI-Plugin is used to mount and unmount LVs. CSI-Provisioner is used to create LVs and persistent volumes (PVs).
Replace {{ regionId }} in the following YAML content with the region ID of your cluster.
Kubernetes 1.22 and later
Kubernetes 1.20 and earlier
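As a rough sketch of the workflow, assuming you saved the YAML from the tab that matches your Kubernetes version as csi-local.yaml (a hypothetical file name) and that your cluster runs in cn-hangzhou (an example region ID):

# Substitute the region ID placeholder.
sed -i 's/{{ regionId }}/cn-hangzhou/g' csi-local.yaml
# Deploy CSI-Plugin and CSI-Provisioner.
kubectl apply -f csi-local.yaml
# Verify that the component pods are running. Exact pod names depend on the manifest.
kubectl get pods -n kube-system | grep csi-local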
Step 3: Use LVs
When you use CSI-Provisioner to create PVs, take note of the following limits:
You must specify the name of the VG in a StorageClass.
If you want to create a PV on a specified node, you must add the volume.kubernetes.io/selected-node: nodeName annotation to the related PVC.
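For example, the following hedged PVC sketch pins volume creation to a specific node. The PVC name and node name are placeholders; look up real node names with kubectl get nodes:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: lvm-pvc-node                                      # hypothetical name
  annotations:
    # Replace the value with the name of the target node.
    volume.kubernetes.io/selected-node: cn-hangzhou.192.168.XX.XX
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: csi-local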
Use the following YAML template to create a StorageClass:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-local
provisioner: localplugin.csi.alibabacloud.com
parameters:
  volumeType: LVM
  vgName: volumegroup1
  fsType: ext4
  lvmType: "striping"
  writeIOPS: "10000"
  writeBPS: "1M"
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true

Parameter | Description
volumeType | The type of the volume. A value of LVM indicates an LV.
vgName | The name of the VG. This parameter is required.
fsType | The type of the file system.
lvmType | The type of the LV. Valid values: linear and striping.
writeIOPS | The write IOPS of an LV that is created by using the StorageClass.
writeBPS | The amount of data that can be written per second to an LV that is created by using the StorageClass. Unit: bytes.
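For context on lvmType: the provisioner creates the LVs for you, but the two modes correspond roughly to the following LVM commands (illustration only, with hypothetical LV names). A linear LV allocates extents sequentially; a striped LV spreads writes across multiple physical volumes in the VG for higher throughput:

# Linear LV: extents are allocated from one physical volume after another.
lvcreate -n lv-linear -L 2G volumegroup1
# Striped LV across 2 physical volumes (the VG must contain at least 2 PVs).
lvcreate -n lv-striped -L 2G -i 2 volumegroup1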
Use the following YAML template to create a PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: lvm-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: csi-local

Use the following YAML template to create an application:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-lvm
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        volumeMounts:
        - name: lvm-pvc
          mountPath: "/data"
      volumes:
      - name: lvm-pvc
        persistentVolumeClaim:
          claimName: lvm-pvc

Query information about the PVC and persistent volume (PV) used by the application.
Run the following command to query information about the PVC:

kubectl get pvc

Expected output:

NAME      STATUS   VOLUME                                      CAPACITY   ACCESS MODES   STORAGECLASS   AGE
lvm-pvc   Bound    disk-afacf7a9-3d1a-45da-b443-24f8fb35****   2Gi        RWO            csi-local      16s

Run the following command to query information about the PV:

kubectl get pv

Expected output:

NAME                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM             STORAGECLASS   REASON   AGE
disk-afacf7a9-3d1a-45da-b443-24f8fb35****   2Gi        RWO            Delete           Bound    default/lvm-pvc   csi-local               12s
Query the status of the application.
Run the following command to query information about the pods created for the application:

kubectl get pod

Expected output:

NAME                             READY   STATUS    RESTARTS   AGE
deployment-lvm-9f798687c-m****   1/1     Running   0          9s

Run the following command to log on to the pod:

kubectl exec -ti deployment-lvm-9f798687c-m**** -- sh

Run the following command in the pod to query the volume that is mounted to the pod:

df /data

Expected output:

Filesystem                                                                1K-blocks  Used  Available  Use%  Mounted on
/dev/mapper/volumegroup1-disk--afacf7a9--3d1a--45da--b443--24f8fb35****   1998672    6144  1976144    1%    /data

Run the following command to list the content of the /data directory:

ls /data

Expected output:

lost+found
Test whether LVs can be used to persist data.
Run the following command to create a file named test in the /data directory:

touch /data/test
ls /data

Expected output:

lost+found  test

Run the following command to exit the pod:

exit

Run the following command to delete the pod:

kubectl delete pod deployment-lvm-9f798687c-m****

Expected output:

pod "deployment-lvm-9f798687c-m****" deleted

Run the following command to query information about the pods created for the application:

kubectl get pod

Expected output:

NAME                             READY   STATUS    RESTARTS   AGE
deployment-lvm-9f798687c-j****   1/1     Running   0          2m19s

Run the following command to query the volume that is mounted to the new pod:

kubectl exec deployment-lvm-9f798687c-j**** -- ls /data

Expected output:

lost+found  test

The test file still exists, which indicates that the data is persisted to the LV.
Expand the LV.
Run the following command to query information about the PVC:

kubectl get pvc

Expected output:

NAME      STATUS   VOLUME                                      CAPACITY   ACCESS MODES   STORAGECLASS   AGE
lvm-pvc   Bound    disk-afacf7a9-3d1a-45da-b443-24f8fb35****   2Gi        RWO            csi-local      6m50s

Run the following command to expand the capacity requested by the PVC to 4 GiB:

kubectl patch pvc lvm-pvc -p '{"spec":{"resources":{"requests":{"storage":"4Gi"}}}}'

Expected output:

persistentvolumeclaim/lvm-pvc patched

Run the following command to query information about the PVC:

kubectl get pvc

Expected output:

NAME      STATUS   VOLUME                                      CAPACITY   ACCESS MODES   STORAGECLASS   AGE
lvm-pvc   Bound    disk-afacf7a9-3d1a-45da-b443-24f8fb35****   4Gi        RWO            csi-local      7m26s

Run the following command to check whether the capacity of the LV is expanded to 4 GiB:

kubectl exec deployment-lvm-9f798687c-j**** -- df /data

Expected output:

Filesystem                                                                1K-blocks  Used  Available  Use%  Mounted on
/dev/mapper/volumegroup1-disk--afacf7a9--3d1a--45da--b443--24f8fb35****   4062912    8184  4038344    1%    /data
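If you also want to confirm the expansion at the LVM layer, a quick check on the node might look like the following (illustration only; the LV name matches the PV name):

# The LSize column should now show 4.00g for the expanded LV.
lvs volumegroup1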
Monitor the LV.
Run the following command on the node that hosts the pod to query the monitoring data of the LV:

curl -s localhost:10255/metrics | grep lvm-pvc

Expected output:

kubelet_volume_stats_available_bytes{namespace="default",persistentvolumeclaim="lvm-pvc"} 1.917165568e+09
kubelet_volume_stats_capacity_bytes{namespace="default",persistentvolumeclaim="lvm-pvc"} 1.939816448e+09
kubelet_volume_stats_inodes{namespace="default",persistentvolumeclaim="lvm-pvc"} 122400
kubelet_volume_stats_inodes_free{namespace="default",persistentvolumeclaim="lvm-pvc"} 122389
kubelet_volume_stats_inodes_used{namespace="default",persistentvolumeclaim="lvm-pvc"} 11
kubelet_volume_stats_used_bytes{namespace="default",persistentvolumeclaim="lvm-pvc"} 5.873664e+06

The preceding monitoring data can be imported to Prometheus and displayed in the console. For more information, see Use open source Prometheus to monitor an ACK cluster.
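Once Prometheus scrapes the kubelet metrics shown above, volume usage can be derived from the used and capacity series. The following is a minimal, hypothetical alerting-rule sketch (Prometheus rule-file YAML; the group name, threshold, and labels are placeholders):

groups:
- name: lvm-volume-usage
  rules:
  - alert: LVMVolumeAlmostFull
    # Fires when a PVC-backed volume stays more than 90% full for 5 minutes.
    expr: |
      kubelet_volume_stats_used_bytes / kubelet_volume_stats_capacity_bytes > 0.9
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "PVC {{ $labels.persistentvolumeclaim }} in namespace {{ $labels.namespace }} is over 90% full"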