You can manually initialize local storage resources in a Kubernetes cluster or use Ansible to initialize the local storage resources in batches. The process of initializing local storage resources is complex, especially in large clusters. The node-resource-manager component can automatically initialize and update local storage resources on a node based on a ConfigMap. This topic describes how to use node-resource-manager to automatically initialize local storage resources on nodes in a Kubernetes cluster.
Background information
The process of initializing local storage resources in a Kubernetes cluster is a challenge. You can manually initialize local storage resources or use Ansible to initialize the local storage resources in batches. However, the following issues exist:
It is complex to perform a custom deployment by using Ansible based on Kubernetes node information. You must first install kubelet on the node, run shell commands, and then manually parse the command output.
In clusters that contain a large number of nodes, it is difficult to manually initialize local storage resources.
Initialized storage resources cannot be automatically maintained over time. You must manually log on to each node to update resources. In addition, the usage information of initialized storage resources is not reported to the Kubernetes control plane, which affects pod scheduling.
The node-resource-manager component can automatically initialize local storage resources and report the usage information of local storage resources to the Kubernetes control plane. This way, the scheduler can allocate local storage resources to pods based on the reported usage information. For more information, see node-resource-manager.
Step 1: Create a ConfigMap to specify the nodes on which you want to initialize local storage resources
You can configure the following parameters in a ConfigMap to specify the nodes on which you want to initialize local storage resources:
key: kubernetes.io/hostname
operator: In
value: xxxxx
The following table describes the parameters.
Parameter | Description
key | The key of the node label that is used to select nodes. Example: kubernetes.io/hostname.
operator | The operator that is used in the label selector. Valid values: In, NotIn, and Exists. The operators follow Kubernetes label selector semantics.
value | The value of the node label. A node is selected for initialization only if its label matches the key, operator, and value that you specify.
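For example, you can list the labels of the nodes in your cluster to find values for the key and value parameters. The following commands use only standard kubectl features; the label value is a placeholder taken from the examples below:
kubectl get nodes --show-labels
# Preview which nodes a selector matches before you write it into the ConfigMap.
kubectl get nodes -l 'kubernetes.io/hostname in (cn-zhangjiakou.192.168.XX.XX)'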
Use Logical Volume Manager (LVM) or QuotaPath to define the local storage resources on the node.
Use LVM to define the resource topology
You can use one of the following methods to define the resource topology:
type: device: node-resource-manager creates a volume group that is provisioned based on the block storage devices specified in the devices parameter. The volume group is named based on the name parameter. When an application that requests a logical volume (LV) is started, the LV can be allocated from the volume group.
type: alibabacloud-local-disk: node-resource-manager creates a volume group based on all local disks of the host. The volume group is named based on the name parameter. To use this method, you must deploy the host on an Elastic Compute Service (ECS) instance that is equipped with local disks.
Important: Block storage devices that are manually attached to ECS instances of the i2 instance family with local SSDs are cloud disks and are not considered local disks.
type: pmem: node-resource-manager creates a volume group based on the persistent memory (PMEM) resources on the host. The volume group is named based on the name parameter. You can configure the regions parameter to specify the regions to which the PMEM resources belong.
Define the resource topology based on the following YAML template:
apiVersion: v1
kind: ConfigMap
metadata:
  name: node-resource-topo
  namespace: kube-system
data:
  volumegroup: |-
    volumegroup:
    - name: volumegroup1
      key: kubernetes.io/hostname
      operator: In
      value: cn-zhangjiakou.192.168.XX.XX
      topology:
        type: device
        devices:
        - /dev/vdb
        - /dev/vdc
    - name: volumegroup2
      key: kubernetes.io/nodetype
      operator: NotIn
      value: localdisk
      topology:
        type: alibabacloud-local-disk
    - name: volumegroup3
      key: kubernetes.io/hostname
      operator: Exists
      value: cn-beijing.192.168.XX.XX
      topology:
        type: pmem
        regions:
        - region0
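After you complete Step 2 and node-resource-manager has processed this ConfigMap, you can check the result on a matching node. The following commands are a sketch that assumes you can log on to the node and that the LVM command-line tools are installed; volumegroup1 and the device paths come from the example above:
# Run on a node that matches the volumegroup1 selector.
vgs volumegroup1   # The volume group named by the name parameter should exist.
pvs                # /dev/vdb and /dev/vdc should be listed as physical volumes.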
Use QuotaPath to define the resource topology
You can use one of the following methods to define the resource topology:
type: device: node-resource-manager initializes QuotaPath volumes based on the block storage devices on the host. The QuotaPath volume is mounted to the path specified in the name parameter.
type: pmem: node-resource-manager initializes QuotaPath volumes based on the PMEM resources on the host. The QuotaPath volume is mounted to the path specified in the name parameter.
Define the resource topology based on the following YAML template:
apiVersion: v1
kind: ConfigMap
metadata:
  name: node-resource-topo
  namespace: kube-system
data:
  quotapath: |-
    quotapath:
    - name: /mnt/path1
      key: kubernetes.io/hostname
      operator: In
      value: cn-beijing.192.168.XX.XX
      topology:
        type: device
        options: prjquota
        fstype: ext4
        devices:
        - /dev/vdb
    - name: /mnt/path2
      key: kubernetes.io/hostname
      operator: In
      value: cn-beijing.192.168.XX.XX
      topology:
        type: pmem
        options: prjquota,shared
        fstype: ext4
        regions:
        - region0
The following table describes the parameters.
Parameter | Description
options | The options for mounting block storage devices.
fstype | The file system type that is used to format the block storage devices. Default value: ext4.
devices | The block storage devices to be mounted. If you specify multiple block storage devices, the devices are mounted in sequence.
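Similarly, after you complete Step 2, you can verify on a matching node that a QuotaPath volume is mounted with the expected options. The following commands are a sketch that assumes you can log on to the node; /mnt/path1 comes from the example above:
# Run on a node that matches the /mnt/path1 selector.
findmnt /mnt/path1      # Shows the source device and file system type.
mount | grep prjquota   # Confirms that the prjquota mount option is in effect.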
Step 2: Deploy node-resource-manager
Use the following YAML template to create a DaemonSet and deploy node-resource-manager in your cluster:
cat <<'EOF' | kubectl apply -f -
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: node-resource-manager
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-resource-manager
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "watch", "list", "delete", "update", "create"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-resource-manager-binding
subjects:
  - kind: ServiceAccount
    name: node-resource-manager
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: node-resource-manager
  apiGroup: rbac.authorization.k8s.io
---
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: node-resource-manager
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: node-resource-manager
  template:
    metadata:
      labels:
        app: node-resource-manager
    spec:
      tolerations:
        - operator: "Exists"
      priorityClassName: system-node-critical
      serviceAccountName: node-resource-manager
      hostNetwork: true
      hostPID: true
      containers:
        - name: node-resource-manager
          securityContext:
            privileged: true
            capabilities:
              add: ["SYS_ADMIN"]
            allowPrivilegeEscalation: true
          image: registry.cn-hangzhou.aliyuncs.com/acs/node-resource-manager:v1.18.8.0-983ce56-aliyun
          imagePullPolicy: "Always"
          args:
            - "--nodeid=$(KUBE_NODE_NAME)"
          env:
            - name: KUBE_NODE_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: spec.nodeName
          volumeMounts:
            - mountPath: /dev
              mountPropagation: "HostToContainer"
              name: host-dev
            - mountPath: /var/log/
              name: host-log
            - name: etc
              mountPath: /host/etc
            - name: config
              mountPath: /etc/unified-config
      volumes:
        - name: host-dev
          hostPath:
            path: /dev
        - name: host-log
          hostPath:
            path: /var/log/
        - name: etc
          hostPath:
            path: /etc
        - name: config
          configMap:
            name: node-resource-topo
EOF
After node-resource-manager is deployed, it automatically initializes local storage resources on nodes based on the configurations in the ConfigMap that you created. If you update the ConfigMap, node-resource-manager updates the initialized local storage resources within 1 minute after the update is completed.
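To confirm that the DaemonSet is running on every node, check the pod status and logs. These are standard kubectl commands; the pod name in the log command is a placeholder:
# One pod per node, all in the Running state.
kubectl get pods -n kube-system -l app=node-resource-manager -o wide
# Inspect the initialization logs of a specific pod.
kubectl logs -n kube-system <node-resource-manager-pod-name>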
To ensure data security, node-resource-manager does not delete local storage resources from your cluster; resources that are no longer defined in the ConfigMap must be reclaimed manually.
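If you no longer need a volume group that node-resource-manager created, you can reclaim it manually on the node. The following commands are a sketch only: volumegroup1 and the device paths come from the examples above, and the commands assume that no logical volumes in the volume group are still in use:
# Run on the node. These commands destroy the data in the volume group; double-check before you run them.
lvremove volumegroup1        # Delete all logical volumes in the volume group.
vgremove volumegroup1        # Delete the volume group itself.
pvremove /dev/vdb /dev/vdc   # Clear the LVM metadata from the devices.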