This topic provides answers to some frequently asked questions about nodes and node pools.
How do I manually update the kernel version of GPU-accelerated nodes in a cluster?
What do I do if no container is launched on a GPU-accelerated node?
How do I fix the "drain-node job execute timeout" error that occurs when I remove a node?
How do I change the hostname of a worker node in an ACK cluster?
How do I change the maximum number of pods supported by a node?
How do I manually update the kernel version of GPU-accelerated nodes in a cluster?
To manually update the kernel version of GPU-accelerated nodes in a cluster, perform the following steps:
Prerequisite: the current kernel version is earlier than 3.10.0-957.21.3.
Confirm the kernel version to which you want to update. Proceed with caution when you perform the update.
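The kernel prerequisite above can be checked with a short script. The following sketch compares the running kernel version against the target build by using version sort; it assumes GNU coreutils (`sort -V`) is available on the node, and the target value is the one stated in this topic.

```shell
# Sketch: check whether the running kernel is older than the target build.
# Assumes GNU coreutils (sort -V, version sort) is available.
is_older() {
    # Succeeds when $1 sorts strictly before $2 in version order.
    [ "$1" != "$2" ] && [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

target="3.10.0-957.21.3"
if is_older "$(uname -r)" "$target"; then
    echo "kernel is older than $target; an update is required"
else
    echo "kernel is already at or above $target"
fi
```

Run this on the GPU-accelerated node before you proceed; if the kernel is already at or above the target build, the update is unnecessary.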
The following procedure shows how to update the NVIDIA driver. Details about how to update the kernel version are not shown.
Obtain the kubeconfig file of a cluster and use kubectl to connect to the cluster. For more information, see Obtain the kubeconfig file of a cluster and use kubectl to connect to the cluster.
Set the GPU-accelerated node that you want to manage to the unschedulable state. In this example, the node cn-beijing.i-2ze19qyi8votgjz12345 is used.
kubectl cordon cn-beijing.i-2ze19qyi8votgjz12345
node/cn-beijing.i-2ze19qyi8votgjz12345 already cordoned
Migrate the pods on the GPU-accelerated node to other nodes.
kubectl drain cn-beijing.i-2ze19qyi8votgjz12345 --grace-period=120 --ignore-daemonsets=true
node/cn-beijing.i-2ze19qyi8votgjz12345 cordoned
WARNING: Ignoring DaemonSet-managed pods: flexvolume-9scb4, kube-flannel-ds-r2qmh, kube-proxy-worker-l62sf, logtail-ds-f9vbg
pod/nginx-ingress-controller-78d847fb96-5fkkw evicted
Uninstall the existing nvidia-driver.
Note: In this example, the uninstalled driver version is 384.111. If your driver version is not 384.111, download the installation package of your driver from the official NVIDIA website and update the driver to 384.111 first.
Log on to the GPU-accelerated node and run the nvidia-smi command to check the driver version.
sudo nvidia-smi -a | grep 'Driver Version'
Driver Version                      : 384.111
Download the driver installation package.
cd /tmp/
sudo curl -O https://cn.download.nvidia.cn/tesla/384.111/NVIDIA-Linux-x86_64-384.111.run
Note: The installation package is required for uninstalling the NVIDIA driver.
Uninstall the driver.
sudo chmod u+x NVIDIA-Linux-x86_64-384.111.run
sudo sh ./NVIDIA-Linux-x86_64-384.111.run --uninstall -a -s -q
Update the kernel.
Update the kernel version based on your business requirements.
Restart the GPU-accelerated node.
sudo reboot
Log on to the GPU-accelerated node and run the following command to install the kernel-devel package:
sudo yum install -y kernel-devel-$(uname -r)
Go to the official NVIDIA website to download the required driver and install it on the GPU-accelerated node. In this example, the driver version 410.79 is used.
cd /tmp/
sudo curl -O https://cn.download.nvidia.cn/tesla/410.79/NVIDIA-Linux-x86_64-410.79.run
sudo chmod u+x NVIDIA-Linux-x86_64-410.79.run
sudo sh ./NVIDIA-Linux-x86_64-410.79.run -a -s -q

# Warm up the GPU
sudo nvidia-smi -pm 1 || true
sudo nvidia-smi -acp 0 || true
sudo nvidia-smi --auto-boost-default=0 || true
sudo nvidia-smi --auto-boost-permission=0 || true
sudo nvidia-modprobe -u -c=0 -m || true
Make sure that the /etc/rc.d/rc.local file includes the following configurations. Otherwise, add the following configurations to the file.
sudo nvidia-smi -pm 1 || true
sudo nvidia-smi -acp 0 || true
sudo nvidia-smi --auto-boost-default=0 || true
sudo nvidia-smi --auto-boost-permission=0 || true
sudo nvidia-modprobe -u -c=0 -m || true
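Appending these lines can be scripted so that the step is idempotent and safe to re-run. The following sketch uses a stand-in file in the current directory for illustration; on a real node the path would be /etc/rc.d/rc.local and the writes would need sudo.

```shell
# Sketch: idempotently append the GPU warm-up commands to rc.local.
# A stand-in file is used here; on a real node this would be /etc/rc.d/rc.local.
RC_LOCAL=./rc.local.demo
touch "$RC_LOCAL"

# Only append if the commands are not already present.
if ! grep -q 'nvidia-smi -pm 1' "$RC_LOCAL"; then
    cat >>"$RC_LOCAL" <<'EOF'
nvidia-smi -pm 1 || true
nvidia-smi -acp 0 || true
nvidia-smi --auto-boost-default=0 || true
nvidia-smi --auto-boost-permission=0 || true
nvidia-modprobe -u -c=0 -m || true
EOF
fi

grep -c 'nvidia' "$RC_LOCAL"   # prints 5
```

Because of the grep guard, running the script repeatedly does not duplicate the entries. On a real node, also make sure rc.local is executable (chmod +x /etc/rc.d/rc.local) so the commands run at boot.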
Restart kubelet and Docker.
sudo service kubelet stop
sudo service docker restart
sudo service kubelet start
Set the GPU-accelerated node to schedulable.
kubectl uncordon cn-beijing.i-2ze19qyi8votgjz12345
node/cn-beijing.i-2ze19qyi8votgjz12345 already uncordoned
Run the following command in the nvidia-device-plugin container to check the version of the driver installed on the GPU-accelerated node.
kubectl exec -n kube-system -t nvidia-device-plugin-cn-beijing.i-2ze19qyi8votgjz12345 nvidia-smi
Thu Jan 17 00:33:27 2019
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 410.79       Driver Version: 410.79       CUDA Version: N/A      |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla P100-PCIE...  On   | 00000000:00:09.0 Off |                    0 |
| N/A   27C    P0    28W / 250W |      0MiB / 16280MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
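This check can be scripted when you manage many nodes. The following sketch parses the Driver Version line and compares it against the expected version; the sample line below is captured from this topic, not live output from a GPU.

```shell
# Sketch: scripted driver version check.
# The nvidia-smi line below is a captured sample, not live GPU output.
expected="410.79"
sample='Driver Version                      : 410.79'

# Strip everything up to and including the colon to isolate the version.
actual=$(printf '%s\n' "$sample" | sed 's/.*: *//')

if [ "$actual" = "$expected" ]; then
    echo "driver OK: $actual"
else
    echo "driver mismatch: got $actual, expected $expected"
fi
```

On a real node, replace the sample variable with live output, for example actual=$(nvidia-smi -a | grep 'Driver Version' | sed 's/.*: *//').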
Note: If no container is launched on the GPU-accelerated node after you run the docker ps command, see What do I do if no container is launched on a GPU-accelerated node?
What do I do if no container is launched on a GPU-accelerated node?
For specific Kubernetes versions, no container is launched on GPU-accelerated nodes after you restart kubelet and Docker on the nodes. Run the following commands to reproduce the issue:
sudo service kubelet stop
Redirecting to /bin/systemctl stop kubelet.service
sudo service docker stop
Redirecting to /bin/systemctl stop docker.service
sudo service docker start
Redirecting to /bin/systemctl start docker.service
sudo service kubelet start
Redirecting to /bin/systemctl start kubelet.service
sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Run the following command to check the cgroup driver:
sudo docker info | grep -i cgroup
Cgroup Driver: cgroupfs
The output shows that the cgroup driver is set to cgroupfs.
To resolve the issue, perform the following steps:
Create a backup copy of /etc/docker/daemon.json. Then, run the following command to update /etc/docker/daemon.json:
sudo tee /etc/docker/daemon.json <<-EOF
{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "/usr/bin/nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "exec-opts": ["native.cgroupdriver=systemd"],
    "log-driver": "json-file",
    "log-opts": {
        "max-size": "100m",
        "max-file": "10"
    },
    "oom-score-adjust": -1000,
    "storage-driver": "overlay2",
    "storage-opts": ["overlay2.override_kernel_check=true"],
    "live-restore": true
}
EOF
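Before restarting Docker, it is worth validating the file: malformed JSON in daemon.json prevents the daemon from starting. The following sketch validates a stand-in copy of the file so the check can be shown end to end; on a real node you would validate /etc/docker/daemon.json itself.

```shell
# Sketch: validate daemon.json before restarting Docker.
# A stand-in copy is used here instead of /etc/docker/daemon.json.
cat >./daemon.json.demo <<'EOF'
{
    "exec-opts": ["native.cgroupdriver=systemd"],
    "storage-driver": "overlay2"
}
EOF

# json.tool exits non-zero on malformed JSON, so the echo only runs on success.
python3 -m json.tool ./daemon.json.demo >/dev/null && echo "daemon.json is valid JSON"
```

On a real node: python3 -m json.tool /etc/docker/daemon.json. If the command reports a parse error, fix the file before restarting Docker.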
Run the following commands to restart the Docker runtime and kubelet:
sudo service kubelet stop
Redirecting to /bin/systemctl stop kubelet.service
sudo service docker restart
Redirecting to /bin/systemctl restart docker.service
sudo service kubelet start
Redirecting to /bin/systemctl start kubelet.service
Run the following command to check whether the cgroup driver is set to systemd.
sudo docker info | grep -i cgroup
Cgroup Driver: systemd
How do I change the hostname of a worker node in an ACK cluster?
After a Container Service for Kubernetes (ACK) cluster is created, you cannot directly change the hostnames of worker nodes. If you want to change the hostname of a worker node, modify the node naming rule of the relevant node pool, remove the worker node from the node pool, and then add the worker node to the node pool again.
When you create an ACK cluster, you can modify the hostnames of worker nodes in the Custom Node Name section. For more information, see Create an ACK managed cluster.
Remove the worker node.
Log on to the ACK console.
In the left-side navigation pane of the ACK console, click Clusters.
On the Clusters page, click the name of the cluster that you want to manage. In the left-side navigation pane of the details page, choose Nodes > Nodes.
On the Nodes page, find the worker node that you want to remove and choose More > Remove in the Actions column.
In the dialog box that appears, select I understand the above information and want to remove the node(s). and click OK.
Add the worker node to the node pool again. For more information, see Manually add ECS instances.
Then, the worker node is renamed based on the new node naming rule of the node pool.
How do I change the operating system for a node pool?
The method used to change the operating system for a node pool is similar to that used to update a node pool. To change the operating system for a node pool, perform the following steps:
Log on to the ACK console. In the left-side navigation pane, click Clusters.
On the Clusters page, click the name of the cluster that you want to manage and choose Nodes > Node Pools in the left-side navigation pane.
On the Node Pools page, find the node pool that you want to modify and choose More > Upgrade in the Actions column.
Select Change Operating System, select the image that is used to replace the original image, and then click Start Update.
Note: By default, Kubelet Update and Upgrade Node Pool by Replacing System Disk are selected when you change the operating system for a node pool. Select Create Snapshot before Update based on your business requirements.
What are the differences between node pools that are configured with the Expected Nodes parameter and those that are not configured with this parameter?
The Expected Nodes parameter specifies the number of nodes that you want to keep in a node pool. You can change the value of this parameter to adjust the number of nodes in the node pool. This feature is disabled for existing node pools that are not configured with the Expected Nodes parameter.
Node pools that are configured with the Expected Nodes parameter and those that are not configured with this parameter have different reactions to operations such as removing nodes and releasing ECS instances. The following table shows the details.
| Operation | Node pool configured with the Expected Nodes parameter | Node pool not configured with the Expected Nodes parameter | Suggestion |
| --- | --- | --- | --- |
| Remove specified nodes in the ACK console or by calling the ACK API. | The value of the Expected Nodes parameter automatically changes based on the number of nodes that you removed. For example, if the value is 10 before you remove three nodes, the value changes to 7 afterward. | The specified nodes are removed as expected. | To scale in a node pool, we recommend that you use this method. |
| Remove nodes by running the kubectl delete node command. | The value of the Expected Nodes parameter remains unchanged. | The nodes are not removed. | We recommend that you do not use this method to remove nodes. |
| Manually release ECS instances in the ECS console or by calling the ECS API. | New ECS instances are automatically added to the node pool to keep the expected number of nodes. | The node pool does not respond to the operation. No ECS instances are added to the node pool. The released nodes remain in the Unknown state before they are removed from the Nodes list on the node pool details page in the ACK console. | This operation may cause an inconsistency among the ACK console, the Auto Scaling console, and the actual condition. We recommend that you remove nodes in the ACK console or by calling the ACK API instead. For more information, see Remove a node. |
| The subscriptions of ECS instances expire. | New ECS instances are automatically added to the node pool to keep the expected number of nodes. | The node pool does not respond to the operation. No ECS instances are added to the node pool. After the subscriptions expire, the nodes remain in the Unknown state before they are removed from the Nodes list on the node pool details page in the ACK console. | This operation may cause an inconsistency among the ACK console, the Auto Scaling console, and the actual condition. We recommend that you remove nodes in the ACK console or by calling the ACK API instead. For more information, see Remove a node. |
| Manually enable the health check feature of Auto Scaling for ECS instances in a scaling group, and the ECS instances fail health checks because, for example, they are suspended. | New ECS instances are automatically added to the node pool to keep the expected number of nodes. | New ECS instances are automatically added to replace the suspended ECS instances. | We recommend that you do not perform operations on the scaling group of a node pool. |
| Remove ECS instances from the scaling group in the Auto Scaling console without changing the value of the Expected Nodes parameter. | New ECS instances are automatically added to the node pool to keep the expected number of nodes. | No ECS instances are added to the node pool. | We recommend that you do not perform operations on the scaling group of a node pool. |
How do I add existing nodes to a cluster?
If you want to add existing nodes to a cluster that contains no node pool, create a node pool with zero nodes in the cluster, and then add the existing ECS instances to the node pool. When you create the node pool, select the vSwitches that are used by the existing ECS instances and set the Expected Nodes parameter to 0. For more information about how to manually add existing ECS instances to a cluster, see Add existing ECS instances to an ACK cluster.
Each node pool corresponds to a scaling group. No fees are charged for node pools. However, you are charged for the cloud resources that are used by node pools, such as ECS instances.
How do I use preemptible instances in a node pool?
You can use preemptible instances when you create a node pool. You can also select suitable preemptible instances for a node pool by using the spot-instance-advisor command-line tool. For more information, see Best practices for preemptible instance-based node pools.
When you create a cluster, you cannot select preemptible instances for the node pool of the cluster.
How do I change the maximum number of pods supported by a node?
The maximum number of pods supported by a node is limited and varies based on the type of cluster. You can increase the upper limit for nodes in certain cluster types. For more information, see Quotas.
The network plug-in used by an ACK cluster also has limits on the maximum number of pods supported by a node. You can go to the Basic Information tab of the ACK cluster in the ACK console to check the network plug-in used by the cluster.
If your cluster uses Flannel, the maximum number of pods supported by a node cannot be changed after the cluster is created. If you require more pods, you can scale out node pools, or recreate the cluster and then reconfigure the pod CIDR block. For more information about scaling out node pools, see Scale a node pool. For more information about how to create an ACK cluster, see Create an ACK managed cluster.
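The Flannel limit comes from the per-node pod CIDR block: each node is assigned a fixed block, and the block size caps the pod IPs available on that node. The following sketch shows the arithmetic; the /25 prefix is an illustrative value, not a recommendation, and a few addresses in each block are reserved (for example, the network and gateway addresses), so usable pod IPs are slightly fewer.

```shell
# Sketch: with Flannel, the per-node pod CIDR caps the pod IPs on a node.
# A /25 block contains 2^(32-25) = 128 addresses; the illustrative prefix
# below is not a recommendation.
prefix=25
addresses=$(( 1 << (32 - prefix) ))
echo "a /$prefix per-node CIDR provides $addresses addresses"   # 128 addresses
```

This is why the limit cannot be raised after cluster creation with Flannel: the pod CIDR and its per-node block size are fixed when the cluster is created.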
If your cluster uses Terway, you can change the instance type to increase the maximum number of pods supported by a node. For more information, see Overview of instance configuration changes.
Note: After you change the instance type, you need to set the nodes to the Unschedulable state, drain and restart the nodes, and then initiate pod scheduling. For more information, see Set node schedulability.
For more information about the maximum number of elastic network interfaces (ENIs) supported by each ECS instance type and the maximum number of private IP addresses supported by an ENI, see Overview of instance families.
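For Terway in shared-ENI mode, the per-node pod capacity can be estimated from those two numbers. The following sketch shows the rule of thumb as a hedged estimate: one ENI is reserved for the node itself, and pods draw private IPs from the remaining ENIs. The figures below are illustrative, not taken from a specific instance type.

```shell
# Sketch of the Terway shared-ENI rule of thumb:
#   estimated max pods = (ENIs supported by the instance type - 1)
#                        * private IPs per ENI
# One ENI is reserved for the node itself. Illustrative figures only.
enis=4
ips_per_eni=10
max_pods=$(( (enis - 1) * ips_per_eni ))
echo "estimated max pods: $max_pods"   # estimated max pods: 30
```

To size a node for a target pod count, look up the ENI quota and per-ENI private IP quota of candidate instance types in the instance family documentation and apply this estimate.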