Docker container resource management
Date: Oct 25, 2022
Abstract: Containers are one of Docker's core features and a topic no Docker user can avoid. If you want to understand the core principles of containers, or even write your own container runtime, a deep understanding of container resource management is indispensable.
Resource Types
When we start a container, it can use certain system resources, much as a program launched directly on a physical machine would. The main categories are CPU, memory, network I/O, and block (disk) I/O.
These resources all need to be considered and managed when we start a container. For example, running docker run --help shows every parameter the docker run command supports; there are currently more than 90 of them, so I won't list them all here.
View the resources a container occupies
docker stats
Docker provides a very convenient command, docker stats, which lets us view the resources containers are occupying.
Let's start a Redis container as the example:
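A minimal sketch (the image tag and container name here are illustrative):

$ docker run -d --name redis redis:alpine
$ docker stats --no-stream redis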
The --no-stream parameter is passed here because docker stats defaults to a continuously refreshing stream of output (updated about once per second). With --no-stream, it prints a single snapshot and exits.
Next, let me walk through what each column of the output means:
Container ID: the ID of the container, which does not change over the container's life cycle.
Name: the name of the container. If it is not set manually with the --name parameter, Docker generates a random one; it can also be changed while the container is running (for example with docker rename).
CPU %: the percentage of CPU resources the container is using. There are quite a few details behind this number, described later in this article.
Mem Usage / Limit: the container's current memory usage and the maximum memory available to it; I tested here on a machine with 16 GB of memory.
Mem %: the percentage of memory resources the container is using.
Net I/O: the amount of data the container has sent and received over its network interface.
Block I/O: the amount of data the container has read from and written to block devices.
Pids: the number of processes or threads the container has created.
docker top
In addition to docker stats, Docker provides another, simpler command: docker top. It works much like the ps command we use every day and also accepts ps options.
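For example, with the Redis container from above (the ps-style options are just an illustration):

$ docker top redis
$ docker top redis -o pid,stat,cmd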
Manage CPU resources for containers
When we use containers, CPU and memory are the resources we care about most. Managing CPU resources, however, involves relatively low-level territory, some of it touching the kernel's CPU scheduler, CFS (the Completely Fair Scheduler).
Let's first see which parameters Docker provides for controlling CPU resources, using docker run --help | grep CPU.
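On a recent Docker release, the output looks roughly like this:

$ docker run --help | grep CPU
      --cpu-period int                 Limit CPU CFS (Completely Fair Scheduler) period
      --cpu-quota int                  Limit CPU CFS (Completely Fair Scheduler) quota
      --cpu-rt-period int              Limit CPU real-time period in microseconds
      --cpu-rt-runtime int             Limit CPU real-time runtime in microseconds
  -c, --cpu-shares int                 CPU shares (relative weight)
      --cpus decimal                   Number of CPUs
      --cpuset-cpus string             CPUs in which to allow execution (0-3, 0,1)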
I won't expand on the exact meaning of each parameter just yet; instead, a few examples will make them easier to understand.
Unlimited by default
Note: the demonstrations below were run on a machine with a 4-core CPU.
Now let's start a container, using the small Alpine Linux image as the example.
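A minimal sketch (the container name test is an assumption used in the rest of these examples):

$ docker run --rm -it --name test alpine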
In another window, execute the command described above to view the container's resource usage:
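$ docker stats --no-stream test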
As you can see, the container is consuming almost no CPU, and PIDS is 1, indicating there is currently only one process.
Now let's go back to the container we just started and execute the following command:
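sha256sum /dev/zero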
sha256sum is a command-line tool for computing and checking SHA-256 checksums;
/dev/zero is a special device on Linux systems that provides an endless stream of zero bytes (0x00) when read from.
So the command above makes sha256sum continuously read the zero bytes produced by /dev/zero and hash them, which quickly consumes CPU resources.
Let's take a look at the resource usage of the container at this time:
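$ docker stats --no-stream test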
You can see that the current CPU usage is already around 100%.
Let's open a new window, enter the container, and execute the same command:
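$ docker exec -it test sh
/ # sha256sum /dev/zero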
Check the resource usage of the container again with docker stats.
You can see that the two processes are now each running a CPU core at full load. Note that sha256sum was chosen as the example precisely because it is single-threaded: each instance can saturate at most one CPU core.
The conclusion: if the programs inside a container have no CPU limits, they can consume large amounts of CPU, affecting other programs or even the stability of the whole system.
Allocate 0.5 CPU
Next, let's limit this container's CPU resources, for example to only 0.5 CPU.
We could start a new container and add the resource limit on the docker run command line.
But here let me show a way to change resource limits dynamically: the docker update command. In this example, we limit the container to 0.5 CPU with the following command:
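$ docker update --cpus 0.5 test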
For convenience, terminate the sha256sum processes started earlier by pressing Ctrl+C, then rerun the command in one window:
Check the resource usage again with docker stats --no-stream.
As you can see, the process is now using around 50% of the CPU. Next, start another sha256sum process in the second window; the combined usage remains capped at about 50%, shared between the two processes.
Allocate 1.5 CPUs
Next, repeat the steps above, but assign the container 1.5 CPUs to see what happens.
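Again using docker update (same assumed container name):

$ docker update --cpus 1.5 test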
Using the previous two windows respectively, execute sha256sum /dev/zero to test:
View the resource usage again.
The results basically match our expectations: about 150% CPU in total, with the two test processes sharing the CPU resources almost equally.
Specify available CPU cores
You can use --cpuset-cpus to restrict which CPU cores the container may run on. Here I specify 0, meaning only the first CPU core may be used.
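For example, keeping the 1.5 CPU limit while pinning the container to core 0 (a sketch with the same assumed container):

$ docker update --cpus 1.5 --cpuset-cpus 0 test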
Using the previous two windows respectively, execute sha256sum /dev/zero to test:
Check the resources again.
Although --cpus still specifies 1.5 CPUs, the --cpuset-cpus limit allows execution only on the first core, so the two test processes can only split that single CPU between them, around 50% each.

Manage container memory resources
We have described how to manage the container's CPU resources. Next, let's take a look at how to manage the container's memory resources. Compared with CPU resources, the management of memory resources is much simpler.
Similarly, let's first see what parameters are available for us to configure, and I will introduce their meaning later:
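On a recent Docker release, filtering the help output for memory- and OOM-related flags gives roughly:

$ docker run --help | grep -E 'memory|oom'
      --kernel-memory bytes            Kernel memory limit
  -m, --memory bytes                   Memory limit
      --memory-reservation bytes       Memory soft limit
      --memory-swap bytes              Swap limit equal to memory plus swap: '-1' to enable unlimited swap
      --memory-swappiness int          Tune container memory swappiness (0 to 100) (default -1)
      --oom-kill-disable               Disable OOM Killer
      --oom-score-adj int              Tune host's OOM preferences (-1000 to 1000)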
OOM
Before starting the content of container memory management, we might as well talk about a very common problem that we have to face: OOM (Out Of Memory).
When the kernel detects that there is not enough memory left for the system to function properly, it triggers an OOM condition and invokes the OOM Killer to kill some processes, freeing memory to keep the system running.
Here is a brief introduction to the general execution process of OOM killer, so that everyone can understand the subsequent content.
The OOM Killer's code can be read directly in the kernel source at mm/oom_kill.c (torvalds/linux); Linux kernel 5.2 is used as the example here.
Quoting one of the comments in that file:
If we run out of memory, we have the choice between either killing a random task (bad), letting the system crash (worse) OR try to be smart about which process to kill. Note that we don't have to be perfect here, we just have to be good.
In other words: when out of memory, killing an arbitrary task at random is bad, letting the system crash is worse, so the kernel tries to be smart about which process to kill, aiming to be good rather than perfect. And indeed, randomly killed processes or a crashed system are not what we want.
Back in the kernel code: when system memory is insufficient, out_of_memory() is triggered, which calls select_bad_process() to pick a "bad" process to kill.
So what makes a process "bad"? There have to be criteria. select_bad_process() is a simple loop that calls oom_evaluate_task() to evaluate each process; the core scoring logic is in oom_badness().
To make the selection as fast as possible, the logic is kept as simple as possible: apart from skipping processes explicitly marked as unkillable, it essentially picks the process occupying the most memory (with oom_score_adj available to adjust the weighting).
Choosing this way has two main benefits:
a large amount of memory can be reclaimed at once;
it avoids the killed process immediately grabbing memory again and triggering another OOM soon after.
Let's turn our attention back to Docker itself. In production we usually use Docker to run many containers backing our services; if the Docker daemon were killed during an OOM event, it would have a large impact on those services.
So the Docker daemon sets its own oom_score_adj to -500 by default at startup, to avoid being killed by the OOM Killer as far as possible.
If we want a particular container to be spared by the OOM Killer as far as possible, we can pass --oom-score-adj with a lower value when starting it.
But be careful: do not disable the OOM Killer with --oom-kill-disable, and do not give a container an oom_score_adj lower than the dockerd process's; either can cause system instability in some situations. Do this only if you fully understand the consequences.
Limit container memory resources
Now that you have seen OOM and understand the harm memory exhaustion can cause, let's continue with how to manage a container's memory resources.
The available configuration parameters were listed above; in most cases we simply use the --memory parameter to limit the memory available to the container. Again, a few examples:
Start a container and pass --memory 10m to limit the memory it can use to 10 MB.
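Reusing the assumed container name test:

$ docker run --rm -it --name test --memory 10m alpine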
So how do we verify the memory available to it? On a physical machine we would normally use the free tool. But does that still work inside a container?
As it turns out, free reports the host's memory, not the container's. But we have already introduced docker stats, so let's use it to check the current resource usage:
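$ docker stats --no-stream test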
You can see that the information in the MEM USAGE / LIMIT column has taken effect, as we expected.
So is there any other way to see this information? Of course there is:
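Inside the container, the limit can be read from the cgroup filesystem (cgroup v1 layout, matching the kernel used here; paths differ under cgroup v2):

/ # cat /sys/fs/cgroup/memory/memory.limit_in_bytes
10485760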
Or you can execute the following command on the host machine:
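(This assumes the default cgroupfs driver; replace <container-id> with the full ID of the container.)

$ cat /sys/fs/cgroup/memory/docker/<container-id>/memory.limit_in_bytes
10485760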
Note: The above commands have been tested under the Linux 5.2 kernel, and the directory structure is slightly different between different versions.
Update container memory resource limits
Suppose that after the container has run for a while, the memory used by its processes grows and we want to allow the container more memory. How do we do that?
We can still do it with the docker update command described earlier.
For example, use the following command to expand the available memory to 20m:
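$ docker update --memory 20m test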
If that's not enough, what about expanding to 100m?
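The attempt fails with an error roughly like this:

$ docker update --memory 100m test
Error response from daemon: Cannot update container test: Memory limit should be smaller than already set memoryswap limit, update the memoryswap at the same time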
This time you get an error. The gist: the memory limit must be smaller than the configured memoryswap limit, so memoryswap needs to be updated at the same time.
You may be confused: we limited memory to 10m, and expanding to 20m succeeded, so why does 100m fail?
This comes down to the specific behavior of these parameters, which I'll explain now.
Specific behavior of memory limit parameters
The specific parameter behavior here mainly refers to the --memory and --memory-swap parameters we used earlier.
1. --memory is used to limit memory usage, while --memory-swap means the sum of memory and swap.
This explains the error above ("Memory limit should be smaller than already set memoryswap limit"): --memory-swap must always be greater than or equal to --memory, since the smallest possible swap allowance is 0.
2. If only --memory is specified, --memory-swap is set to twice --memory. In other words, when only --memory is passed, the container can use swap equal in size to --memory.
This also explains why expanding to 20m succeeded while expanding to 100m failed: in our scenario, only --memory was specified (10m), so --memory-swap defaulted to 20m. To go beyond that, both values must be raised together, as shown in the example after this list.
3. If --memory-swap and --memory are set to the same value, it means that Swap is not used.
4. If --memory-swap is set to -1, it means that there is no limit to the Swap used by the container.
5. If the --memory-swap parameter is set, the --memory parameter must be set.
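So, to make the earlier 100m update succeed, raise both limits together, for example:

$ docker update --memory 100m --memory-swap 200m test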
So far I have covered the core of container resource management: the container's CPU resources and memory resources. Reasonable resource limits improve the stability of the environment as a whole, avoiding resource contention, or one workload occupying so much memory that OOM kills processes.
When managing CPU, --cpus is recommended because its semantics are clearer. If you are familiar with the Linux CFS scheduler and have strong customization requirements, --cpu-period and --cpu-quota are more appropriate.
When managing memory, the --memory-swappiness parameter deserves attention. It takes a value from 0 to 100 and behaves essentially like the familiar kernel swappiness setting: 0 means anonymous pages are not swapped, 100 means they may be swapped freely. If not specified, the value is inherited from the host.
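For example, to start a container whose anonymous pages are never swapped out (an illustrative sketch; note that this flag is not supported on cgroup v2 hosts):

$ docker run -d --name redis-noswap --memory 100m --memory-swappiness 0 redis:alpine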
Summary
Through the examples above, I showed how to limit the CPU resources available to a container with --cpus, pin the container's processes to specific CPU cores with --cpuset-cpus, and update the configuration of a running container directly with docker update.
Now let's return to the CPU-related options of docker run (the docker run --help | grep CPU listing shown earlier) and go through them.
--cpus was added in Docker 1.13 and can replace the older --cpu-period and --cpu-quota. Through cgroups, all three parameters ultimately influence how the Linux kernel's CPU scheduler, CFS (Completely Fair Scheduler), schedules the container's processes.
In general, it is recommended to use --cpus directly rather than setting --cpu-period and --cpu-quota separately, unless you already know the CFS scheduler well; the --cpus parameter was added by the Docker team precisely to simplify usage, and it is sufficient for most needs.
The --cpu-shares option, although it has some practical uses, is less intuitive than --cpus, and its effect depends on the current load of the system, so to avoid confusion it will not be covered further here.
The --cpu-rt-period and --cpu-rt-runtime parameters affect the CPU's real-time scheduler. However, the real-time scheduler requires kernel support, and configuring it is an advanced, even dangerous, operation that can cause all kinds of strange problems, so it will not be expanded on here.