
Docker Container Resource Management: CPU, RAM and IO: Part 1


By Alwyn Botha, Alibaba Cloud Tech Share Author. Tech Share is Alibaba Cloud's incentive program to encourage the sharing of technical knowledge and best practices within the cloud community.

This tutorial aims to give you practical experience of using Docker container resource limitation functionalities on an Alibaba Cloud Elastic Compute Service (ECS) instance, including:

  • CPU quotas
  • RAM quotas
  • IO bandwidth quotas

Prerequisites

You need access to an ECS server with a recent version of Docker already installed. If you don't have one already, you can follow the steps in this tutorial.

These resource limit tests use 20 - 30 MB of RAM, so even a server with only 512 MB of total RAM will do.

The CPU tests are done on a server with only 2 cores. You will get more interesting results - for one of the tests - if your server has 4 cores or more.

Some of the CPU tests hog all CPUs for 15 seconds. It would be great for your teammates if you did this tutorial directly on your computer and not on the shared development server.

I am writing this tutorial using CentOS. You can use Debian / Ubuntu. 99% of this tutorial will work on any Linux distro since it mostly uses Docker commands.

You need a very basic understanding of Docker, images, containers and using docker run and docker ps -a.

Clean Up Preparation

It will really help if you have only a few ( preferably no ) containers running. That way you can easily find your tutorial container in docker ps -a output lists.

So stop and prune all the containers you do not need running.

You can quickly do that ( in your DEVELOPMENT environment ) using:

docker stop $(docker ps -a -q) #stop ALL containers

To now remove all containers, run

docker rm -f $(docker ps -a -q) # remove ALL containers

--memory-reservation

From https://docs.docker.com/config/containers/resource_constraints/

Allows you to specify a soft limit smaller than --memory which is activated when Docker detects contention or low memory on the host machine. If you use --memory-reservation, it must be set lower than --memory for it to take precedence. Because it is a soft limit, it does not guarantee that the container doesn't exceed the limit.

I am running this on a 1 GB RAM server.

Let's run 5 containers each reserving 250 MB of RAM.

docker container run -d --memory-reservation=250m --name mymem1 alpine:3.8 sleep 3600
docker container run -d --memory-reservation=250m --name mymem2 alpine:3.8 sleep 3602
docker container run -d --memory-reservation=250m --name mymem3 alpine:3.8 sleep 3603
docker container run -d --memory-reservation=250m --name mymem4 alpine:3.8 sleep 3604
docker container run -d --memory-reservation=250m --name mymem5 alpine:3.8 sleep 3605

All 5 containers are running even though I over-reserved RAM by 250 MB ( 5 x 250 MB = 1250 MB on a 1 GB server ). So this setting looks hopeless: reservations that do not reserve, and that do not prevent over-reservation.

If you run top, you will see no reserved RAM allocated; this setting is internal to Docker.

  PID USER        VIRT    RES    SHR S %MEM     TIME+ COMMAND
  933 root      967.4m  86.0m  24.3m S  8.7   0:55.87 dockerd
  940 root      582.0m  36.3m  12.3m S  3.7   0:46.50 docker-containe
13422 root        8.7m   3.3m   2.5m S  0.3   0:00.02 docker-containe
13309 root        7.3m   3.0m   2.3m S  0.3   0:00.02 docker-containe
13676 root        7.3m   2.9m   2.2m S  0.3   0:00.01 docker-containe
13540 root        7.3m   2.8m   2.1m S  0.3   0:00.01 docker-containe
13793 root        8.7m   2.7m   2.1m S  0.3           docker-containe
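
Although top shows nothing, Docker did record the setting. A quick way to confirm ( a minimal sketch; docker inspect reports HostConfig.MemoryReservation in bytes ):

docker inspect -f '{{.HostConfig.MemoryReservation}}' mymem1 # expected: 262144000 ( 250 MB )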

docker stats does not show RAM reservations.

CONTAINER ID        NAME                CPU %               MEM USAGE / LIMIT     MEM %               NET I/O             BLOCK I/O           PIDS
a1a4bd1c226b        mymem5              0.00%               1.086MiB / 985.2MiB   0.11%               578B / 0B           1.19MB / 0B         0
9ced89c63a7e        mymem4              0.00%               1.105MiB / 985.2MiB   0.11%               648B / 0B           1.19MB / 0B         0
696f1cef7d57        mymem3              0.00%               1.113MiB / 985.2MiB   0.11%               648B / 0B           1.19MB / 0B         0
77d61012b5fd        mymem2              0.00%               1.086MiB / 985.2MiB   0.11%               648B / 0B           1.19MB / 0B         0
fab3faa6d23d        mymem1              0.00%               1.043MiB / 985.2MiB   0.11%               648B / 0B           1.19MB / 0B         0
docker ps -a

Shows all 5 containers running successfully.

CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
a1a4bd1c226b        alpine:3.8          "sleep 3605"        2 minutes ago       Up 2 minutes                            mymem5
9ced89c63a7e        alpine:3.8          "sleep 3604"        4 minutes ago       Up 4 minutes                            mymem4
696f1cef7d57        alpine:3.8          "sleep 3603"        5 minutes ago       Up 5 minutes                            mymem3
77d61012b5fd        alpine:3.8          "sleep 3602"        6 minutes ago       Up 6 minutes                            mymem2
fab3faa6d23d        alpine:3.8          "sleep 3600"        8 minutes ago       Up 8 minutes                            mymem1

We are finished with these containers. We can stop and then prune them.

docker container stop mymem1 -t 0
docker container stop mymem2 -t 0
docker container stop mymem3 -t 0
docker container stop mymem4 -t 0
docker container stop mymem5 -t 0
docker container prune -f 

--memory and --memory-swap (No Swapping Allowed)

From https://docs.docker.com/config/containers/resource_constraints/

  • -m or --memory= The maximum amount of memory the container can use. If you set this option, the minimum allowed value is 4m (4 megabyte).
  • --memory-swap The amount of memory this container is allowed to swap to disk.
  • If --memory-swap is set to the same value as --memory, and --memory is set to a positive integer, the container does not have access to swap

We are now testing no swapping allowed.

We need a tool to carefully allocate RAM on a MB-by-MB basis - so that we can carefully just overstep our defined RAM limits. I decided on Python. ( You do not need to know Python to understand its 4 lines of code used here. )

In the second part of this tutorial we will use actual benchmark tools.

Download Python Docker image if you do not already have it:

docker pull python:3-alpine 

Run our container, limiting RAM: --memory=20m --memory-swap=20m

docker container run -d --memory=20m --memory-swap=20m --name myPython python:3-alpine sleep 3600
docker exec -it myPython /bin/sh        
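
Before entering python3, you can optionally confirm from another host shell that both limits were applied ( a minimal check; docker inspect reports HostConfig values in bytes ):

docker inspect -f '{{.HostConfig.Memory}} {{.HostConfig.MemorySwap}}' myPython # expected: 20971520 20971520 ( both 20 MB )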

At the shell prompt, enter python3 to start the interactive Python interpreter. Copy and paste the code below. In Python, whitespace has syntactic meaning, so be careful not to add any extra spaces or tabs to the code.

longstring = []
for x in range(17):
    len(longstring)
    longstring.append('1' * 10**6)

Press ENTER to exit the for statement block. This will run the code.

Expected output :

>>> for x in range(17):
...     len(longstring)
...     longstring.append('1' * 10**6)
...
0
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
Killed

We allocated 20 MB RAM to this container. Python itself uses about 5 MB, so the loop gets killed when the appended '1' strings ( roughly 16 successful 1 MB appends ) push total usage past the 20 MB limit.

Three things of note:

  • RAM allocations within limit of 20 MB worked
  • RAM allocation that exceeded limit got killed
  • No swap used: allocations did not quietly continue to work by using swap

Summary: --memory and --memory-swap ( No swapping allowed ) works when both are set to the same value. Based on your knowledge of the applications running in your containers you should set those values appropriately.
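
You can also read the limit from inside the running container ( a sketch assuming cgroup v1; the file lives elsewhere under cgroup v2 ):

docker exec myPython cat /sys/fs/cgroup/memory/memory.limit_in_bytes # expected: 20971520 ( 20 MB )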

We are finished with this container. You can stop and prune it.

docker container stop myPython
docker container prune -f 

--memory and --memory-swap (Swapping Allowed)

By specifying --memory=20m and --memory-swap=30m we allow 10 MB of swap.

Let's see how that works:

docker container run -d --memory=20m --memory-swap=30m --name myPython python:3-alpine sleep 3600
    
docker exec -it myPython /bin/sh             

At the shell prompt, enter python3 to start the interactive Python interpreter. Copy and paste the code below. In Python, whitespace has syntactic meaning, so be careful not to add any extra spaces or tabs to the code.

longstring = []
for x in range(24):
    len(longstring)
    longstring.append('1' * 10**6)

Press ENTER to exit the for statement block. This will run the code.

Expected output :

0 through 23 printed ... nothing killed

Python itself uses about 5 MB; the loop above allocated another 24 MB with no errors.

We specified : --memory=20m --memory-swap=30m

That is roughly 30 MB in total, meaning about 10 MB had to be swapped. Let's confirm by running top in another shell.

top - 13:20:38 up  4:41,  2 users,  load average: 0.11, 0.05, 0.06
Tasks: 119 total,   1 running, 118 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.2 us,  0.3 sy,  0.0 ni, 99.5 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
MiB Mem :  985.219 total,  466.879 free,  190.812 used,  327.527 buff/cache
MiB Swap: 1499.996 total, 1490.078 free,    9.918 used.  618.730 avail Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND             SWAP
  933 root      20   0  967.4m  91.5m  24.3m S        9.3   0:45.46 dockerd
  940 root      20   0  579.9m  33.1m  12.3m S   0.3  3.4   0:36.73 docker-containe
11900 root      20   0  253.5m  19.1m  10.5m S        1.9   0:00.25 docker
11941 root      20   0   39.1m  17.4m        S        1.8   0:00.39 python3             9.5m

As expected: about 10 MB of swap used. ( You will have to enable the SWAP column in top's field management, opened by pressing f. )
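
If your top does not show a SWAP column at all, the same numbers can be read from the cgroup counters ( a sketch assuming cgroup v1 with swap accounting enabled ):

docker exec myPython cat /sys/fs/cgroup/memory/memory.memsw.usage_in_bytes # RAM + swap
docker exec myPython cat /sys/fs/cgroup/memory/memory.usage_in_bytes # RAM only; the difference is swap used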

Let's carefully try to use 2 MB more RAM - container should run out of RAM.

Copy and paste this into the python3 interpreter. Press ENTER to run it.

longstring = []
for x in range(26):
    len(longstring)
    longstring.append('1' * 10**6)

Expected output :

it gets killed

We are finished with this container. You can stop and prune it.

docker container stop myPython
docker container prune -f 

Summary: --memory and --memory-swap ( swapping allowed ) works when --memory-swap is larger than --memory.

Limits enforced perfectly.

You need to specify appropriate limits for your containers in your production environment.

Investigate current prod system RAM usage. Define limits according to those, adding a large margin for error, but still preventing runaway containers from crashing the prod server.
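
For illustration only ( myapp and the numbers here are hypothetical; derive yours from measured usage ):

docker container run -d --memory=512m --memory-swap=512m --memory-reservation=400m --name myapp myapp:latest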

--oom-kill-disable

So far the out-of-memory ( OOM ) killer, which is enabled by default, killed our runaway Python program.

Let's see what happens if we disable it.

Note the --oom-kill-disable below:

docker container run -d --oom-kill-disable --memory=20m --memory-swap=30m --name myPython python:3-alpine sleep 3600

Enter our unsuspecting container:

docker exec -it myPython /bin/sh   

Enter the python3 interpreter, paste the code below, press ENTER to run it.

python3    
a = []
for x in range(26):
    len(a)
    a.append('1' * 10**6)

The container hangs.

Run top in another shell console:

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND             SWAP
12317 root      20   0   41.0m  17.6m   0.0m D        1.8   0:00.32 python3            10.7m

Our container is in state D : uninterruptible sleep

In another shell:

docker exec -it myPython /bin/sh  

It hangs too.

Let's use another shell to get our hanging container's PID so that we can kill it:

docker inspect myPython

Get the PID.

Use kill -9 <your-PID> to kill it ( or kill it from within top ).
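
A one-line shortcut ( a sketch; .State.Pid is the PID of the container's main process on the host, and you may need sudo ):

kill -9 $(docker inspect -f '{{.State.Pid}}' myPython)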

Conclusion:

Do not use --oom-kill-disable

Your hung shells now have a Linux prompt back. You can exit them.

--cpu-shares

From https://docs.docker.com/config/containers/resource_constraints/#cpu

--cpu-shares: Set this flag to a value greater or less than the default of 1024 to increase or reduce the container's weight, and give it access to a greater or lesser proportion of the host machine's CPU cycles.
This is only enforced when CPU cycles are constrained. When plenty of CPU cycles are available, all containers use as much CPU as they need. In that way, this is a soft limit. --cpu-shares does not prevent containers from being scheduled in swarm mode.
It prioritizes container CPU resources for the available CPU cycles. It does not guarantee or reserve any specific CPU access.
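
You can confirm the weight a container received ( a sketch assuming cgroup v1, where the container sees its own cpu.shares file; cgroup v2 uses cpu.weight instead ):

docker container run --rm --cpu-shares=512 alpine:3.8 cat /sys/fs/cgroup/cpu/cpu.shares # expected: 512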

The plan: run 3 containers, giving them 100, 500 and 1024 CPU shares.

The following is a terrible test. Carefully read the descriptions above, then read the next 3 commands and see if you can determine why they will not clearly show CPU being allocated in those proportions.

Please note these CPU tests assume you are running this on your own computer and not on a shared development server. 3 tests hog 100% CPU for 20 seconds.

Later in this tutorial series we will redo these tests using actual Linux benchmark tools in our own bench container. We will specifically focus on running these CPU hogs for very short runtimes while still getting accurate results.

However, please read and follow these CPU tests so you can get a feel for how imprecise and slow this quick testing is.

Note that dd, urandom and md5sum are not benchmark tools either; the problem is not dd or its timing.

Our CPU stress application: time dd if=/dev/urandom bs=1M count=100 | md5sum

Benchmark explanation:

  • time ... measures elapsed time: shows those 3 timer lines
  • dd if=/dev/urandom bs=1M count=100 ... copies 100 blocks ( bs = block size ) of 1 MB of random data
  • md5sum ... calculates md5 hashes ( giving the CPU a load )
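
If you want an uncontended baseline first, time the same pipeline in a single throwaway container:

docker container run --rm alpine:3.8 /bin/sh -c 'time dd if=/dev/urandom bs=1M count=100 | md5sum'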

Let's run it and investigate the results:

docker container run -d --cpu-shares=1024 --name mycpu1024 alpine:3.8 /bin/sh -c 'time dd if=/dev/urandom bs=1M count=100 | md5sum'
docker container run -d --cpu-shares=500 --name mycpu500 alpine:3.8 /bin/sh -c 'time dd if=/dev/urandom bs=1M count=100 | md5sum'
docker container run -d --cpu-shares=100 --name mycpu100 alpine:3.8 /bin/sh -c 'time dd if=/dev/urandom bs=1M count=100 | md5sum'

Let's investigate the logs to determine runtimes:

docker logs mycpu1024
docker logs mycpu500
docker logs mycpu100

Expected output :

docker logs mycpu1024
real    0m 15.29s
user    0m 0.00s
sys     0m 14.51s

docker logs mycpu500
real    0m 18.65s
user    0m 0.00s
sys     0m 15.28s

docker logs mycpu100
real    0m 23.28s
user    0m 0.00s
sys     0m 13.09s

Note all containers used about the same sys cpu time - understandable since they all did the exact same work.

--cpu-shares=100 clearly takes longer, but --cpu-shares=500 is only slightly slower than --cpu-shares=1024.

The problem is that --cpu-shares=1024 runs very fast, then exits.

Then --cpu-shares=500 and --cpu-shares=100 have full access to the CPU.

Then --cpu-shares=500 finishes quickly, since it has the most CPU shares of the two.

Then --cpu-shares=100 finishes quickly, since NOTHING else is running - it has the whole CPU to itself.

Consider this problem and how you could solve it.

Figure it out before reading further.

You are welcome to test your solution.

My solution:

All 3 of these containers must run in parallel the whole time: CPU shares only matter while the CPU is under contention.

mycpu1024 - count must be set 10 times that of mycpu100
mycpu500 - count must be set 5 times that of mycpu100

This way all 3 containers should finish in roughly the same elapsed time - each received a workload proportional to its CPU shares.

Then divide the mycpu1024 runtime by 10 - it got 10 times the workload of mycpu100.
Then divide the mycpu500 runtime by 5 - it got 5 times the workload of mycpu100.

It should then be obvious whether Docker divided the CPU shares appropriately.
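
A sketch of those scaled workloads ( counts of 1000 and 500 are 10x and 5x the mycpu100 count ):

docker container run -d --cpu-shares=1024 --name mycpu1024 alpine:3.8 /bin/sh -c 'time dd if=/dev/urandom bs=1M count=1000 | md5sum'
docker container run -d --cpu-shares=500 --name mycpu500 alpine:3.8 /bin/sh -c 'time dd if=/dev/urandom bs=1M count=500 | md5sum'
docker container run -d --cpu-shares=100 --name mycpu100 alpine:3.8 /bin/sh -c 'time dd if=/dev/urandom bs=1M count=100 | md5sum'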

Busy Docker administrator shortcut / quick method:

Run all the containers above again.

Have the following ready to run as well.

--cpu-shares=250 and --cpu-shares=200 containers

Then, in another shell, run docker stats and press Ctrl-C to freeze the display.

It should be obvious the CPU-shares got allocated correctly.
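
Alternatively, docker stats --no-stream prints a single snapshot and exits - no freezing needed:

docker stats --no-stream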

Clean up containers:

docker container prune -f 

--cpu-shares Identically Allocated

--cpu-shares: Set this flag to increase or reduce the container's weight, and give it access to a greater or lesser proportion of the host machine's CPU cycles.

This means that equal --cpu-shares settings should result in equal CPU allocations.

Let's have 3 containers running, all with CPU-shares = 1024.

docker container run -d --cpu-shares=1024 --name mycpu1024a alpine:3.8 /bin/sh -c 'time dd if=/dev/urandom bs=1M count=100 | md5sum'
docker container run -d --cpu-shares=1024 --name mycpu1024b alpine:3.8 /bin/sh -c 'time dd if=/dev/urandom bs=1M count=100 | md5sum'
docker container run -d --cpu-shares=1024 --name mycpu1024c alpine:3.8 /bin/sh -c 'time dd if=/dev/urandom bs=1M count=100 | md5sum'

Run docker stats in another shell:

CONTAINER ID        NAME                CPU %               MEM USAGE / LIMIT     MEM %               NET I/O             BLOCK I/O           PIDS
c4625617f339        mycpu1024c          63.79%              1.262MiB / 985.2MiB   0.13%               648B / 0B           1.33MB / 0B         0
44362316e33a        mycpu1024b          68.44%              1.254MiB / 985.2MiB   0.13%               648B / 0B           1.33MB / 0B         0
a704aca5c0d7        mycpu1024a          66.27%              1.254MiB / 985.2MiB   0.13%               648B / 0B           1.35MB / 0B         0

As expected, all 3 containers get about the same percentage of CPU time.

docker logs mycpu1024a
docker logs mycpu1024b
docker logs mycpu1024c

Just to confirm that they all ran for about the same elapsed time:

docker logs mycpu1024a
real    0m 21.25s
user    0m 0.00s
sys     0m 14.72s

docker logs mycpu1024b
real    0m 22.53s
user    0m 0.00s
sys     0m 15.21s

docker logs mycpu1024c
real    0m 21.45s
user    0m 0.00s
sys     0m 15.09s

Prune containers, we are done with them.

docker container prune -f 