Docker is an open source engine that easily creates lightweight, portable, and self-sufficient containers for any application. This article describes how to apply security hardening measures to the Docker service to build a secure and reliable integrated container environment.
Before deploying Docker, perform security hardening on the server operating system, for example, by updating all software patches, configuring strong passwords, and disabling unnecessary service ports.
Enable Mandatory Access Control (MAC) to set access control for resources in Docker based on an analysis of your business scenarios.
Run the following command to enable AppArmor:
docker run --interactive --tty --security-opt="apparmor:PROFILENAME" centos
Run the following command to enable SELinux:
dockerd --selinux-enabled
Based on the actual scenario, identify the ports that must be reachable from the Internet (such as the management interface, the Docker API port 2375, and other important ports) and the network addresses, ports, and protocols that must interact with the Internet.
Use iptables or ECS security group policies to set strict network access control.
In Docker, some operations must be performed as the root user. For security reasons, you must separate such operations from those that require only the common user permissions.
For example, in the Dockerfile you can use the following command to create a user with common permissions (noroot, UID 1000) and run the specified program as that user:
RUN useradd noroot -u 1000 -s /bin/bash --no-create-home
USER noroot
For more information about Docker commands, see the official Docker documentation.
By default, a Docker container has no privileges and cannot access any device. When the --privileged option is enabled, however, the container can operate all devices under /dev/ on the host. If the container does not need access to all devices on the host, use --device to add only the devices to be operated.
Control the CPU share
Docker provides the --cpu-shares parameter to specify the CPU share used when a container is created.
Example: After running docker run -tid --cpu-shares 100 ubuntu:stress to create a container, you can find the CPU share configuration of the generated cgroup in the following file:
root@ubuntu:~# cat /sys/fs/cgroup/cpu/docker/<Complete ID of the container>/cpu.shares
The value of cpu-shares is only a relative weight and does not guarantee that one vCPU or a specific amount of CPU resources in GHz will be obtained.
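As a minimal sketch of this weighting behavior (the container names and share values below are illustrative, not from the article): under full contention, each container's CPU time is proportional to its cpu-shares value relative to the total.

```python
# Sketch: how relative --cpu-shares weights divide CPU time under contention.

def cpu_proportions(shares):
    """Return each container's fraction of CPU time when all compete fully."""
    total = sum(shares.values())
    return {name: value / total for name, value in shares.items()}

# Two containers started with --cpu-shares 100 and --cpu-shares 200:
# under full contention the second gets twice the CPU time of the first.
print(cpu_proportions({"web": 100, "batch": 200}))
```

When the host is idle, a container with a small share can still use all available CPU; the weight matters only while cores are contended.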
Docker provides the --cpu-period and --cpu-quota parameters to control the CPU clock cycles allocated to a container. --cpu-period specifies the interval at which the container's CPU resources are reallocated, and --cpu-quota specifies the maximum time the container can run within that cycle. Different from cpu-shares, cpu-quota specifies an absolute value and is inelastic: the CPU resources used by the container cannot exceed the configured value.
The values of cpu-period and cpu-quota are in microseconds (μs). The minimum and maximum values of cpu-period are 1,000 μs and 1 s (10^6 μs), respectively, and the default value is 0.1 s (100,000 μs). The default value of cpu-quota is -1, indicating that no quota is applied.
For example, if a container process needs to use 0.2 s of one CPU every 1 s, set cpu-period to 1000000 (1 s) and cpu-quota to 200000 (0.2 s). On a multi-core CPU, if a container process needs to fully occupy two CPUs, set cpu-period to 100000 (0.1 s) and cpu-quota to 200000 (0.2 s).
Run docker run -tid --cpu-period 100000 --cpu-quota 200000 ubuntu to create such a container.
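The arithmetic behind these examples can be sketched as follows; the helper name is ours, not part of Docker. The quota is simply the desired CPU count (or fraction) multiplied by the period:

```python
# Sketch: converting a desired CPU allocation into --cpu-quota values
# (both period and quota are in microseconds).

def cpu_quota(cpus, period_us=100000):
    """Return the --cpu-quota value that caps a container at `cpus` CPUs."""
    return int(cpus * period_us)

# 0.2 of one CPU with a 1 s period, as in the text:
print(cpu_quota(0.2, period_us=1000000))   # 200000
# Two full CPUs with the default 0.1 s period:
print(cpu_quota(2))                        # 200000
```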
Control the CPU core
You can use the --cpuset-cpus and --cpuset-mems parameters on a server with a multi-core CPU to specify the CPU cores and memory nodes used for running containers.
These parameters can provide an optimal configuration for containers that require high-performance computing, especially on servers with a NUMA topology (multiple CPU cores and multiple memory nodes). However, if a server has only one memory node, configuring --cpuset-mems has essentially no effect.
Run docker run -tid --name cpu1 --cpuset-cpus 0-2 ubuntu to restrict the created container to using only cores 0, 1, and 2.
Use CPU quota control parameters together
Among the preceding parameters, cpu-shares takes effect only when containers compete for time slices on the same core. If you use cpuset-cpus to pin container A to core 0 and container B to core 1, and only these two containers use those cores on the host, each occupies all the resources of its own core and cpu-shares has no effect.
cpu-period and cpu-quota are usually used together. When a single-core CPU is used, or the container is restricted to a single CPU core by cpuset-cpus, the container cannot use more CPU resources even if the value of cpu-quota exceeds that of cpu-period.
cpuset-cpus and cpuset-mems are available only on servers with multiple CPU cores and multiple memory nodes, and the values must match the actual physical configuration; otherwise, resource control will not achieve its purpose.
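The interaction between cpuset-cpus and cpu-quota described above can be sketched as a simple upper bound; the function is illustrative, not part of Docker:

```python
# Sketch: the effective CPU ceiling when cpuset-cpus and cpu-quota are
# combined. Pinning to fewer cores overrides a larger quota, and vice versa.

def max_cpus(cpuset_cores, quota_us, period_us=100000):
    """Upper bound on CPUs a container can use with both limits applied."""
    if quota_us < 0:          # -1 means no quota is applied
        return cpuset_cores
    return min(cpuset_cores, quota_us / period_us)

# The quota would allow 2 CPUs, but the container is pinned to one core:
print(max_cpus(cpuset_cores=1, quota_us=200000))  # 1
```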
Like CPU control, Docker provides some parameters to control the memory quota for containers, such as the container swap size and available memory size. The following parameters are used:
--memory-swappiness: Controls the tendency of a process to swap physical memory out to the swap partition. The value ranges from 0 to 100 and defaults to 60; the smaller the value, the more likely the process is to use physical memory. When the value is 100, the swap partition is used as much as possible; when it is 0, the swap function is disabled. This is different from the host, where swap is not disabled when swappiness is set to 0.
--kernel-memory: Kernel memory, which is never swapped to the swap partition. Generally, we recommend that you do not change this parameter. For more information, see the official Docker documentation.
--memory: Specifies the maximum amount of memory the container can use. The default unit is bytes; you can also use strings with kilobyte, megabyte, or gigabyte suffixes.
--memory-reservation: Enables elastic memory sharing. When host resources are sufficient, the container can use as much memory as it needs; when memory contention or a low amount of free memory is detected, the memory used by the container is forcibly reduced to the size specified by memory-reservation. If this option is not set, some containers may occupy a large amount of memory for a long time, resulting in performance loss.
--memory-swap: The sum of the memory and swap partition sizes. Setting this parameter to -1 indicates an unlimited swap partition size. The default unit is bytes; you can also use strings with kilobyte, megabyte, or gigabyte suffixes. If the value of --memory-swap is smaller than that of --memory, the default value is used, which is twice the value of --memory.
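The memory-limit arithmetic above can be sketched as follows; the unit parser and the "defaults to twice --memory" rule follow the text, while the helper names are ours:

```python
# Sketch: Docker-style size strings and the --memory-swap default rule.

UNITS = {"b": 1, "k": 1024, "m": 1024 ** 2, "g": 1024 ** 3}

def to_bytes(size):
    """Parse a size string such as '512m' into bytes."""
    size = size.strip().lower()
    if size[-1] in UNITS:
        return int(size[:-1]) * UNITS[size[-1]]
    return int(size)  # plain number: already bytes

def effective_swap(memory, memory_swap=None):
    """Swap available to the container, applying the default 2x rule."""
    mem = to_bytes(memory)
    if memory_swap is None or to_bytes(memory_swap) < mem:
        # memory-swap defaults to 2 * memory, so usable swap equals memory
        return mem
    return to_bytes(memory_swap) - mem

print(to_bytes("512m"))       # 536870912
print(effective_swap("1g"))   # 1073741824 (swap equals memory by default)
```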
Only run trusted Docker images on Internet-facing servers, and do not run any Docker image that you do not fully understand.
Docker logs include standard output (stdout) and file logs. Docker supports the debug, info, warn, error, and fatal log levels; the default level is info.
If necessary, set the log level by modifying the configuration file or by using the --log-level parameter. The following methods are available:
Modify the configuration file
Specify --log-driver=syslog --log-opt syslog-facility=daemon when starting the container or daemon to redirect logs to syslog.
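As a sketch of the configuration-file method, the same settings can be placed in the daemon configuration file (/etc/docker/daemon.json; restart the Docker service afterward). The keys mirror the command-line options above:

```json
{
  "log-level": "warn",
  "log-driver": "syslog",
  "log-opts": {
    "syslog-facility": "daemon"
  }
}
```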
You can use a vulnerability scanner in a production environment to detect known vulnerabilities in the image.
Generally, containers are not built from scratch. Therefore, perform security scanning to find vulnerabilities in the base image in time and apply patches.
Add security quality control of vulnerability scanning to the application delivery lifecycle to prevent deployment of vulnerable containers.
By taking the preceding preventive measures, you establish and implement security policies throughout the entire lifecycle of the container, effectively guaranteeing the security of an integrated container environment.