
Harden Docker service

Last Updated: May 07, 2018

Docker is an open source engine that easily creates lightweight, portable, and self-sufficient containers for any application. This article describes how to apply a security hardening scheme to the Docker service to build a secure and reliable integrated container environment.

Harden the host operating system

Before deploying Docker, perform security hardening on the server operating system, for example, by installing all software patches, configuring strong passwords, and disabling unnecessary service ports. For more information, see the following sections.

Use a MAC policy

Enable Mandatory Access Control (MAC) to set access control for various resources in Docker based on analysis of business scenarios.

Run the following command to enable AppArmor:

  docker run --interactive --tty --security-opt="apparmor:PROFILENAME" centos /bin/bash

Run the following command to enable SELinux:

  docker daemon --selinux-enabled
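To confirm which security modules the daemon is actually running with, you can query docker info; the --format template below assumes a Docker version (1.13 or later) that exposes the SecurityOptions field:

```shell
# List the security options (e.g. seccomp, apparmor, selinux) reported by
# the running Docker daemon. Requires an active daemon; output varies by host.
docker info --format '{{.SecurityOptions}}'
```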

Configure a strict network access control policy

Based on the actual scenario, identify the ports that must be reachable from the Internet (such as the management interface and the Docker remote API port 2375) and the network addresses, ports, and protocols that must interact with the Internet.

Use iptables or ECS security group policies to set strict network access control.
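As a sketch, the following iptables rules allow the Docker remote API port 2375 only from a management subnet and drop all other access to it; the subnet 10.0.0.0/24 is a placeholder, so substitute your own trusted network:

```shell
# Allow the Docker remote API (TCP 2375) only from a trusted management
# subnet, then drop every other source. 10.0.0.0/24 is illustrative.
iptables -A INPUT -p tcp --dport 2375 -s 10.0.0.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 2375 -j DROP
```

In an ECS security group, the equivalent policy is an inbound allow rule for TCP 2375 scoped to the management subnet, with no rule permitting the port from 0.0.0.0/0.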

Do not run Docker as the root user

In Docker, some operations must be performed as the root user. For security reasons, you must separate such operations from those that require only the common user permissions.

For example, when writing the Dockerfile, you can use the following instructions to create a user named noroot with common permissions (UID 1000) and have the specified program run as that user:

  RUN useradd noroot -u 1000 -s /bin/bash --no-create-home
  USER noroot
  CMD ["Application_name"]

For more information about Dockerfile instructions, see the official Docker documentation.

Prohibit the privilege option

By default, a Docker container has no privilege and cannot access any device. However, when the --privileged option is enabled, the container can access all devices.

For example, when --privileged is enabled, the container can operate all devices under /dev/ on the host. If the container does not need to access all host devices, use --device to add only the devices it must operate.
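For instance, a container that only needs one block device can be granted exactly that device instead of running privileged; the device path below is illustrative:

```shell
# Grant the container access to a single device instead of all of /dev.
# /dev/sda is a placeholder; "rwm" grants read, write, and mknod permissions.
docker run -tid --device=/dev/sda:/dev/sda:rwm ubuntu
```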

Control the Docker container resource quota

CPU resource quota

Control the CPU share

  • Docker provides the --cpu-shares parameter to specify the CPU share used when a container is created.

    Example: When the command docker run -tid --cpu-shares 100 ubuntu:stress is run to create a container, the CPU share configuration of the generated cgroup can be located in the following file.

    root@ubuntu:~# cat /sys/fs/cgroup/cpu/docker/<Complete ID of the container>/cpu.shares
    100

    The value of cpu-shares is only an elastic weight: it does not guarantee that the container obtains one vCPU or any specific amount of CPU resources in GHz.

  • Docker provides the --cpu-period and --cpu-quota parameters to specify the CPU clock cycle that can be allocated to a container.

    --cpu-period specifies the interval at which the container's CPU resources are reallocated, and --cpu-quota specifies the maximum time the container can run within that cycle. Different from --cpu-shares, cpu-quota is an absolute value and is inelastic: the CPU resources used by the container do not exceed the configured value.

    The values of cpu-period and cpu-quota are in μs. The minimum and maximum values of cpu-period are 1,000 μs and 1 s (10^6 μs), respectively. The default value is 0.1 s (100,000 μs). The default value of cpu-quota is -1, indicating that no quota is specified.

    For example, if a container process needs to use 0.2s of a CPU every 1s, you can set cpu-period to 1000000 (1s) and cpu-quota to 200000 (0.2s). For multi-core CPUs, if a container process needs to completely occupy two CPUs, you can set cpu-period to 100000 (0.1s) and cpu-quota to 200000 (0.2s).

    Example: Run docker run -tid --cpu-period 100000 --cpu-quota 200000 ubuntu to create a container.
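The relationship between the two parameters is simple arithmetic: cpu-quota divided by cpu-period gives the number of CPUs the container may fully occupy. A minimal sketch of computing a quota for a target CPU count:

```shell
# cpu-quota / cpu-period = number of CPUs the container may fully occupy.
CPU_PERIOD=100000              # 0.1 s, the Docker default, in microseconds
DESIRED_CPUS=2                 # target: fully occupy two cores
CPU_QUOTA=$((CPU_PERIOD * DESIRED_CPUS))
echo "--cpu-period ${CPU_PERIOD} --cpu-quota ${CPU_QUOTA}"
```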

Control the CPU core

You can use the --cpuset-cpus and --cpuset-mems parameters on a server with a multi-core CPU to specify the CPU cores and memory nodes used for container running.

This function can be used to provide optimal configuration for containers that require high-performance computing, especially for servers with the NUMA topology (multiple CPU cores and multiple memory nodes). However, if a server has only one memory node, the configuration of --cpuset-mems basically has no effect.

Example: Run docker run -tid --name cpu1 --cpuset-cpus 0-2 ubuntu to restrict the created container to use only cores 0, 1, and 2.
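As with cpu.shares, the resulting core binding can be verified in the container's cgroup; the path below assumes the cgroup v1 layout, and the container ID is a placeholder:

```shell
# Inspect the cpuset binding of a running container (cgroup v1 layout).
# <Complete ID of the container> is a placeholder for the full container ID.
cat /sys/fs/cgroup/cpuset/docker/<Complete ID of the container>/cpuset.cpus
# For the example above, this file should contain: 0-2
```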

Use CPU quota control parameters together

Of the preceding parameters, cpu-shares takes effect only when containers compete for time slices of the same core. If you use cpuset-cpus to bind container A to core 0 and container B to core 1, and only these two containers use the cores on the host, each occupies all the resources of its own core, and cpu-shares has no effect.

cpu-period and cpu-quota are usually used together. When a single-core CPU is used or the container is forced to use only one CPU core by setting cpuset-cpus, the container does not use more CPU resources even if the value of cpu-quota exceeds that of cpu-period.

cpuset-cpus and cpuset-mems are available only for servers with multiple CPU cores and multiple memory nodes and must match the actual physical configuration. Otherwise, the purpose of resource control cannot be achieved.

Memory quota

Like CPU control, Docker provides some parameters to control the memory quota for containers, such as the container swap size and available memory size. The following parameters are used:

  • --memory-swappiness: Controls the tendency of the process to exchange physical memory for the swap partition. The default coefficient is 60. The smaller the coefficient, the more likely the process is to use physical memory. The value ranges from 0 to 100. When the value is 100, the swap partition is used as much as possible. If the value is 0, the swap function is disabled. This is different from the host, on which swap is not disabled when swappiness is set to 0.

  • --kernel-memory: Kernel memory, which is not exchanged to the swap partition. Generally, we recommend that you do not change the value of this parameter. For more information, see the Docker official documentation.

  • --memory: Specifies the maximum memory size used by the container. The default unit is bytes; you can also use strings with kilobyte, megabyte, or gigabyte suffixes.

  • --memory-reservation: Enables elastic memory sharing. When the host resources are sufficient, the container can use as much memory as it needs. When memory competition or a low memory size is detected, the memory used by the container is forcibly reduced to the size specified by memory-reservation. If this option is not set, some containers may occupy a large amount of memory for a long term, resulting in performance loss.

  • --memory-swap: Sum of the memory and swap partition sizes. When this parameter is set to -1, the swap partition size is unlimited. The default unit is bytes; you can also use strings with kilobyte, megabyte, or gigabyte suffixes. If the value set for --memory-swap is smaller than that of --memory, the default value is used, which is twice the value of --memory.
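Putting the memory flags together, the following is an illustrative example that caps a container at 512 MB of RAM and 1 GB of RAM plus swap, with a low swap tendency; all values are examples, not recommendations:

```shell
# Cap the container at 512 MB of RAM; RAM + swap together may not exceed
# 1 GB (leaving up to 512 MB of swap), and swapping is discouraged.
docker run -tid --memory 512m --memory-swap 1g --memory-swappiness 10 ubuntu
```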

Only run trusted Docker images

Run only trusted Docker images on Internet-facing servers, and do not run any Docker image that you do not fully understand.
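One way to enforce this, assuming your images are published with Docker Content Trust signatures, is to enable content trust on the client so that pull and run operations refuse unsigned images:

```shell
# With content trust enabled, docker pull/run only accept image tags that
# carry valid signatures; unsigned tags are rejected.
export DOCKER_CONTENT_TRUST=1
docker pull ubuntu:16.04   # fails if this tag is not signed
```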

Enable the logging function

Docker logs include standard outputs (stdout) and file logs. The Docker-supported log levels include debug, info, warn, error, and fatal, and the default log level is info.

If necessary, set the log level by modifying the configuration file or by specifying the -l or --log-level parameter. The following methods are available:

  • Modify the configuration file /etc/docker/daemon.json.

    {
      "log-level": "debug"
    }
  • Specify --log-driver=syslog --log-opt syslog-facility=daemon when using docker run.
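After editing /etc/docker/daemon.json, the daemon must be restarted for the new log level to take effect; the command below assumes a systemd-based host:

```shell
# Restart the Docker daemon so that daemon.json changes take effect.
# Assumes a systemd-based host; running containers may be interrupted.
sudo systemctl restart docker
```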

Periodically scan vulnerabilities and update patches

You can use a vulnerability scanner in a production environment to detect known vulnerabilities in the image.

Generally, containers are not built from scratch. Therefore, you must perform security scanning to find any vulnerabilities in the basic image in time and update patches.

Add security quality control of vulnerability scanning to the application delivery lifecycle to prevent deployment of vulnerable containers.
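As a sketch of such a quality gate, an open source scanner such as Trivy (an assumed example; the original names no specific tool) can be wired into a CI step that fails the build when serious vulnerabilities are found:

```shell
# Scan a base image and exit non-zero on HIGH or CRITICAL findings,
# which fails the CI pipeline. Trivy is an assumed example scanner.
trivy image --severity HIGH,CRITICAL --exit-code 1 ubuntu:16.04
```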

Summary

By taking the preceding preventive measures, you establish and implement security policies throughout the entire lifecycle of the container, effectively guaranteeing the security of an integrated container environment.