An in-depth explanation of DockerCon 2016: Docker security

Posted: Jan 9, 2017 14:24
This article introduces some of the technologies, the ecosystem, and the latest features related to Docker security.
Container technology
The earliest form of container technology can be traced back to the chroot command, introduced in Unix in 1979. The Linux directory structure starts from the root directory /, under which sit directories such as boot, usr, var, and proc. The chroot command changes the root directory of a program: instead of the system's true root directory, the program sees a separate directory tree prepared by the administrator. One important reason for doing this is security: after chroot, the program can no longer access the real directory structure and files of the system.
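A minimal sketch of this idea from Python, assuming a directory tree at /srv/jail that the administrator has prepared and a process running with root privileges (os.chroot requires them):

```python
import os

# Confine the current process to a prepared directory tree.
# /srv/jail is an illustrative path; it must already contain
# whatever files the process needs.
os.chroot("/srv/jail")   # /srv/jail becomes this process's "/"
os.chdir("/")            # move the working directory inside the new root

# From here on, the process only sees files under /srv/jail.
print(os.listdir("/"))
```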
This technology and idea gradually evolved into namespaces on Linux. Apart from the file system, PIDs, IPC, and the network can also be isolated. Namespaces, together with cgroups for managing resources, constitute container technology on Linux, which is also called operating-system-level virtualization. To a process running in a container, the container looks like a separate server: only the processes in the container, the container's directory structure, and the container's network stack are visible to it. Resources are limited as well; if the memory a process occupies exceeds the limit, the process is killed. Beyond virtualization, containers allow multiple applications to run concurrently without interfering with each other, which significantly improves security.
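A minimal sketch of the resource-limiting side, assuming a cgroup v1 memory controller mounted at /sys/fs/cgroup/memory and root privileges; the group name and the 64 MB limit are illustrative:

```python
import os

# Create a memory cgroup, cap its memory, and place this process in it.
CGROUP = "/sys/fs/cgroup/memory/demo"
os.makedirs(CGROUP, exist_ok=True)

with open(os.path.join(CGROUP, "memory.limit_in_bytes"), "w") as f:
    f.write(str(64 * 1024 * 1024))   # limit the group to 64 MB

with open(os.path.join(CGROUP, "cgroup.procs"), "w") as f:
    f.write(str(os.getpid()))        # add the current process to the group

# If this process now exceeds 64 MB, the kernel's OOM killer terminates it,
# which is the "process will be killed" behaviour described above.
```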
Containers enhance security, but that alone is far from adequate. No software is perfect, and the Linux kernel is no exception. Containers cannot fully confine the processes running in them; there have been vulnerabilities that allowed a process to break out of its container and access resources on the host machine. Beyond such bugs, different containers share one Linux kernel, so a process in a container can interact with the kernel directly, and kernel vulnerabilities may be exploited to attack the whole system.
Security technologies for the Linux system
Linux contains many security features, and container technology is just one of them. The article Overview of Linux Kernel Security Features provides a long list.
Docker security
When Docker was first launched, I once said that Docker equals Container + Image. Although that is a comment from the technical-implementation perspective, entirely a product of a programmer's mindset, it helps us understand the underlying principles. Since Docker is container-based, it inherits the security advantages of containers: an isolated running environment with limited resources. But has Docker solved the security problems that containers themselves cannot, namely escapes and the shared kernel? And are there new security issues?
Docker has always paid attention to security. Even its early versions supported setting capabilities: by dropping some capabilities, the permissions of processes running in the container can be restricted. Docker also provides convenient mechanisms to integrate with Linux Security Modules (LSM), making it easy to configure SELinux or AppArmor and further enhance container security.
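As a hedged sketch of how these knobs are exposed, the following uses the Docker SDK for Python; the image name, the capability choices, and the AppArmor profile are illustrative placeholders, not a recommended policy:

```python
import docker

client = docker.from_env()

# Start a container with a reduced capability set and an LSM profile attached.
container = client.containers.run(
    "nginx:latest",
    detach=True,
    cap_drop=["ALL"],                          # drop every capability ...
    cap_add=["NET_BIND_SERVICE"],              # ... then add back only what is needed
    security_opt=["apparmor=docker-default"],  # attach an AppArmor profile
)
print(container.id)
```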
(Figure: a comparison of the security features supported by Docker, LXC, and CoreOS rkt.)
Docker image security
The image is the only mechanism for distributing Docker-based applications, so its importance is self-evident. What security issues do images have? First, the applications, libraries, and configurations in an image may harbor vulnerabilities. A report last year said that more than 30% of official images contain severe security vulnerabilities because they include flawed applications and libraries, such as OpenSSL versions affected by the Heartbleed vulnerability. Second, there is the question of integrity checking: how do you know the image you are running has never been tampered with?
Docker mainly provides two features to protect image integrity: the content-addressable image storage introduced in Registry V2, and image signing. Let's first briefly review the image download process (Registry V2):
1. Download the manifest. The manifest contains metadata about the image and the IDs of all the layers that constitute it. A layer ID is the SHA256 digest of the layer content (that is, the storage is content-addressable).
2. Download each layer.
After downloading each layer, the client recalculates its SHA256 digest; if the value does not match the layer ID, verification fails. This method not only detects data tampering during transmission but also catches corruption of the layer content.
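A minimal sketch of this verification step; the file name and the expected digest below are illustrative:

```python
import hashlib

# The layer ID in the manifest is the SHA256 digest of the layer blob,
# so the client can verify a download simply by re-hashing it.
expected_id = "sha256:4bcdffd70da292293d059d2435c7056711fab2655f8b74f48ad0abe042b63687"

with open("layer.tar.gz", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

if "sha256:" + digest != expected_id:
    raise ValueError("layer verification failed: content does not match its ID")
```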
Image signing ensures the integrity of the image during distribution. A man-in-the-middle could modify both the layer content and the manifest at the same time; the client's SHA256 verification would still pass, yet the downloaded image would no longer be the expected one and might contain something malicious. With image signing, the image builder adds a digital signature to the manifest, and after downloading the manifest the client verifies, based on that signature, whether it has been tampered with.
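In Docker this is implemented by Docker Content Trust, built on Notary/TUF, which is considerably more involved. The sketch below only illustrates the underlying idea of signing and verifying a manifest blob, using the third-party cryptography package and placeholder data:

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Placeholder manifest content; real manifests are JSON documents listing layers.
manifest = b'{"layers": ["sha256:4bcd...", "sha256:9a1f..."]}'

# Builder side: sign the manifest with a private key.
private_key = Ed25519PrivateKey.generate()
signature = private_key.sign(manifest)

# Client side: verify the downloaded manifest against the builder's public key.
public_key = private_key.public_key()
try:
    public_key.verify(signature, manifest)
    print("manifest signature OK")
except InvalidSignature:
    print("manifest has been tampered with")
```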
Security ecosystem
Many companies are currently in the Docker security business. At DockerCon 2016 we could see Twistlock and Aqua, and Docker itself offers security products, such as the image security scanning feature of Docker Cloud. There are also open-source products in this space: CoreOS recently open-sourced its image scanning tool, Clair.
Twistlock and Aqua are quite similar in terms of product features: both offer image scanning, runtime container scanning, and access control. Here access control means taking over the Docker API. Clients no longer talk to Docker directly; instead they talk to the Twistlock or Aqua program, which performs permission checks and other operations before forwarding the requests to Docker.
(Figure: Twistlock architecture, composed of a console plus agents running on each Docker host.)
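A hedged sketch of the access-control idea described above: before forwarding a request to the Docker daemon, the gatekeeping proxy inspects it and rejects anything that violates policy. The endpoint path and fields follow the Docker Engine API, but the rule itself (deny privileged containers) is just an example:

```python
import json

# Policy check a Docker API proxy might run before forwarding a request.
def is_request_allowed(method: str, path: str, body: bytes) -> bool:
    if method == "POST" and path.startswith("/containers/create"):
        spec = json.loads(body or b"{}")
        host_config = spec.get("HostConfig", {})
        if host_config.get("Privileged"):
            return False        # deny requests that create privileged containers
    return True                 # everything else gets forwarded to the daemon

# Example: this request would be rejected by the proxy.
req_body = json.dumps({"Image": "nginx", "HostConfig": {"Privileged": True}}).encode()
print(is_request_allowed("POST", "/containers/create", req_body))   # -> False
```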
Because multiple Docker containers on one host machine share a kernel, there is a view in the industry that Docker is not suitable for multi-tenant scenarios. If the kernel must not be shared, the most straightforward approach is to allocate a kernel for every container, which is what Hyper does. Another approach is to use SELinux/AppArmor and seccomp to configure a secure sandbox, which can also achieve multi-tenant security; traditional LXC-based PaaS platforms adopt this approach.
Summary
Docker security is a big topic, covering everything from the operating system kernel to enterprise security strategies, including data, network, and storage security solutions. This article only offers a brief introduction.
In general, when using Docker, never expose daemon access to the Internet without any protection measures, as mentioned at the beginning of this article. Second, for multi-tenant environments or other scenarios with high security requirements on the running environment, consider combining the various kernel security mechanisms to achieve defense in depth, or choose virtualization plus a lightweight kernel. Finally, use image scanning tools to eliminate security defects in the services you expose externally.