Alibaba Cloud Container Service has fixed the runc vulnerability CVE-2019-5736. This topic describes the impacts and fixes of the vulnerability.

Background information

Docker, containerd, and other runc-based container runtimes are affected by a security vulnerability in runc. An attacker who can execute a command as root within an existing container, to which the attacker previously had write access and that can be attached with docker exec, can overwrite the runc binary on the host and consequently obtain root access to the host.

For more information about the vulnerability CVE-2019-5736, see the Alibaba Cloud vulnerability library.
Note On the homepage of the vulnerability library, click Search CVE List, enter CVE-2019-5736, and then click Submit to go to the details page of CVE-2019-5736.

Impacts

  • The impact on Alibaba Cloud Container Service is as follows:

    Docker Swarm clusters and Kubernetes clusters that use a Docker version earlier than 18.09.2 are affected.

  • The impact on self-managed Docker or Kubernetes environments is as follows:

    Environments that use a Docker version earlier than 18.09.2 or a runc version earlier than 1.0-rc6 are affected.
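
You can check whether a node is affected by comparing the installed Docker and runc versions against these thresholds. The following commands are a minimal check sketch; depending on the installation, the runc binary may be named docker-runc or runc.

    docker version --format '{{.Server.Version}}'    # affected if the version is earlier than 18.09.2
    docker-runc -v                                   # affected if the runc version is earlier than 1.0-rc6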

Fixes

Alibaba Cloud Container Service has fixed this vulnerability. Kubernetes clusters of version 1.11 and 1.12 use a Docker version that already contains the fix. You can use either of the following methods to fix the vulnerability in an existing cluster:

  • Upgrade Docker. Upgrade Docker on the existing cluster to version 18.09.2 or later. This method interrupts the containerized services that are running on the cluster.
  • Upgrade only runc (for Docker 17.06). To avoid the service interruption caused by upgrading the Docker engine, you can perform the following steps to upgrade only the runc binary on each cluster node. A consolidated script sketch follows these steps.
    1. Run the following command to locate docker-runc. The docker-runc binary is typically located at /usr/bin/docker-runc.
      which docker-runc
    2. Run the following command to back up the original runc:
      mv /usr/bin/docker-runc /usr/bin/docker-runc.orig.$(date -Iseconds)
    3. Run the following command to download the fixed runc:
      curl -o /usr/bin/docker-runc -sSL https://acs-public-mirror.oss-cn-hangzhou.aliyuncs.com/runc/docker-runc-17.06-amd64
    4. Run the following command to set the execution permissions for docker-runc:
      chmod +x /usr/bin/docker-runc
    5. Run the following command to test whether runc works properly:
      docker-runc -v
      # runc version 1.0.0-rc3
      # commit: fc48a25bde6fb041aae0977111ad8141ff396438
      # spec: 1.0.0-rc5
      docker run -it --rm ubuntu echo OK
    6. For GPU nodes in a Kubernetes cluster, you must also perform the following steps to upgrade nvidia-container-runtime.
      1. Run the following command to locate nvidia-container-runtime. The nvidia-container-runtime binary is typically located at /usr/bin/nvidia-container-runtime.
        which nvidia-container-runtime
      2. Run the following command to back up the original nvidia-container-runtime:
        mv /usr/bin/nvidia-container-runtime /usr/bin/nvidia-container-runtime.orig.$(date -Iseconds)
      3. Run the following command to download the fixed nvidia-container-runtime:
        curl -o /usr/bin/nvidia-container-runtime -sSL https://acs-public-mirror.oss-cn-hangzhou.aliyuncs.com/runc/nvidia-container-runtime-17.06-amd64
      4. Run the following command to set the execution permissions for nvidia-container-runtime:
        chmod +x /usr/bin/nvidia-container-runtime
      5. Run the following command to test whether nvidia-container-runtime works properly:
        nvidia-container-runtime -v
        #  runc version 1.0.0-rc3
        #  commit: fc48a25bde6fb041aae0977111ad8141ff396438-dirty
        #  spec: 1.0.0-rc5
        
        docker run -it --rm -e NVIDIA_VISIBLE_DEVICES=all ubuntu nvidia-smi -L
        #  GPU 0: Tesla P100-PCIE-16GB (UUID: GPU-122e199c-9aa6-5063-0fd2-da009017e6dc)
        Note This test was run on a Tesla P100 GPU. The output varies with the GPU model.
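
    For reference, the following script is a minimal sketch that combines steps 1 through 5, plus the GPU-specific steps for nodes on which nvidia-container-runtime is installed. It assumes the default binary locations and the download URLs shown above; adjust the paths if the which command returns a different location on your nodes.

      #!/bin/bash
      set -e

      # Upgrade docker-runc (steps 1-5). The binary is assumed to be at the
      # path reported by `which docker-runc`, typically /usr/bin/docker-runc.
      RUNC_PATH=$(which docker-runc)
      mv "$RUNC_PATH" "$RUNC_PATH.orig.$(date -Iseconds)"    # back up the original runc
      curl -o "$RUNC_PATH" -sSL https://acs-public-mirror.oss-cn-hangzhou.aliyuncs.com/runc/docker-runc-17.06-amd64
      chmod +x "$RUNC_PATH"
      docker-runc -v                                         # verify the new runc version
      docker run -it --rm ubuntu echo OK                     # verify that containers still start

      # Upgrade nvidia-container-runtime on GPU nodes only (step 6).
      if which nvidia-container-runtime >/dev/null 2>&1; then
          NV_PATH=$(which nvidia-container-runtime)
          mv "$NV_PATH" "$NV_PATH.orig.$(date -Iseconds)"    # back up the original binary
          curl -o "$NV_PATH" -sSL https://acs-public-mirror.oss-cn-hangzhou.aliyuncs.com/runc/nvidia-container-runtime-17.06-amd64
          chmod +x "$NV_PATH"
          nvidia-container-runtime -v                        # verify the new runtime version
          docker run -it --rm -e NVIDIA_VISIBLE_DEVICES=all ubuntu nvidia-smi -L
      fi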