You can use the Container Storage Interface (CSI) plug-in to directly mount a disk to a sandboxed container, which significantly improves I/O performance. This topic describes how to use the CSI plug-in to directly mount a disk to a sandboxed container and provides an example that demonstrates the mounting process. A disk can also be mounted to a container through the host or the 9PFS file system. This topic also compares the I/O performance of the three mount modes.

Background information

You can mount disks to sandboxed containers by using a CSI plug-in. In the community solution, a disk is first mounted to the host and formatted, and then mounted to a local directory on the host. This local directory is shared with the container through 9PFS. However, 9PFS drastically degrades the I/O performance of containers. To improve I/O performance, Container Service for Kubernetes (ACK) allows you to mount disks directly to sandboxed containers through a CSI plug-in. With this feature, the disk is mounted into the container after the container is started, whereas previously the disk could only be mounted to the host before the container was started. This improves I/O performance because the container no longer needs 9PFS to access the disk.

Starting from Sandboxed-Container v1.1.0, ACK enables this feature by default.
Figure 1. Comparison between the community solution and the ACK solution

How the direct mount solution works

The following table describes how a disk is mounted to a sandboxed container through the CSI plug-in.

Step    Description
1       The kubelet requests the CSI plug-in to mount a disk.
2       The CSI plug-in sends a request to QueryServer to query whether a mounted volume that corresponds to the disk exists. QueryServer is a local database that stores information about mounted volumes.
3       If no such mounted volume is found, information about the disk, such as the mount point and mounted directory, is written to QueryServer, and then the disk is formatted.
4       When the pod is ready, the kubelet starts to create the container. The request to mount the disk is eventually forwarded to Kata-Runtime.
5       Kata-Runtime sends a request to QueryServer to query information about the disk, including the mount point and mounted directory.
6       Kata-Runtime sends the mount request to Kata-Agent.
7       Kata-Agent starts the container and mounts the disk to the container.
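
Before you run the example in the next section, you can check that the components involved in this workflow exist in your cluster. The following commands are a quick, non-authoritative check; the pod name prefix (csi-plugin) and the RuntimeClass name (runv) assume a standard ACK installation with Sandboxed-Container enabled.

    # Check that the CSI plug-in pods are running on the worker nodes.
    kubectl get pods -n kube-system | grep csi-plugin
    # Check that the runv RuntimeClass used by sandboxed containers exists.
    kubectl get runtimeclass runv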

Example

The following example describes how to mount a disk to a sandboxed container by using a YAML file template to create resource objects.

  1. Use the following template to create resource objects:
    cat <<EOF | kubectl create -f -
    allowVolumeExpansion: true
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: alicloud-disk-ssd
    parameters:
      type: cloud_ssd
    provisioner: diskplugin.csi.alibabacloud.com
    reclaimPolicy: Delete
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: disk-pvc-01
      namespace: default
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 25Gi
      storageClassName: alicloud-disk-ssd
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      labels:
        app: busybox
      name: busybox
      namespace: default
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: busybox
      template:
        metadata:
          labels:
            app: busybox
        spec:
          containers:
            - name: busybox
              image: registry.cn-hangzhou.aliyuncs.com/acs/busybox:v1.29.2
              command:
              - tail
              - -f
              - /dev/null
              volumeMounts:
                - mountPath: "/data"
                  name: disk-pvc
          dnsPolicy: ClusterFirst
          restartPolicy: Always
          runtimeClassName: runv
          volumes:
            - name: disk-pvc
              persistentVolumeClaim:
                claimName: disk-pvc-01
    EOF
  2. Run the following commands to check the type of the mount point in the pod:
    kubectl get pods
    kubectl exec -it ${podid} -- sh
    mount | grep /data | grep -vi 9p
    If the command returns a mount entry and its file system type is not 9PFS, the disk is directly mounted to the container, as illustrated by the sample output below.
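
The following output is only an illustration of what a direct mount looks like inside the container. The device name (/dev/vdb) and file system type (ext4) are examples; the actual values depend on your node and StorageClass.

    / # mount | grep /data | grep -vi 9p
    /dev/vdb on /data type ext4 (rw,relatime)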

I/O performance comparison

  • Random read operations per second

    Mount mode     Result
    Host           read: IOPS=2571, BW=10.0MiB/s (10.5MB/s)(604MiB/60094msec)
    CSI plug-in    read: IOPS=2571, BW=10.0MiB/s (10.5MB/s)(603MiB/60006msec)
    9PFS           read: IOPS=2558, BW=9.99MiB/s (10.5MB/s)(600MiB/60001msec)
  • Random write operations per second

    Mount mode     Result
    Host           write: IOPS=2481, BW=9926KiB/s (10.2MB/s)(582MiB/60011msec)
    CSI plug-in    write: IOPS=2481, BW=9926KiB/s (10.2MB/s)(582MiB/60005msec)
    9PFS           write: IOPS=1280, BW=5123KiB/s (5246kB/s)(300MiB/60001msec)
  • Random read throughput

    Mount mode     Result
    Host           read: IOPS=133, BW=133MiB/s (140MB/s)(8110MiB/60926msec)
    CSI plug-in    read: IOPS=133, BW=133MiB/s (140MB/s)(8052MiB/60514msec)
    9PFS           read: IOPS=10, BW=10.0MiB/s (10.5MB/s)(603MiB/60079msec)
  • Random write throughput

    Mount mode     Result
    Host           write: IOPS=130, BW=130MiB/s (137MB/s)(7854MiB/60251msec)
    CSI plug-in    write: IOPS=130, BW=131MiB/s (137MB/s)(7907MiB/60370msec)
    9PFS           write: IOPS=5, BW=5123KiB/s (5246kB/s)(301MiB/60159msec)
Notice: By default, the QEMU cache is enabled for sandboxed containers to improve the I/O performance of 9PFS. The preceding results were recorded with the QEMU cache disabled.

According to the preceding data, the throughput and IOPS of a directly mounted disk are almost the same as those of a disk that is mounted to the host. The direct mount and 9PFS modes deliver a similar number of random read operations per second. However, the direct mount mode performs significantly better in terms of random write operations per second, random read throughput, and random write throughput. Measurements of this kind can be collected with fio, as sketched below.
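
Results in the preceding format are produced by fio. The following commands are a minimal sketch of tests that could yield comparable measurements. Run them in an environment that has fio installed (the busybox image in the example above does not include fio); the block sizes, file size, I/O depth, and test file path (/data/testfile) are assumptions, not the exact parameters used for the numbers above.

    # Random read and write IOPS (small blocks) against the mounted volume.
    fio --name=rand-read-iops --filename=/data/testfile --direct=1 --rw=randread --bs=4k --size=1G --iodepth=16 --ioengine=libaio --runtime=60 --time_based --group_reporting
    fio --name=rand-write-iops --filename=/data/testfile --direct=1 --rw=randwrite --bs=4k --size=1G --iodepth=16 --ioengine=libaio --runtime=60 --time_based --group_reporting

    # Random read and write throughput (large blocks).
    fio --name=rand-read-bw --filename=/data/testfile --direct=1 --rw=randread --bs=1m --size=1G --iodepth=16 --ioengine=libaio --runtime=60 --time_based --group_reporting
    fio --name=rand-write-bw --filename=/data/testfile --direct=1 --rw=randwrite --bs=1m --size=1G --iodepth=16 --ioengine=libaio --runtime=60 --time_based --group_reporting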