
Container Service for Kubernetes:Best practice for throttling block storage devices

Last Updated:Mar 26, 2026

Container Service for Kubernetes (ACK) lets you set I/O limits on block storage devices mounted to a pod, such as a cloud disk or a local disk virtualized with Logical Volume Manager (LVM). Open source Kubernetes does not support this capability. Throttling prevents a single pod from monopolizing disk I/O and degrading other workloads on the same node.

Prerequisites

Before you begin, ensure that you have:

  • A host OS of Alibaba Cloud Linux 2 or later

  • An ACK cluster running Kubernetes 1.20 or later

  • Container Storage Interface (CSI) plug-in version 1.22 or later. For details, see csi-provisioner.

Throttling parameters

Set the following parameters in a StorageClass (dynamic provisioning) or in the volumeAttributes section of a PersistentVolume (static provisioning):

| Parameter | Description | Example value |
| --- | --- | --- |
| readIOPS | Maximum read operations per second for a pod | "100" |
| writeIOPS | Maximum write operations per second for a pod | "10" |
| readBPS | Maximum bytes read per second for a pod | "100k", "100m" |
| writeBPS | Maximum bytes written per second for a pod | "100k", "100m" |
Important

Throttling settings configured for a cloud disk cannot be changed after the disk is created. Adjusting the throttling parameters in a StorageClass applies only to newly created cloud disks; existing disks are not affected.
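The BPS values use single-letter size suffixes. The sketch below illustrates the mapping from a suffixed value to a raw byte count, assuming the common 1024-based interpretation (k = KiB/s, m = MiB/s); confirm the exact parsing behavior against your CSI plug-in version. The raw byte values are what you will later see in the node's cgroup files.

```shell
# Convert a suffixed BPS value such as "100k" or "100m" to bytes per second.
# Assumes 1024-based units; the cgroup throttle files show raw byte values.
to_bytes() {
  v=$(printf '%s' "$1" | tr 'A-Z' 'a-z')   # normalize suffix to lowercase
  n=${v%[kmg]}                             # numeric part without the suffix
  case $v in
    *k) echo $(( n * 1024 )) ;;
    *m) echo $(( n * 1048576 )) ;;
    *g) echo $(( n * 1073741824 )) ;;
    *)  echo "$n" ;;                       # no suffix: already in bytes
  esac
}

to_bytes 100k   # 102400
to_bytes 100m   # 104857600
```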

Throttle a cloud disk

The following sections show how to configure throttling for a cloud disk mounted as a dynamically or statically provisioned volume.

Dynamic provisioning

  1. Create a file named alicloud-disk-topology-essd.yaml with the following content:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: alicloud-disk-topology-essd
    parameters:
      type: cloud_essd
      readIOPS: "100"
      writeIOPS: "10"
      readBPS: "100k"
      writeBPS: "100m"
    provisioner: diskplugin.csi.alibabacloud.com
    reclaimPolicy: Delete
    volumeBindingMode: WaitForFirstConsumer

    Key fields:

    • parameters.readIOPS / parameters.writeIOPS: IOPS limits applied to each pod using this StorageClass.

    • parameters.readBPS / parameters.writeBPS: Throughput limits in bytes per second.

    • volumeBindingMode: WaitForFirstConsumer: Delays volume creation until a pod is scheduled, ensuring the disk is created in the correct availability zone.

  2. Create the StorageClass:

    kubectl apply -f alicloud-disk-topology-essd.yaml
  3. Create a file named nginx.yaml with the following content. The StatefulSet references the StorageClass to dynamically provision an ESSD cloud disk as a volume.

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: web-csi-encrypt
    spec:
      selector:
        matchLabels:
          app: nginx
      serviceName: "nginx"
      podManagementPolicy: "Parallel"
      replicas: 1
      template:
        metadata:
          labels:
            app: nginx
        spec:
          hostNetwork: true
          containers:
          - name: nginx
            command:
            - sleep
            - "999999999"
            image: anolis-registry.cn-zhangjiakou.cr.aliyuncs.com/openanolis/nginx:1.14.1-8.6
            volumeMounts:
            - name: disk-csi
              mountPath: /data
      volumeClaimTemplates:
      - metadata:
          name: disk-csi
        spec:
          accessModes: [ "ReadWriteOnce" ]
          storageClassName: alicloud-disk-topology-essd
          resources:
            requests:
              storage: 80Gi
  4. Deploy the application:

    kubectl apply -f nginx.yaml
  5. Verify that the throttling limits are applied. See Verify throttling limits.

Static provisioning

  1. Create a file named pv-static.yaml with the following content. Replace d-**** with your cloud disk ID.

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: d-****
    spec:
      accessModes:
      - ReadWriteMany
      capacity:
        storage: 80Gi
      csi:
        driver: diskplugin.csi.alibabacloud.com
        fsType: ext4
        volumeAttributes:
          app: nginx
          type: cloud_ssd
          readBPS: "100k"
          readIOPS: "100"
        volumeHandle: d-****
      persistentVolumeReclaimPolicy: Retain
      volumeMode: Filesystem

    Key fields:

    • volumeAttributes.readIOPS / volumeAttributes.readBPS: Throttling parameters for this specific disk.

    • volumeHandle: The ID of the existing cloud disk to use.

  2. Create the PersistentVolume (PV):

    kubectl apply -f pv-static.yaml
  3. Create a file named pvc-static.yaml with the following content:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      labels:
        app: nginx
      name: disk-pvc
      namespace: default
    spec:
      accessModes:
      - ReadWriteMany
      resources:
        requests:
          storage: 80Gi
      volumeMode: Filesystem
      volumeName: d-****
  4. Create the PersistentVolumeClaim (PVC):

    kubectl apply -f pvc-static.yaml
  5. Create a file named nginx.yaml with the following content. The Deployment uses the PVC to mount the statically provisioned cloud disk.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: deploy-disk
      labels:
        app: nginx
    spec:
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: anolis-registry.cn-zhangjiakou.cr.aliyuncs.com/openanolis/nginx:1.14.1-8.6
            ports:
            - containerPort: 80
            volumeMounts:
            - name: disk-pvc2
              mountPath: /data
          volumes:
            - name: disk-pvc2
              persistentVolumeClaim:
                claimName: disk-pvc
  6. Deploy the application:

    kubectl apply -f nginx.yaml
  7. Verify that the throttling limits are applied. See Verify throttling limits.

Verify throttling limits

Use the cgroup paths on the node to confirm that the IOPS and BPS limits are set correctly.

Check the cgroup version

Log in to the node and run:

stat -fc %T /sys/fs/cgroup
  • If the output is cgroup2fs, the node uses cgroup V2.

  • Otherwise, the node uses cgroup V1.
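The cgroup paths in the following sections embed the pod UID, but the UID shown by kubectl (kubectl get pod <pod-name> -o jsonpath='{.metadata.uid}') contains dashes, which systemd-based cgroup drivers replace with underscores in the slice name. A minimal sketch of constructing the slice name, using a made-up UID (note also that the paths below assume a BestEffort pod; pods in other QoS classes live under a different parent slice, such as kubepods-burstable.slice):

```shell
# Hypothetical pod UID; on a real cluster obtain it with:
#   kubectl get pod <pod-name> -o jsonpath='{.metadata.uid}'
uid="4fa56377-6d1e-4a7e-9c3b-0a1b2c3d4e5f"

# systemd cgroup drivers replace the dashes in the UID with underscores
slice="kubepods-besteffort-pod$(printf '%s' "$uid" | tr '-' '_').slice"
echo "$slice"
# kubepods-besteffort-pod4fa56377_6d1e_4a7e_9c3b_0a1b2c3d4e5f.slice
```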

Verify limits on cgroup V2

Construct the path using the pod UID and check the io.max file:

/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod<UID>.slice/io.max
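Each io.max line has the form "<major>:<minor> rbps=<value> wbps=<value> riops=<value> wiops=<value>", with "max" meaning unlimited. With the example limits used earlier (readIOPS "100", writeIOPS "10", readBPS "100k", writeBPS "100m"), the entry would look roughly like the sample below; the device number 253:16 is a made-up example, and the parsing shown is one way to extract a single limit for comparison:

```shell
# Sample io.max line (hypothetical device number); on a node you would
# cat the io.max file at the pod's cgroup path instead.
line="253:16 rbps=102400 wbps=104857600 riops=100 wiops=10"

# Extract the read-IOPS limit to compare against the StorageClass value
riops=$(printf '%s\n' "$line" | grep -o 'riops=[0-9]*' | cut -d= -f2)
echo "$riops"   # 100
```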

Verify limits on cgroup V1

Construct the paths using the pod UID and check the following files:

| Metric | cgroup V1 path |
| --- | --- |
| readIOPS | /sys/fs/cgroup/blkio/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod{pod_uid}.slice/blkio.throttle.read_iops_device |
| writeIOPS | /sys/fs/cgroup/blkio/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod{pod_uid}.slice/blkio.throttle.write_iops_device |
| readBPS | /sys/fs/cgroup/blkio/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod{pod_uid}.slice/blkio.throttle.read_bps_device |
| writeBPS | /sys/fs/cgroup/blkio/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod{pod_uid}.slice/blkio.throttle.write_bps_device |
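Each cgroup V1 throttle file holds one "<major>:<minor> <limit>" pair per line for every throttled device. A sketch of reading such a file's contents, using a made-up device number and the readIOPS limit from the earlier examples:

```shell
# Sample blkio.throttle.read_iops_device content (hypothetical device
# number); on a node you would cat the file at the pod's cgroup path.
content="253:16 100"

dev=$(printf '%s\n' "$content" | awk '{print $1}')     # device major:minor
limit=$(printf '%s\n' "$content" | awk '{print $2}')   # configured limit
echo "device $dev read IOPS limit: $limit"
```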