
Container Service for Kubernetes: Use a statically provisioned disk volume

Last Updated: Mar 26, 2026

Mount an existing cloud disk to a pod as a PersistentVolume (PV) for high-throughput, low-latency persistent storage in your ACK cluster.

With static provisioning, you create the PV and PersistentVolumeClaim (PVC) manually before deploying your workload. This gives you full control over which disk is used and keeps the disk independent of the pod lifecycle.

Use cases

Static disk volumes are a good fit when:

  • Your application requires high disk I/O — for example, running MySQL or Redis.

  • You need to write logs at high speed.

  • You have an existing disk that you want to reuse across pod restarts.

For more information about block storage volumes, see Block storage volumes.

Prerequisites

Before you begin, ensure that you have:

  • The Container Storage Interface (CSI) plugin installed in your cluster. Check the status of the csi-plugin and csi-provisioner components on the Add-ons page, under the Storage tab. To use features that require a newer plugin version, see Upgrade the CSI plugin. If your cluster still uses FlexVolume, migrate to the CSI plugin first; FlexVolume is deprecated.

  • An existing cloud disk that meets these requirements:

    • Billing method: pay-as-you-go

    • Status: Available

    • Same zone as the ECS node where the pod will run

    • Disk type compatible with the ECS instance type — see Instance families for compatibility details

Important

Disks cannot be mounted across zones. If the disk type is incompatible with the node's instance type, the mount will fail.

Usage notes

  • One disk, one pod at a time. Disks are non-shared storage. Unless multi-attach is enabled, a single disk can only be mounted to one pod at a time. See Use the multi-attach and reservation features of NVMe disks.

  • Zone constraint. A disk can only be mounted to a pod in the same zone. Cross-zone mounting is not supported.

  • Pod rebuild behavior. When a pod is rebuilt, the original disk is remounted. If the pod cannot be scheduled to the original zone due to other constraints, the pod stays in the Pending state.

  • Use StatefulSets or individual pods, not Deployments. When multi-attach is disabled, a disk can be mounted to only one pod. If you must mount a disk to a Deployment, you must set the number of replicas to 1. Even then, you cannot guarantee the priority of mounting and unmounting, and the rolling update strategy may prevent the new pod from mounting the disk during restarts. Therefore, mounting disks to Deployments is not recommended.

  • `securityContext.fsGroup` increases mount time. If you configure securityContext.fsGroup, kubelet recursively runs chmod and chown on the volume after mounting it, which adds overhead proportional to the number of files. For Kubernetes 1.20 and later, set fsGroupChangePolicy: OnRootMismatch to skip the recursive permission change when the ownership and permissions of the volume root already match. For finer control, use an init container to manage permissions.
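
The fsGroup behavior above can be sketched in a pod template fragment like the following (a minimal illustration only; the GID, volume name, and mount path are placeholders, not values from this topic):

```yaml
# Hypothetical pod spec fragment: apply group ownership to the mounted volume,
# but skip the recursive chown/chmod when the volume root already matches.
spec:
  securityContext:
    fsGroup: 1000                         # kubelet chowns volume contents to GID 1000 after mount
    fsGroupChangePolicy: "OnRootMismatch" # requires Kubernetes 1.20 or later
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: pvc-disk
      mountPath: /data
```

With this policy set, the recursive permission change runs only when the root of the volume does not already have the expected ownership, which avoids the per-file overhead on every mount.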

Mount a statically provisioned disk volume using kubectl

Step 1: Create a PV

Role: cluster administrator

  1. Connect to your cluster using kubectl, or use CloudShell or Workbench in the console.

  2. Create a file named disk-pv.yaml with the following content, replacing the placeholders with your disk details.

    Placeholder             Description                          Example
    <YOUR-DISK-ID>          ID of your existing disk             d-uf628m33r5rsbi******
    <YOUR-DISK-SIZE>        Disk capacity                        20Gi
    <YOUR-DISK-ZONE-ID>     Zone where the disk is located       cn-shanghai-f
    <YOUR-DISK-CATEGORY>    Disk type value (see table below)    cloud_essd

    Disk type values for `<YOUR-DISK-CATEGORY>`:

    Disk type              Value
    ESSD Entry disk        cloud_essd_entry
    ESSD AutoPL disk       cloud_auto
    ESSD                   cloud_essd
    Standard SSD           cloud_ssd
    Ultra disk             cloud_efficiency
    Zone-redundant disk    cloud_regional_disk_auto (requires different nodeAffinity; see the parameter table below)

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: "<YOUR-DISK-ID>"
      annotations:
        csi.alibabacloud.com/volume-topology: '{"nodeSelectorTerms":[{"matchExpressions":[{"key":"node.csi.alibabacloud.com/disktype.<YOUR-DISK-CATEGORY>","operator":"In","values":["available"]}]}]}'
    spec:
      capacity:
        storage: "<YOUR-DISK-SIZE>"
      claimRef:
        apiVersion: v1
        kind: PersistentVolumeClaim
        namespace: default
        name: disk-pvc
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Retain
      csi:
        driver: diskplugin.csi.alibabacloud.com
        volumeHandle: "<YOUR-DISK-ID>"
      nodeAffinity:
        required:
          nodeSelectorTerms:
          - matchExpressions:
            - key: topology.diskplugin.csi.alibabacloud.com/zone
              operator: In
              values:
              - "<YOUR-DISK-ZONE-ID>"
      storageClassName: alicloud-disk-topology-alltype
      volumeMode: Filesystem

    Key parameters:

    • csi.alibabacloud.com/volume-topology (recommended; default: none): Constrains pod scheduling to nodes that support the specified disk type. Set <YOUR-DISK-CATEGORY> to avoid scheduling failures caused by disk-instance type incompatibility.

    • claimRef (optional; default: none): Binds this PV to a specific PVC. Remove this block to allow any compatible PVC to bind to this PV.

    • accessModes (required): Must be ReadWriteOnce; the volume can be mounted as read-write by a single pod.

    • persistentVolumeReclaimPolicy (required; default: Retain): With Retain, the PV and disk are kept when the PVC is deleted and you must clean up manually. With Delete, the PV and disk are deleted when the PVC is deleted.

    • driver (required): Always diskplugin.csi.alibabacloud.com for Alibaba Cloud disk volumes.

    • nodeAffinity (required): Restricts pod scheduling to the same zone as the disk. For zone-redundant disks, replace this block with a region-level affinity so the disk can be mounted in any zone within the region: topology.kubernetes.io/region: <YOUR-DISK-REGION-ID> (for example, cn-shanghai).

    • storageClassName (required): Must match the storageClassName in the PVC exactly. The value alicloud-disk-topology-alltype is used here; you do not need to create this StorageClass in advance for static provisioning.
    Important

    The storageClassName must be identical in both the PV and PVC. A mismatch causes the PVC to remain unbound or bind to a different PV. If your cluster has a default StorageClass, set storageClassName explicitly on both the PV and PVC to prevent the PVC from binding to a dynamically provisioned volume instead of the intended PV.
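
    For a zone-redundant disk (cloud_regional_disk_auto), the zone-level nodeAffinity block in the PV manifest above would instead be region-level, along these lines (<YOUR-DISK-REGION-ID> is a placeholder, for example cn-shanghai):

    ```yaml
    # Region-level affinity for a zone-redundant disk: replaces the zone-level
    # nodeAffinity block shown in the PV manifest above.
    nodeAffinity:
      required:
        nodeSelectorTerms:
        - matchExpressions:
          - key: topology.kubernetes.io/region
            operator: In
            values:
            - "<YOUR-DISK-REGION-ID>"
    ```

    This lets the pod, and therefore the disk, be scheduled to any zone within the region rather than a single zone.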

  3. Create the PV.

    kubectl create -f disk-pv.yaml
  4. Verify that the PV is in the Available state.

    kubectl get pv

    Expected output:

    NAME                     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM              STORAGECLASS                     VOLUMEATTRIBUTESCLASS   REASON   AGE
    d-uf628m33r5rsbi******   20Gi       RWO            Retain           Available   default/disk-pvc   alicloud-disk-topology-alltype   <unset>                          1m36s

Step 2: Create a PVC

Role: developer

  1. Create a file named disk-pvc.yaml with the following content.

    • accessModes (required): Must match the PV's accessModes. Only ReadWriteOnce is supported.

    • storage (required): Storage capacity to request. Cannot exceed the disk's actual capacity.

    • storageClassName (required): Must match the PV's storageClassName exactly (alicloud-disk-topology-alltype in this example).

    • volumeName (recommended; default: none): Binds this PVC to a specific PV by name. Remove this field to allow the PVC to bind to any compatible PV.

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: disk-pvc
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: "<YOUR-DISK-SIZE>"
      storageClassName: alicloud-disk-topology-alltype
      volumeName: "<YOUR-DISK-ID>"
  2. Create the PVC.

    kubectl create -f disk-pvc.yaml
  3. Verify that the PVC is bound to the PV.

    kubectl get pvc

    Expected output showing Bound status:

    NAME       STATUS   VOLUME                   CAPACITY   ACCESS MODES   STORAGECLASS                     VOLUMEATTRIBUTESCLASS   AGE
    disk-pvc   Bound    d-uf628m33r5rsbi******   20Gi       RWO            alicloud-disk-topology-alltype   <unset>                 64s

Step 3: Deploy a StatefulSet and mount the disk

Role: developer

  1. Create a file named disk-test.yaml with the following content. This creates a StatefulSet with one pod that mounts the disk-pvc PVC at /data.

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: disk-test
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: anolis-registry.cn-zhangjiakou.cr.aliyuncs.com/openanolis/nginx:1.14.1-8.6
            ports:
            - containerPort: 80
            volumeMounts:
            - name: pvc-disk
              mountPath: /data
          volumes:
            - name: pvc-disk
              persistentVolumeClaim:
                claimName: disk-pvc
  2. Create the StatefulSet.

    kubectl create -f disk-test.yaml
  3. Confirm the pod is running.

    kubectl get pod -l app=nginx

    Expected output:

    NAME          READY   STATUS    RESTARTS   AGE
    disk-test-0   1/1     Running   0          14s
  4. Verify the disk is mounted at /data.

    kubectl exec disk-test-0 -- df -h /data

    Expected output:

    Filesystem      Size  Used Avail Use% Mounted on
    /dev/vdb         20G   24K   20G   1% /data

Mount a statically provisioned disk volume in the console

Step 1: Create a PV

  1. Log on to the ACK console. In the left navigation pane, click Clusters.

  2. On the Clusters page, click the cluster name. In the left navigation pane, choose Volumes > Persistent Volumes.

  3. On the Persistent Volumes page, click Create.

  4. Set the parameters and click Create. After creation, the PV appears on the Persistent Volumes page.

    • PV type (required): Select Cloud Disk.

    • Access mode (required): Only ReadWriteOnce is supported.

    • Disk ID (required): Click Select Disk and select the disk to mount. The disk must be in the same region and zone as the node. Example: d-uf628m33r5rsbi******.

    • File system type (optional): File system used to format the disk. Supported: ext4 (default), ext3, xfs, vfat.

Step 2: Create a PVC

  1. In the left navigation pane, choose Volumes > Persistent Volume Claims.

  2. On the Persistent Volume Claims page, click Create.

  3. Set the parameters and click OK. After creation, the PVC appears on the Persistent Volume Claims page with Bound status.

    • PVC type (required): Select Cloud Disk.

    • Name (required): Name for the PVC. Follow the format requirements shown in the console. Example: disk-pvc.

    • Allocation mode (required): Select Existing Volumes.

    • Existing volumes (required): Select the PV created in step 1. Example: d-uf690053kttkprgx****, 20Gi.

    • Capacity (required): Storage to allocate. Cannot exceed the disk's capacity. Example: 20Gi.

Step 3: Deploy a StatefulSet and mount the disk

  1. In the left navigation pane, choose Workloads > StatefulSets.

  2. In the upper-right corner, click Create from Image.

  3. Set the key parameters listed below, then click Create from Image. For all other parameters, see Create a StatefulSet.

    • Basic information > Name (required): Name for the StatefulSet. Follow the format requirements shown in the console. Example: disk-test.

    • Basic information > Replicas (required): Number of pods. Set to 1 for disk volumes without multi-attach. Example: 1.

    • Container > Image name (required): Container image address. Example: anolis-registry.cn-zhangjiakou.cr.aliyuncs.com/openanolis/nginx:1.14.1-8.6.

    • Container > Required resources (optional): vCPU, memory, and ephemeral storage for the container. Example: CPU: 0.25 Core; Memory: 512 MiB.

    • Volume > Add PVC (required): Click Add PVC, then set Mount Source to the PVC from step 2 and set Container Path to the mount directory. Example: Mount Source: disk-pvc; Container Path: /data.

  4. After deployment, click the StatefulSet name and go to the Pods tab to confirm the pod is in the Running state.

Verify data persistence

When a pod in the StatefulSet is deleted, Kubernetes creates a replacement pod and remounts the original disk. The data on the disk is preserved.

  1. Check the contents of the mounted directory.

    kubectl exec disk-test-0 -- ls /data

    Expected output:

    lost+found
  2. Write a test file to the disk.

    kubectl exec disk-test-0 -- touch /data/test
  3. Delete the pod. The StatefulSet controller automatically creates a replacement.

    kubectl delete pod disk-test-0
  4. Wait for the replacement pod to start, then check its status.

    kubectl get pod -l app=nginx

    Expected output — the new pod has the same name because StatefulSets preserve pod identity:

    NAME          READY   STATUS    RESTARTS   AGE
    disk-test-0   1/1     Running   0          27s
  5. Confirm the data survived the pod deletion.

    kubectl exec disk-test-0 -- ls /data

    Expected output — the test file is still there:

    lost+found
    test

What's next