Mount an existing cloud disk to a pod as a PersistentVolume (PV) for high-throughput, low-latency persistent storage in your ACK cluster.
With static provisioning, you create the PV and PersistentVolumeClaim (PVC) manually before deploying your workload. This gives you full control over which disk is used and keeps the disk independent of the pod lifecycle.
Use cases
Static disk volumes are a good fit when:

- Your application requires high disk I/O, for example when running MySQL or Redis.
- You need to write logs at high speed.
- You have an existing disk that you want to reuse across pod restarts.
For more information about block storage volumes, see Block storage volumes.
Prerequisites
Before you begin, ensure that you have:

- The Container Storage Interface (CSI) plugin installed in your cluster. Check the status of the `csi-plugin` and `csi-provisioner` components on the Add-ons page under the Storage tab. To upgrade the CSI plugin to use specific features, see Upgrade the CSI plugin. If your cluster still uses FlexVolume, migrate to the CSI plugin first, because FlexVolume is deprecated.
- An existing cloud disk that meets these requirements:
  - Billing method: pay-as-you-go
  - Status: Available
  - Same zone as the ECS node where the pod will run
  - Disk type compatible with the ECS instance type; see Instance families for compatibility details

Note: Disks cannot be mounted across zones. If the disk type is incompatible with the node's instance type, the mount fails.
Usage notes
- One disk, one pod at a time. Disks are non-shared storage. Unless multi-attach is enabled, a single disk can be mounted to only one pod at a time. See Use the multi-attach and reservation features of NVMe disks.
- Zone constraint. A disk can be mounted only to a pod in the same zone. Cross-zone mounting is not supported.
- Pod rebuild behavior. When a pod is rebuilt, the original disk is remounted. If the pod cannot be scheduled to the original zone due to other constraints, the pod stays in the Pending state.
- Use StatefulSets or individual pods, not Deployments. When multi-attach is disabled, a disk can be mounted to only one pod. If you must mount a disk to a Deployment, set the number of replicas to 1. Even then, you cannot control the order of mounting and unmounting, and the rolling update strategy may prevent the new pod from mounting the disk during restarts. Therefore, mounting disks to Deployments is not recommended.
- `securityContext.fsGroup` increases mount time. If you configure `securityContext.fsGroup`, kubelet runs `chmod` and `chown` after mounting, which adds overhead proportional to the number of files. For Kubernetes 1.20 and later, set `fsGroupChangePolicy: OnRootMismatch` to apply ownership changes only on the first container start. For finer control, use an init container to manage permissions.
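If you must use a Deployment despite the caveat above, one mitigation (a sketch, not part of the official steps; the Deployment name is illustrative) is to keep `replicas: 1` and use the `Recreate` update strategy so that the old pod terminates, and the disk detaches, before the replacement pod starts. This still does not make Deployments a recommended choice for disk volumes.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: disk-deploy        # illustrative name
spec:
  replicas: 1              # must stay 1: a disk without multi-attach mounts to one pod only
  strategy:
    type: Recreate         # terminate the old pod (detaching the disk) before starting the new one
  selector:
    matchLabels:
      app: disk-deploy
  template:
    metadata:
      labels:
        app: disk-deploy
    spec:
      containers:
        - name: app
          image: anolis-registry.cn-zhangjiakou.cr.aliyuncs.com/openanolis/nginx:1.14.1-8.6
          volumeMounts:
            - name: pvc-disk
              mountPath: /data
      volumes:
        - name: pvc-disk
          persistentVolumeClaim:
            claimName: disk-pvc
```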
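The `fsGroupChangePolicy` setting from the `securityContext.fsGroup` note above can be sketched in a pod spec as follows. The pod name and GID are illustrative, and the PVC is assumed to be the `disk-pvc` claim created in the steps below.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: fsgroup-demo       # illustrative name
spec:
  securityContext:
    fsGroup: 1000                        # kubelet applies this GID to volume files after mounting
    fsGroupChangePolicy: OnRootMismatch  # skip the recursive chown/chmod when the root already matches
  containers:
    - name: app
      image: anolis-registry.cn-zhangjiakou.cr.aliyuncs.com/openanolis/nginx:1.14.1-8.6
      volumeMounts:
        - name: pvc-disk
          mountPath: /data
  volumes:
    - name: pvc-disk
      persistentVolumeClaim:
        claimName: disk-pvc
```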
Mount a statically provisioned disk volume using kubectl
Step 1: Create a PV
Role: cluster administrator
Connect to your cluster by using kubectl, or use CloudShell or Workbench.
Create a file named `disk-pv.yaml` with the following content, replacing the placeholders with your disk details.

| Placeholder | Description | Example |
| --- | --- | --- |
| `<YOUR-DISK-ID>` | ID of your existing disk | d-uf628m33r5rsbi****** |
| `<YOUR-DISK-SIZE>` | Disk capacity | 20Gi |
| `<YOUR-DISK-ZONE-ID>` | Zone where the disk is located | cn-shanghai-f |
| `<YOUR-DISK-CATEGORY>` | Disk type value (see table below) | cloud_essd |

Disk type values for `<YOUR-DISK-CATEGORY>`:

| Disk type | Value |
| --- | --- |
| ESSD Entry disk | cloud_essd_entry |
| ESSD AutoPL disk | cloud_auto |
| ESSD | cloud_essd |
| Standard SSD | cloud_ssd |
| Ultra disk | cloud_efficiency |
| Zone-redundant disk | cloud_regional_disk_auto (requires different `nodeAffinity`; see the parameter table below) |

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: "<YOUR-DISK-ID>"
  annotations:
    csi.alibabacloud.com/volume-topology: '{"nodeSelectorTerms":[{"matchExpressions":[{"key":"node.csi.alibabacloud.com/disktype.<YOUR-DISK-CATEGORY>","operator":"In","values":["available"]}]}]}'
spec:
  capacity:
    storage: "<YOUR-DISK-SIZE>"
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    namespace: default
    name: disk-pvc
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: diskplugin.csi.alibabacloud.com
    volumeHandle: "<YOUR-DISK-ID>"
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: topology.diskplugin.csi.alibabacloud.com/zone
              operator: In
              values:
                - "<YOUR-DISK-ZONE-ID>"
  storageClassName: alicloud-disk-topology-alltype
  volumeMode: Filesystem
```

Key parameters:
| Parameter | Required | Default | Description |
| --- | --- | --- | --- |
| `csi.alibabacloud.com/volume-topology` | Recommended | None | Constrains pod scheduling to nodes that support the specified disk type. Set `<YOUR-DISK-CATEGORY>` to avoid scheduling failures caused by incompatibility between the disk type and the instance type. |
| `claimRef` | Optional | None | Binds this PV to a specific PVC. Remove this block to allow any compatible PVC to bind to this PV. |
| `accessModes` | Required | — | Must be `ReadWriteOnce`: the volume can be mounted as read-write by a single pod. |
| `persistentVolumeReclaimPolicy` | Required | Retain | `Retain`: the PV and disk are kept when the PVC is deleted; you must clean up manually. `Delete`: the PV and disk are deleted when the PVC is deleted. |
| `driver` | Required | — | Always `diskplugin.csi.alibabacloud.com` for Alibaba Cloud disk volumes. |
| `nodeAffinity` | Required | — | Restricts pod scheduling to the same zone as the disk. For zone-redundant disks, replace this block with a region-level affinity so the disk can be mounted in any zone within the region: `topology.kubernetes.io/region: <YOUR-DISK-REGION-ID>` (for example, cn-shanghai). |
| `storageClassName` | Required | — | Must match the `storageClassName` in the PVC exactly. The value `alicloud-disk-topology-alltype` is used here; you do not need to create this StorageClass in advance for static provisioning. |

Important: The `storageClassName` must be identical in both the PV and PVC. A mismatch causes the PVC to remain unbound or bind to a different PV. If your cluster has a default StorageClass, set `storageClassName` explicitly on both the PV and PVC to prevent the PVC from binding to a dynamically provisioned volume instead of the intended PV.

Create the PV.
```shell
kubectl create -f disk-pv.yaml
```

Verify that the PV is in the `Available` state.

```shell
kubectl get pv
```

Expected output:

```
NAME                     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM              STORAGECLASS                     VOLUMEATTRIBUTESCLASS   REASON   AGE
d-uf628m33r5rsbi******   20Gi       RWO            Retain           Available   default/disk-pvc   alicloud-disk-topology-alltype   <unset>                          1m36s
```
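As the `nodeAffinity` row in the parameter table above notes, a zone-redundant disk (`cloud_regional_disk_auto`) uses region-level rather than zone-level affinity. A minimal sketch of the replacement block, assuming the standard `topology.kubernetes.io/region` label is present on your nodes:

```yaml
# Replacement for spec.nodeAffinity in disk-pv.yaml when the disk is
# zone-redundant (cloud_regional_disk_auto); the disk can then be
# mounted in any zone of the region.
nodeAffinity:
  required:
    nodeSelectorTerms:
      - matchExpressions:
          - key: topology.kubernetes.io/region
            operator: In
            values:
              - "<YOUR-DISK-REGION-ID>"   # for example, cn-shanghai
```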
Step 2: Create a PVC
Role: developer
Create a file named `disk-pvc.yaml` with the following content.

| Parameter | Required | Default | Description |
| --- | --- | --- | --- |
| `accessModes` | Required | — | Must match the PV's `accessModes`. Only `ReadWriteOnce` is supported. |
| `storage` | Required | — | Storage capacity to request. Cannot exceed the disk's actual capacity. |
| `storageClassName` | Required | — | Must match the PV's `storageClassName` exactly: `alicloud-disk-topology-alltype` in this example. |
| `volumeName` | Recommended | None | Binds this PVC to a specific PV by name. Remove this field to allow the PVC to bind to any compatible PV. |

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: disk-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: "<YOUR-DISK-SIZE>"
  storageClassName: alicloud-disk-topology-alltype
  volumeName: "<YOUR-DISK-ID>"
```

Create the PVC.
```shell
kubectl create -f disk-pvc.yaml
```

Verify that the PVC is bound to the PV.

```shell
kubectl get pvc
```

Expected output showing `Bound` status:

```
NAME       STATUS   VOLUME                   CAPACITY   ACCESS MODES   STORAGECLASS                     VOLUMEATTRIBUTESCLASS   AGE
disk-pvc   Bound    d-uf628m33r5rsbi******   20Gi       RWO            alicloud-disk-topology-alltype   <unset>                 64s
```
Step 3: Deploy a StatefulSet and mount the disk
Role: developer
Create a file named `disk-test.yaml` with the following content. This creates a StatefulSet with one pod that mounts the `disk-pvc` PVC at `/data`.

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: disk-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: anolis-registry.cn-zhangjiakou.cr.aliyuncs.com/openanolis/nginx:1.14.1-8.6
          ports:
            - containerPort: 80
          volumeMounts:
            - name: pvc-disk
              mountPath: /data
      volumes:
        - name: pvc-disk
          persistentVolumeClaim:
            claimName: disk-pvc
```

Create the StatefulSet.

```shell
kubectl create -f disk-test.yaml
```

Confirm the pod is running.

```shell
kubectl get pod -l app=nginx
```

Expected output:

```
NAME          READY   STATUS    RESTARTS   AGE
disk-test-0   1/1     Running   0          14s
```

Verify the disk is mounted at `/data`.

```shell
kubectl exec disk-test-0 -- df -h /data
```

Expected output:

```
Filesystem      Size  Used Avail Use% Mounted on
/dev/vdb         20G   24K   20G   1% /data
```
Mount a statically provisioned disk volume in the console
Step 1: Create a PV
Log on to the ACK console. In the left navigation pane, click Clusters.
On the Clusters page, click the cluster name. In the left navigation pane, choose Volumes > Persistent Volumes.
On the Persistent Volumes page, click Create.
Set the parameters and click Create. After creation, the PV appears on the Persistent Volumes page.
| Parameter | Required | Description | Example |
| --- | --- | --- | --- |
| PV type | Required | Select Cloud Disk. | Cloud Disk |
| Access mode | Required | Only ReadWriteOnce is supported. | ReadWriteOnce |
| Disk ID | Required | Click Select Disk and select the disk to mount. The disk must be in the same region and zone as the node. | d-uf628m33r5rsbi****** |
| File system type | Optional | File system used to format the disk. Supported: ext4, ext3, xfs, vfat. | ext4 (default) |
Step 2: Create a PVC
In the left navigation pane, choose Volumes > Persistent Volume Claims.
On the Persistent Volume Claims page, click Create.
Set the parameters and click OK. After creation, the PVC appears on the Persistent Volume Claims page with Bound status.
| Parameter | Required | Description | Example |
| --- | --- | --- | --- |
| PVC type | Required | Select Cloud Disk. | Cloud Disk |
| Name | Required | Name for the PVC. Follow the format requirements shown in the console. | disk-pvc |
| Allocation mode | Required | Select Existing Volumes. | Existing Volumes |
| Existing volumes | Required | Select the PV created in step 1. | d-uf690053kttkprgx****, 20Gi |
| Capacity | Required | Storage to allocate. Cannot exceed the disk's capacity. | 20Gi |
Step 3: Deploy a StatefulSet and mount the disk
In the left navigation pane, choose Workloads > StatefulSets.
In the upper-right corner, click Create from Image.
Set the key parameters listed below, then click Create from Image. For all other parameters, see Create a StatefulSet.
| Configuration page | Parameter | Required | Description | Example |
| --- | --- | --- | --- | --- |
| Basic information | Name | Required | Name for the StatefulSet. Follow the format requirements shown in the console. | disk-test |
| Basic information | Replicas | Required | Number of pods. Set to 1 for disk volumes without multi-attach. | 1 |
| Container | Image name | Required | Container image address. | anolis-registry.cn-zhangjiakou.cr.aliyuncs.com/openanolis/nginx:1.14.1-8.6 |
| Container | Required resources | Optional | vCPU, memory, and ephemeral storage for the container. | CPU: 0.25 Core; Memory: 512 MiB |
| Volume | Add PVC | Required | Click Add PVC, then set Mount Source to the PVC from step 2 and set Container Path to the mount directory. | Mount Source: disk-pvc; Container Path: /data |

After deployment, click the StatefulSet name and go to the Pods tab to confirm the pod is in the Running state.
Verify data persistence
When a pod in the StatefulSet is deleted, Kubernetes creates a replacement pod and remounts the original disk. The data on the disk is preserved.
Check the contents of the mounted directory.

```shell
kubectl exec disk-test-0 -- ls /data
```

Expected output:

```
lost+found
```

Write a test file to the disk.

```shell
kubectl exec disk-test-0 -- touch /data/test
```

Delete the pod. The StatefulSet controller automatically creates a replacement.

```shell
kubectl delete pod disk-test-0
```

Wait for the replacement pod to start, then check its status.

```shell
kubectl get pod -l app=nginx
```

Expected output (the new pod has the same name because StatefulSets preserve pod identity):

```
NAME          READY   STATUS    RESTARTS   AGE
disk-test-0   1/1     Running   0          27s
```

Confirm the data survived the pod deletion.

```shell
kubectl exec disk-test-0 -- ls /data
```

Expected output showing the `test` file is still there:

```
lost+found
test
```
What's next
If you run into issues with disk volumes, see FAQ about disk volumes.
If the disk is full or undersized, see Expand disk persistent volumes.
To monitor disk usage, see Overview of container storage monitoring.