For stateful applications such as databases and message queues, a Kubernetes StatefulSet can use the volumeClaimTemplates field to dynamically create and attach a dedicated Persistent Volume Claim (PVC) to each pod. This PVC then binds to an independent Persistent Volume (PV). When a pod is recreated or rescheduled, the PVC automatically remounts its original PV to ensure data persistence and service continuity.
The following is a sample volumeClaimTemplates configuration:
apiVersion: apps/v1
kind: StatefulSet
# ...
spec:
  # ...
  volumeClaimTemplates:
  - metadata:
      name: data-volume # Name of the PVC template
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "alicloud-disk-essd" # Specify the storage type
      resources:
        requests:
          storage: 20Gi # The requested storage capacity

How it works
Creation and scale-out
During initial creation or scale-out, the StatefulSet controller uses the volumeClaimTemplates to create and bind a uniquely named PVC for each pod replica. The PVCs are named following the pattern [template-name]-[pod-name]. For example, if the template is named data-volume, the controller creates the PVCs data-volume-web-0 and data-volume-web-1 for the pods web-0 and web-1 respectively, creating a stable mapping between a pod and its storage. Based on the parameters in the template (such as storageClassName, storage, and accessModes), the Container Storage Interface (CSI) driver then automatically creates a matching PV with the correct type, size, and access mode, and then binds and mounts it.
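For illustration, the PVC that the controller creates for pod web-0 from the template above is roughly equivalent to the following object (a sketch; controller-managed labels and ownership metadata are omitted):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-volume-web-0   # follows the [template-name]-[pod-name] pattern
spec:
  accessModes: [ "ReadWriteOnce" ]
  storageClassName: "alicloud-disk-essd"
  resources:
    requests:
      storage: 20Gi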
Scale-in
When a StatefulSet is scaled in, the controller deletes only the pod itself. The associated PVC and underlying PV are retained to protect the data.
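This retain-by-default behavior can also be controlled explicitly in recent Kubernetes versions through the StatefulSet field spec.persistentVolumeClaimRetentionPolicy. The following snippet is only a sketch of the defaults and assumes your cluster version supports this field:

spec:
  persistentVolumeClaimRetentionPolicy:
    whenScaled: Retain    # keep PVCs when replicas are scaled in (default)
    whenDeleted: Retain   # keep PVCs when the StatefulSet itself is deleted (default)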
Rescaling and fault recovery
When you scale out again (increase the replica count) or during fault recovery (a pod is deleted and then recreated), the controller first looks for a previously retained PVC with the expected name.
If the PVC exists, the new pod with the same name will automatically mount the existing PV, allowing for the rapid recovery of its state and data.
If the PVC does not exist, for example, if the scale-out operation exceeds the historical peak replica count, a new PVC and a corresponding PV will be created.
Step 1: Deploy a StatefulSet with persistent storage
This example deploys a Service and a StatefulSet with two replicas. The StatefulSet uses volumeClaimTemplates to automatically create a 20 GiB cloud disk for each replica.
Create a file named statefulset.yaml (an example manifest is shown below).
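The full manifest is not reproduced here, so the following is a minimal sketch that is consistent with the rest of this walkthrough: a headless Service, a StatefulSet named web with the app=nginx label, a PVC template named disk-essd, and a 20 GiB ESSD mounted at /data. The Service definition and the nginx image tag are assumptions; adjust them as needed.

apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None          # headless Service required by the StatefulSet
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx"
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25  # assumed image; use any image available in your cluster
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: disk-essd
          mountPath: /data # the disk is mounted at /data, as used in Step 3
  volumeClaimTemplates:
  - metadata:
      name: disk-essd      # PVCs will be named disk-essd-web-0, disk-essd-web-1, ...
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "alicloud-disk-essd"
      resources:
        requests:
          storage: 20Gi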
The following table describes the parameters in volumeClaimTemplates:

accessModes
  The access mode of the volume. ReadWriteOnce means the volume can be mounted as read-write by a single node at a time.
storageClassName
  The name of the StorageClass to use. alicloud-disk-essd is a default StorageClass provided by Container Service for Kubernetes (ACK) for creating Enterprise SSDs (ESSDs) with a default performance level (PL) of PL1. These disks use pay-as-you-go billing. For more information, see Billing of block storage and Prices of block storage.
storage
  The requested capacity of the disk volume.
Deploy the StatefulSet.
kubectl create -f statefulset.yaml
Verify that the pods are running.
kubectl get pod -l app=nginx
View the PVCs to confirm that the system has automatically created and bound a corresponding PVC for each pod.
kubectl get pvc
Expected output:
NAME              STATUS   VOLUME                   CAPACITY   ACCESS MODES   STORAGECLASS         VOLUMEATTRIBUTESCLASS   AGE
disk-essd-web-0   Bound    d-m5eb5ozeseslnz7zq54b   20Gi       RWO            alicloud-disk-essd   <unset>                 3m31s
disk-essd-web-1   Bound    d-m5ecrvjrhqwehgzqpk5i   20Gi       RWO            alicloud-disk-essd   <unset>                 48s
Step 2: Validate the storage lifecycle
Observe the creation, retention, and reuse of associated PVCs by scaling out, scaling in, then scaling out again.
Scale out the application
Increase the number of StatefulSet replicas to 3.
kubectl scale sts web --replicas=3
Verify that the pods are running.
kubectl get pod -l app=nginx
View the PVCs to confirm that the system has automatically created the pod web-2 and its corresponding PVC disk-essd-web-2.
kubectl get pvc
Expected output:
NAME              STATUS   VOLUME                   CAPACITY   ACCESS MODES   STORAGECLASS         VOLUMEATTRIBUTESCLASS   AGE
disk-essd-web-0   Bound    d-m5eb5ozeseslnz7zq54b   20Gi       RWO            alicloud-disk-essd   <unset>                 4m1s
disk-essd-web-1   Bound    d-m5ecrvjrhqwehgzqpk5i   20Gi       RWO            alicloud-disk-essd   <unset>                 78s
disk-essd-web-2   Bound    d-m5ee2cvzx4dog1lounjn   20Gi       RWO            alicloud-disk-essd   <unset>                 16s
Scale in the application
Decrease the number of StatefulSet replicas to 2.
kubectl scale sts web --replicas=2
Verify that the pods are running.
kubectl get pod -l app=nginx
View the PVCs.
kubectl get pvc
Expected output:
NAME              STATUS   VOLUME                   CAPACITY   ACCESS MODES   STORAGECLASS         VOLUMEATTRIBUTESCLASS   AGE
disk-essd-web-0   Bound    d-m5eb5ozeseslnz7zq54b   20Gi       RWO            alicloud-disk-essd   <unset>                 4m21s
disk-essd-web-1   Bound    d-m5ecrvjrhqwehgzqpk5i   20Gi       RWO            alicloud-disk-essd   <unset>                 98s
disk-essd-web-2   Bound    d-m5ee2cvzx4dog1lounjn   20Gi       RWO            alicloud-disk-essd   <unset>                 36s
At this point, the pod web-2 has been deleted, but the PVC disk-essd-web-2 still exists to ensure data persistence.
Scale out the application again
Increase the number of StatefulSet replicas back to 3.
kubectl scale sts web --replicas=3
Verify that the pods are running.
kubectl get pod -l app=nginx
View the PVCs.
kubectl get pvc
Expected output:
NAME              STATUS   VOLUME                   CAPACITY   ACCESS MODES   STORAGECLASS         VOLUMEATTRIBUTESCLASS   AGE
disk-essd-web-0   Bound    d-m5eb5ozeseslnz7zq54b   20Gi       RWO            alicloud-disk-essd   <unset>                 4m50s
disk-essd-web-1   Bound    d-m5ecrvjrhqwehgzqpk5i   20Gi       RWO            alicloud-disk-essd   <unset>                 2m7s
disk-essd-web-2   Bound    d-m5ee2cvzx4dog1lounjn   20Gi       RWO            alicloud-disk-essd   <unset>                 65s
The newly created pod web-2 automatically binds to and uses the previously retained PVC disk-essd-web-2.
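If you want to double-check that the recreated replica reuses the same underlying disk, one way (using the PVC name from this example) is to read the volume name recorded in the PVC and compare it with the VOLUME column shown before the scale-in:

kubectl get pvc disk-essd-web-2 -o jsonpath='{.spec.volumeName}'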
Step 3: Validate data persistence after a pod failure
Verify that data stored on the disk persists after a pod is recreated by writing data, deleting the pod, then checking for the data.
Write test data to the pod.
Using pod web-1 as an example, create a test file in the mounted disk path /data.
kubectl exec web-1 -- touch /data/test
kubectl exec web-1 -- ls /data
Expected output:
lost+found test
Simulate a pod failure by deleting the pod.
kubectl delete pod web-1
Run kubectl get pod -l app=nginx again. You will see that a new pod named web-1 is automatically created.
Verify the data in the new pod.
Check the /data directory in the new web-1 pod.
kubectl exec web-1 -- ls /data
Expected output:
lost+found test
The test file you created still exists. This confirms that data persists even if the pod is deleted and recreated.
Application in production
Cost and resource management: When you scale in or delete a StatefulSet, the associated PVCs and disks are retained by default. These retained resources continue to incur fees. Be sure to manually clean up any unused PVCs and PVs to avoid unnecessary charges.
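For example, if you have permanently scaled in and no longer need the data of the removed replica, you can delete its retained PVC. The PVC name below follows this walkthrough's naming; whether the underlying cloud disk is also released depends on the reclaimPolicy of the StorageClass:

kubectl delete pvc disk-essd-web-2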
Data security and backup: Persistent storage ensures high availability during pod failures, but it is not a data backup solution. For critical data, use the backup center to perform regular backups.
High availability and disaster recovery: Disks are zonal resources and cannot be mounted across zones. For cross-zone disaster recovery, use a disk type that supports cross-zone data replication, such as regional ESSDs.
References
See Disk volume FAQ for troubleshooting.