Container Service for Kubernetes: Configure cloud disk persistent storage for StatefulSets

Last Updated: Sep 26, 2025

For stateful applications such as databases and message queues, a Kubernetes StatefulSet can use the volumeClaimTemplates field to dynamically create a dedicated Persistent Volume Claim (PVC) for each pod. Each PVC binds to an independent Persistent Volume (PV). When a pod is recreated or rescheduled, it reattaches to its original PV through the same PVC, which ensures data persistence and service continuity.

The following is a sample volumeClaimTemplates configuration:

apiVersion: apps/v1
kind: StatefulSet
# ...
spec:
  # ...
  volumeClaimTemplates:
  - metadata:
      name: data-volume                        # Name of the PVC template
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "alicloud-disk-essd"   # Specify the storage type
      resources:
        requests:
          storage: 20Gi                        # The requested storage capacity

How it works

  • Creation and scale-out

    During initial creation or scale-out, the StatefulSet controller uses the volumeClaimTemplates to create and bind a uniquely named PVC for each pod replica. The PVCs are named following the pattern [template-name]-[pod-name]. For example, if the template is named data-volume, the controller will create PVCs named data-volume-web-0 and data-volume-web-1 for the pods web-0 and web-1 respectively, creating a stable mapping between a pod and its storage.

    Based on the parameters in the template (such as storageClassName, storage, and accessModes), the Container Storage Interface (CSI) driver then dynamically provisions a matching PV with the correct type, size, and access mode, and binds and mounts it. A sketch for inspecting the resulting pod-to-storage mapping follows this list.

  • Scale-in

    When a StatefulSet is scaled in, the controller only deletes the pod itself. The associated PVC and underlying PV are retained to protect the data.

  • Rescaling and fault recovery

    When you scale out again (increase the replica count) or when a pod is deleted and recreated during fault recovery, the controller automatically finds and reuses the previously retained PVC with the matching name.

    • If the PVC exists, the new pod with the same name will automatically mount the existing PV, allowing for the rapid recovery of its state and data.

    • If the PVC does not exist, for example, if the scale-out operation exceeds the historical peak replica count, a new PVC and a corresponding PV will be created.
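
You can observe this pod-to-storage mapping on a live cluster. The following commands are a minimal sketch that assumes the example StatefulSet deployed in Step 1 (selector label app: nginx, PVC disk-essd-web-0); the StatefulSet controller copies the selector labels onto each PVC, so the same label filter applies:

# List the PVCs that the controller created from the template
kubectl get pvc -l app=nginx
# Print the name of the PV that a specific PVC is bound to
kubectl get pvc disk-essd-web-0 -o jsonpath='{.spec.volumeName}{"\n"}'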

Step 1: Deploy a StatefulSet with persistent storage

This example deploys a headless Service and a StatefulSet with two replicas. The StatefulSet uses volumeClaimTemplates to automatically create a 20 GiB cloud disk for each replica.

  1. Create a file named statefulset.yaml.

    The following list describes the parameters in volumeClaimTemplates:

    • accessModes: The access mode of the volume. ReadWriteOnce means the volume can be mounted as read-write by a single node at a time.

    • storageClassName: The name of the StorageClass to use. alicloud-disk-essd is a default StorageClass provided by Container Service for Kubernetes (ACK) for creating enterprise SSDs (ESSDs) with a default performance level (PL) of PL1. These disks use pay-as-you-go billing. For more information, see Billing of block storage and Prices of block storage. A sketch for inspecting this StorageClass follows this procedure.

    • storage: The requested capacity of the disk volume.

    YAML template

    apiVersion: v1
    kind: Service
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      ports:
      - port: 80
        name: web
      # Set clusterIP to "None" to indicate a Headless Service
      clusterIP: None
      selector:
        app: nginx
    ---
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: web
    spec:
      selector:
        matchLabels:
          app: nginx
      # The serviceName must match the name of the Headless Service defined above
      serviceName: "nginx"
      replicas: 2
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: anolis-registry.cn-zhangjiakou.cr.aliyuncs.com/openanolis/nginx:1.14.1-8.6
            ports:
            - containerPort: 80
              name: web
            # Mount the PV to a specified path in the container
            volumeMounts:
            - name: disk-essd
              mountPath: /data
      # PVC template used by the StatefulSet to create a PVC for each pod
      volumeClaimTemplates:
      - metadata:
          name: disk-essd
        spec:
          # Define the volume access mode
          accessModes: [ "ReadWriteOnce" ]
          # Specify the StorageClass for dynamic PV provisioning
          storageClassName: "alicloud-disk-essd"
          resources:
            requests:
              # Define the requested storage capacity for each PVC
              storage: 20Gi
  2. Deploy the StatefulSet.

    kubectl create -f statefulset.yaml
  3. Verify that pods are running.

    kubectl get pod -l app=nginx
  4. View the PVCs to confirm that the system has automatically created and bound a corresponding PVC for each pod.

    kubectl get pvc

    Expected output:

    NAME              STATUS   VOLUME                   CAPACITY   ACCESS MODES   STORAGECLASS         VOLUMEATTRIBUTESCLASS   AGE
    disk-essd-web-0   Bound    d-m5eb5ozeseslnz7zq54b   20Gi       RWO            alicloud-disk-essd   <unset>                 3m31s
    disk-essd-web-1   Bound    d-m5ecrvjrhqwehgzqpk5i   20Gi       RWO            alicloud-disk-essd   <unset>                 48s
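
Optionally, you can inspect the StorageClass that served the template and the PVs it dynamically provisioned. This is a minimal sketch; the exact output depends on your cluster:

# Show the provisioner and parameters of the alicloud-disk-essd StorageClass
kubectl get sc alicloud-disk-essd -o yaml
# List the dynamically provisioned PVs and their bound claims
kubectl get pv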

Step 2: Validate the storage lifecycle

Observe the creation, retention, and reuse of associated PVCs by scaling out, scaling in, then scaling out again.

Scale out the application

  1. Increase the number of StatefulSet replicas to 3.

    kubectl scale sts web --replicas=3
  2. Verify the pods are running.

    kubectl get pod -l app=nginx
  3. View the PVCs to confirm that the system has automatically created the pod web-2 and its corresponding PVC disk-essd-web-2.

    kubectl get pvc

    Expected output:

    NAME              STATUS   VOLUME                   CAPACITY   ACCESS MODES   STORAGECLASS         VOLUMEATTRIBUTESCLASS   AGE
    disk-essd-web-0   Bound    d-m5eb5ozeseslnz7zq54b   20Gi       RWO            alicloud-disk-essd   <unset>                 4m1s
    disk-essd-web-1   Bound    d-m5ecrvjrhqwehgzqpk5i   20Gi       RWO            alicloud-disk-essd   <unset>                 78s
    disk-essd-web-2   Bound    d-m5ee2cvzx4dog1lounjn   20Gi       RWO            alicloud-disk-essd   <unset>                 16s

Scale in the application

  1. Decrease the number of StatefulSet replicas to 2.

    kubectl scale sts web --replicas=2
  2. Verify that the pods are running.

    kubectl get pod -l app=nginx
  3. View the PVCs.

    kubectl get pvc

    Expected output:

    NAME              STATUS   VOLUME                   CAPACITY   ACCESS MODES   STORAGECLASS         VOLUMEATTRIBUTESCLASS   AGE
    disk-essd-web-0   Bound    d-m5eb5ozeseslnz7zq54b   20Gi       RWO            alicloud-disk-essd   <unset>                 4m21s
    disk-essd-web-1   Bound    d-m5ecrvjrhqwehgzqpk5i   20Gi       RWO            alicloud-disk-essd   <unset>                 98s
    disk-essd-web-2   Bound    d-m5ee2cvzx4dog1lounjn   20Gi       RWO            alicloud-disk-essd   <unset>                 36s

    At this point, the pod web-2 has been deleted, but the PVC disk-essd-web-2 still exists to ensure data persistence.
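
    You can also confirm that no pod currently uses the retained claim. This is a minimal sketch; kubectl describe lists the consuming pods in the Used By field, which should now read <none>:

    kubectl describe pvc disk-essd-web-2 | grep "Used By"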

Scale out the application again

  1. Increase the number of StatefulSet replicas back to 3.

    kubectl scale sts web --replicas=3
  2. Verify that the pods are running.

    kubectl get pod -l app=nginx
  3. View the PVCs.

    kubectl get pvc

    Expected output:

    NAME              STATUS   VOLUME                   CAPACITY   ACCESS MODES   STORAGECLASS         VOLUMEATTRIBUTESCLASS   AGE
    disk-essd-web-0   Bound    d-m5eb5ozeseslnz7zq54b   20Gi       RWO            alicloud-disk-essd   <unset>                 4m50s
    disk-essd-web-1   Bound    d-m5ecrvjrhqwehgzqpk5i   20Gi       RWO            alicloud-disk-essd   <unset>                 2m7s
    disk-essd-web-2   Bound    d-m5ee2cvzx4dog1lounjn   20Gi       RWO            alicloud-disk-essd   <unset>                 65s

    The newly created pod web-2 has automatically bound to and is using the previously retained PVC disk-essd-web-2.
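
    To verify the reuse, print the claim that the recreated pod mounts. This minimal sketch uses the volume name disk-essd from the example template; the expected output is disk-essd-web-2:

    kubectl get pod web-2 -o jsonpath='{.spec.volumes[?(@.name=="disk-essd")].persistentVolumeClaim.claimName}{"\n"}'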

Step 3: Validate data persistence after a pod failure

Verify that data stored on the disk persists after a pod is recreated by writing data, deleting the pod, then checking for the data.

  1. Write test data to the pod.

    Using pod web-1 as an example, create a test file in the mounted disk path /data.

    kubectl exec web-1 -- touch /data/test
    kubectl exec web-1 -- ls /data

    Expected output:

    lost+found
    test
  2. Simulate a pod failure by deleting the pod.

    kubectl delete pod web-1

    Run kubectl get pod -l app=nginx again. You will see that a new pod named web-1 is automatically created.

  3. Verify the data in the new pod.

    Check the /data directory in the new web-1 pod.

    kubectl exec web-1 -- ls /data

    The test file you created still exists. This confirms that data persists even if the pod is deleted and recreated.

    Expected output:

    lost+found
    test
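
    As an additional check, confirm that the PVC is still bound to the same volume as before the pod was deleted. This is a minimal sketch; the output should match the volume ID shown earlier (d-m5ecrvjrhqwehgzqpk5i in the sample output above):

    kubectl get pvc disk-essd-web-1 -o jsonpath='{.spec.volumeName}{"\n"}'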

Application in production

  • Cost and resource management: When you scale in or delete a StatefulSet, the associated PVCs and disks are retained by default. These retained resources continue to incur fees. Be sure to manually clean up any unused PVCs and PVs to avoid unnecessary charges, as shown in the sketch after this list.

  • Data security and backup: Persistent storage ensures high availability during pod failures, but it is not a data backup solution. For critical data, use the backup center to perform regular backups.

  • High availability and disaster recovery: Disks are zonal resources and cannot be mounted across zones. For cross-zone disaster recovery, use a disk type that supports cross-zone data replication, such as regional ESSDs.
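
For example, if you scale the example StatefulSet back in and no longer need the data for web-2, you can delete its retained claim. This is a minimal sketch using the example PVC name; note that when the reclaim policy of the bound PV is Delete (the default for the alicloud-disk-essd StorageClass), deleting the PVC also releases the underlying disk and its data:

# Permanently delete a retained PVC that is no longer needed
kubectl delete pvc disk-essd-web-2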

References

See Disk volume FAQ for troubleshooting.