
Container Service for Kubernetes:Use disks as ephemeral volumes

Last Updated: Mar 26, 2026

If your application needs temporary disk storage—scratch space, caching, or high-throughput log output—you can mount an ephemeral volume to each Pod. Unlike persistent volumes (PVs), ephemeral volumes are tied to their Pod's lifecycle: they are created automatically when the Pod starts and deleted when the Pod is removed. This eliminates manual cleanup and simplifies deployment for stateless workloads.

This topic shows how to mount a cloud disk as an ephemeral volume using ephemeral.volumeClaimTemplate, and how to verify that the PV and persistent volume claim (PVC) are automatically cleaned up when a Pod is deleted.

When to use ephemeral volumes

Ephemeral volumes are a good fit when your application needs temporary storage but doesn't require data to persist between restarts:

  • Scratch space and intermediate data: store temporary results from data processing pipelines without manual cleanup.

  • High-throughput log output: use dedicated, non-shared disk storage per Pod to avoid I/O contention.

  • Caching: keep short-lived cache data close to the application without the overhead of managing persistent storage.

Ephemeral volumes backed by volumeClaimTemplate differ from emptyDir in two key ways: they support network-attached storage (cloud disks), and you can specify a fixed capacity that the Pod cannot exceed.

How it works

When you define an ephemeral.volumeClaimTemplate in a Deployment, Pod, or StatefulSet, Kubernetes creates a PVC in the same namespace as the Pod. The Pod owns that PVC. When the Pod is deleted—whether by scaling in, a rolling update, or manual deletion—the Kubernetes garbage collector deletes the PVC, which in turn triggers deletion of the underlying PV and cloud disk.

PVC naming: Kubernetes names each PVC automatically using the pattern <pod-name>-<volume-name>. For example, a Pod named ephemeral-example-7f795798f9-kbplx with a volume named scratch-volume produces a PVC named ephemeral-example-7f795798f9-kbplx-scratch-volume. Because naming is deterministic, you can locate a PVC by combining the Pod name and volume name without searching.
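This deterministic derivation can be sketched in shell, using the example Pod and volume names from this topic:

```shell
# Example names taken from this topic.
pod_name="ephemeral-example-7f795798f9-kbplx"
volume_name="scratch-volume"

# Kubernetes concatenates them as <pod-name>-<volume-name>:
pvc_name="${pod_name}-${volume_name}"
echo "${pvc_name}"
# prints: ephemeral-example-7f795798f9-kbplx-scratch-volume
```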

Two Pods can produce the same PVC name if their names and volume names overlap (for example, Pod pod-a with volume scratch and Pod pod with volume a-scratch both yield PVC pod-a-scratch). Avoid naming conflicts when you customize Pod and volume names.
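The collision described above can be checked directly. The following minimal sketch rebuilds the naming rule as a shell function and shows that both name pairs collapse to the same PVC name:

```shell
# Build a PVC name the way Kubernetes does: <pod-name>-<volume-name>.
pvc_name() {
  printf '%s-%s\n' "$1" "$2"
}

# Two different Pod/volume pairs that yield the same PVC name:
pvc_name "pod-a" "scratch"     # pod-a-scratch
pvc_name "pod"   "a-scratch"   # pod-a-scratch
```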

Prerequisites

Before you begin, ensure that you have:

  • A Container Service for Kubernetes (ACK) cluster running version 1.22 or later

Create a Deployment and mount an ephemeral volume

A volumeClaimTemplate defines the configuration for PVCs. When a Deployment runs two replicas, Kubernetes creates two PVCs from the same template, one per Pod. The PVCs share the same configuration but have distinct names.

ephemeral.volumeClaimTemplate works with Deployments, StatefulSets, and Pods. This topic uses a Deployment as an example.
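For reference, the same template can be attached to a standalone Pod. The following is a minimal sketch that reuses the StorageClass, image, and volume name from the Deployment example in this topic; the Pod name is hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ephemeral-pod-example   # hypothetical name for illustration
spec:
  containers:
    - name: nginx
      image: anolis-registry.cn-zhangjiakou.cr.aliyuncs.com/openanolis/nginx:1.14.1-8.6
      volumeMounts:
        - mountPath: "/scratch"
          name: scratch-volume
  volumes:
    - name: scratch-volume
      ephemeral:
        volumeClaimTemplate:
          spec:
            accessModes: [ "ReadWriteOncePod" ]
            storageClassName: alicloud-disk-topology-alltype
            resources:
              requests:
                storage: 30Gi
```

Following the naming pattern, this Pod's PVC would be named ephemeral-pod-example-scratch-volume.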
  1. Save the following content as ephemeral-example.yaml. Adjust the parameters in volumeClaimTemplate based on the following descriptions:

    • accessModes: The access mode for the PV. Set to ReadWriteOncePod (for clusters running version 1.29 or later) to ensure a disk is used by only one Pod at a time. For earlier cluster versions, use ReadWriteOnce.

    • storageClassName: The StorageClass to use. This example uses alicloud-disk-topology-alltype, an ACK default StorageClass that provisions disks in the following order: ESSD (enterprise SSD), standard SSD, then ultra disk. Disks are billed on a pay-as-you-go basis. For pricing details, see Elastic Block Storage billing and Elastic Block Storage pricing.

    • storage: The capacity of the ephemeral volume. The alicloud-disk-topology-alltype StorageClass provisions a PL1 ESSD by default. The minimum capacity is 20 GiB.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: ephemeral-example
    spec:
      replicas: 2
      selector:
        matchLabels:
          pod: example-pod
      strategy:
        type: Recreate
      template:
        metadata:
          labels:
            pod: example-pod
        spec:
          containers:
            - name: nginx
              image: anolis-registry.cn-zhangjiakou.cr.aliyuncs.com/openanolis/nginx:1.14.1-8.6
              resources:
                requests:
                  cpu: 500m
                  memory: 2Gi
                  ephemeral-storage: 2Gi
              volumeMounts:
              - mountPath: "/scratch"
                name: scratch-volume
          volumes:
            - name: scratch-volume
              ephemeral:      # Declare this volume as an ephemeral volume; its PVC is created from the template below
                volumeClaimTemplate:
                  spec:
                    accessModes: [ "ReadWriteOncePod" ]
                    storageClassName: alicloud-disk-topology-alltype
                    resources:
                      requests:
                        storage: 30Gi
  2. Create the Deployment.

    kubectl create -f ephemeral-example.yaml
  3. Verify that both Pods are running.

    kubectl get pod -l pod=example-pod

    Expected output:

    NAME                                 READY   STATUS    RESTARTS   AGE
    ephemeral-example-7f795798f9-kbplx   1/1     Running   0          38s
    ephemeral-example-7f795798f9-p98lt   1/1     Running   0          38s
  4. Verify that the PVCs and their bound cloud disks were created automatically.

    kubectl get pvc

    Expected output (the disk ID appears in the VOLUME field):

    NAME                                                STATUS   VOLUME                   CAPACITY   ACCESS MODES   STORAGECLASS                     VOLUMEATTRIBUTESCLASS   AGE
    ephemeral-example-7f795798f9-kbplx-scratch-volume   Bound    d-uf61678cuo33eunn****   30Gi       RWOP           alicloud-disk-topology-alltype   <unset>                 74s
    ephemeral-example-7f795798f9-p98lt-scratch-volume   Bound    d-uf6dwkdcowyf2fj6****   30Gi       RWOP           alicloud-disk-topology-alltype   <unset>                 74s

Verify that PVs and PVCs are deleted when Pods are removed

When a Pod is deleted, Kubernetes garbage-collects its PVC, which triggers deletion of the bound PV and cloud disk. The following steps demonstrate this with a scale-in.

  1. Scale the Deployment down to one replica.

    kubectl scale deploy ephemeral-example --replicas=1
  2. Confirm that only one Pod remains.

    kubectl get pod -l pod=example-pod

    Expected output:

    NAME                                 READY   STATUS    RESTARTS   AGE
    ephemeral-example-7f795798f9-kbplx   1/1     Running   0          5m29s
  3. Confirm that the PV for the deleted Pod was removed.

    kubectl get pv

    Expected output (only one PV remains, bound to the surviving Pod):

    NAME                     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                                       STORAGECLASS                     VOLUMEATTRIBUTESCLASS   REASON   AGE
    d-uf61678cuo33eunn****   30Gi       RWOP           Delete           Bound    default/ephemeral-example-7f795798f9-kbplx-scratch-volume   alicloud-disk-topology-alltype   <unset>                          5m52s
  4. Confirm that the PVC for the deleted Pod was removed.

    kubectl get pvc

    Expected output (only one PVC remains):

    NAME                                                STATUS   VOLUME                   CAPACITY   ACCESS MODES   STORAGECLASS                     VOLUMEATTRIBUTESCLASS   AGE
    ephemeral-example-7f795798f9-kbplx-scratch-volume   Bound    d-uf61678cuo33eunn****   30Gi       RWOP           alicloud-disk-topology-alltype   <unset>                 7m11s

What's next