
Container Service for Kubernetes: Use volumes

Last Updated: Dec 24, 2025

To facilitate data or state sharing across multiple workflow steps, you can mount volumes in the cluster. Supported volume types include Object Storage Service (OSS), Apsara File Storage NAS (NAS), and Cloud Parallel File Storage (CPFS). This topic describes how to create statically provisioned volumes and provides sample workflows that use them.

Usage notes

OSS, NAS, and CPFS volumes are suitable for different scenarios. The following list describes the details:

OSS volumes

  • Data sharing

    OSS is a shared storage type. You can access data on OSS volumes from multiple pods at the same time. The data on OSS volumes is not deleted when the pod is deleted. OSS volumes can be used to share data between pods.

  • Read-only configuration files of websites and applications

ossfs provides limited network performance and is therefore best suited to reading small files.

  • Read-only media files, such as images and audio and video files

OSS is suitable for storing unstructured data, such as images, audio, and video files.

NAS volumes

  • Data sharing

    NAS file systems allow multiple pods to access the same data. We recommend that you use NAS file systems if data needs to be shared.

  • Big data analysis

NAS file systems provide high throughput and can sustain shared storage access from large numbers of concurrent jobs.

  • Web applications

    NAS file systems can provision storage for web applications and content management systems.

  • Log storage

    We recommend that you use NAS volumes to store log data.

For more information, see NAS volumes.

CPFS volumes

  • CPFS delivers the high throughput required for demanding workloads such as genomic computing and big data analytics, meeting the exceptional performance needs of large-scale clusters.

  • It can also be used as a high-speed cache, allowing you to stage data from slower storage tiers onto a CPFS volume for faster access by your applications.

For details, see Use a statically provisioned volume of CPFS General-purpose Edition and Use CNFS to manage isolated CPFS volumes.

Each volume type has specific limitations and guidelines, and may also incur resource costs. For more information, see the documentation for each volume type.

Use OSS volumes

  1. Use the following sample code to create an OSS volume.

    For more information, see Use an ossfs 1.0 statically provisioned volume.

    apiVersion: v1
    kind: Secret
    metadata:
      name: oss-secret
      namespace: argo
    stringData:
      akId: <your AccessKey ID> # Replace with the actual AccessKey ID.
      akSecret: <your AccessKey secret> # Replace with the actual AccessKey secret.
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv-oss
      namespace: argo
      labels:
        alicloud-pvname: pv-oss
    spec:
      capacity:
        storage: 5Gi
      accessModes:
        - ReadWriteMany
      persistentVolumeReclaimPolicy: Retain
      csi:
        driver: ossplugin.csi.alibabacloud.com
        volumeHandle: pv-oss   # Specify the name of the persistent volume (PV).
        nodePublishSecretRef:
          name: oss-secret
          namespace: argo
        volumeAttributes:
          bucket: <your bucket name> # Replace with the actual bucket name.
          url: "oss-<your region id>-internal.aliyuncs.com" # Replace <your region id> with the region ID of the bucket. For example, the internal endpoint for China (Beijing) is oss-cn-beijing-internal.aliyuncs.com.
          otherOpts: "-o max_stat_cache_size=0 -o allow_other -o multipart_size=30 -o parallel_count=20" # Custom ossfs mount options. See the optional parameters described after this template.
          path: "/"  # Mount the root directory of the bucket. You can also set this parameter to mount a subdirectory under the bucket, such as path: "testdir/testdir1".
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc-oss
      namespace: argo
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 5Gi
      selector:
        matchLabels:
          alicloud-pvname: pv-oss
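The url attribute in the PV above follows a fixed pattern for internal (VPC) OSS endpoints. As an illustration only (the helper below is hypothetical, not part of any SDK), the endpoint can be derived from a region ID:

```python
def oss_internal_endpoint(region_id: str) -> str:
    # Hypothetical helper: builds the internal (VPC) OSS endpoint
    # used in the PV's url attribute from a region ID.
    return f"oss-{region_id}-internal.aliyuncs.com"

# The China (Beijing) example from the template above:
print(oss_internal_endpoint("cn-beijing"))  # oss-cn-beijing-internal.aliyuncs.com
```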

    Optional parameters

    You can configure custom mount parameters for the OSS volume in the -o *** -o *** format. Example: -o umask=022 -o max_stat_cache_size=0 -o allow_other.

    umask

    Modifies the permissions of files accessed through ossfs. For example, if you set umask=022, file permissions in ossfs change to 755. Files uploaded through the OSS SDK or console have a default permission of 640 in ossfs. Therefore, we recommend that you specify the umask parameter if you need to separate read and write permissions.

    max_stat_cache_size

    The maximum number of files whose metadata can be cached. Metadata caching accelerates ls operations. However, if you modify files by other means, such as the OSS SDKs, the console, or ossutil, the cached metadata is not updated. As a result, the cache becomes stale and ls results may be inaccurate.

    allow_other

    Allows other users to access the mounted directory. However, these users cannot access the files in the directory.

    For more information about other parameters, see Mount options.
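The umask arithmetic described above can be checked directly: clearing the umask bits from a fully permissive mode reproduces the resulting permissions. A minimal sketch:

```python
def apply_umask(mode: int, umask: int) -> int:
    # The effective permission is the base mode with the umask bits cleared.
    return mode & ~umask

# umask=022 turns a fully permissive 777 mode into 755,
# matching the ossfs behavior described above.
print(oct(apply_umask(0o777, 0o022)))  # 0o755
```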

  2. Use the following YAML template to create a workflow that uses a volume:

    apiVersion: argoproj.io/v1alpha1
    kind: Workflow
    metadata:
      generateName: volumes-existing-
      namespace: argo
    spec:
      entrypoint: volumes-existing-example
      volumes:
      # Pass the existing volume workdir to the volumes-existing-example template.
      # The syntax follows the same structure as Kubernetes pod specifications.
      - name: workdir
        persistentVolumeClaim:
          claimName: pvc-oss
    
      templates:
      - name: volumes-existing-example
        steps:
        - - name: generate
            template: whalesay
        - - name: print
            template: print-message
    
      - name: whalesay
        container:
          image: mirrors-ssl.aliyuncs.com/busybox:latest
          command: [sh, -c]
          args: ["echo generating message in volume; echo hello world | tee /mnt/vol/hello_world.txt"]
          volumeMounts:
          - name: workdir
            mountPath: /mnt/vol
    
      - name: print-message
        container:
          image: mirrors-ssl.aliyuncs.com/alpine:latest
          command: [sh, -c]
          args: ["echo getting message from volume; find /mnt/vol; cat /mnt/vol/hello_world.txt"]
          volumeMounts:
          - name: workdir
            mountPath: /mnt/vol
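The workflow above passes data between steps purely through the shared mount: the generate step writes /mnt/vol/hello_world.txt, and the print step reads it back. The data flow can be sketched locally, with a temporary directory standing in for the mounted volume:

```python
import pathlib
import tempfile

# Local sketch of the data flow in the workflow above. A temporary
# directory stands in for the mounted volume (/mnt/vol).
with tempfile.TemporaryDirectory() as vol:
    mount = pathlib.Path(vol)

    # Step 1 ("whalesay"): echo hello world | tee /mnt/vol/hello_world.txt
    (mount / "hello_world.txt").write_text("hello world\n")

    # Step 2 ("print-message"): cat /mnt/vol/hello_world.txt
    message = (mount / "hello_world.txt").read_text()
    print(message.strip())  # hello world
```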

Use NAS volumes

  1. Use the following YAML template to create a NAS volume.

    For more information, see Mount a statically provisioned NAS volume.

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv-nas
      namespace: argo
      labels:
        alicloud-pvname: pv-nas
    spec:
      capacity:
        storage: 100Gi
      accessModes:
        - ReadWriteMany
      csi:
        driver: nasplugin.csi.alibabacloud.com
        volumeHandle: pv-nas   # Specify the name of the PV.
        volumeAttributes:
          server: "<your nas filesystem id>.cn-beijing.nas.aliyuncs.com" # Replace with the domain name of your NAS mount target.
          path: "/"
      mountOptions:
      - nolock,tcp,noresvport
      - vers=3
    ---
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: pvc-nas
      namespace: argo
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 100Gi
      selector:
        matchLabels:
          alicloud-pvname: pv-nas
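The PVC above binds to the PV through the selector.matchLabels field: every listed label must be present on the PV with the same value. This is a simplified model of Kubernetes binding (real binding additionally checks capacity and access modes), but the label check itself can be sketched as:

```python
def selector_matches(match_labels: dict, pv_labels: dict) -> bool:
    # A PVC selector with matchLabels binds only to PVs that carry
    # every listed label with the same value.
    return all(pv_labels.get(k) == v for k, v in match_labels.items())

pv_labels = {"alicloud-pvname": "pv-nas"}
pvc_selector = {"alicloud-pvname": "pv-nas"}
print(selector_matches(pvc_selector, pv_labels))  # True
```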
  2. Use the following YAML template to mount and use a NAS volume in the workflow:

    apiVersion: argoproj.io/v1alpha1
    kind: Workflow
    metadata:
      generateName: volumes-existing-
      namespace: argo
    spec:
      entrypoint: volumes-existing-example
      volumes:
      # Pass the existing volume workdir to the volumes-existing-example template.
      # The syntax follows the same structure as Kubernetes pod specifications.
      - name: workdir
        persistentVolumeClaim:
          claimName: pvc-nas
    
      templates:
      - name: volumes-existing-example
        steps:
        - - name: generate
            template: whalesay
        - - name: print
            template: print-message
    
      - name: whalesay
        container:
          image: mirrors-ssl.aliyuncs.com/busybox:latest
          command: [sh, -c]
          args: ["echo generating message in volume; echo hello world | tee /mnt/vol/hello_world.txt"]
          volumeMounts:
          - name: workdir
            mountPath: /mnt/vol
    
      - name: print-message
        container:
          image: mirrors-ssl.aliyuncs.com/alpine:latest
          command: [sh, -c]
          args: ["echo getting message from volume; find /mnt/vol; cat /mnt/vol/hello_world.txt"]
          volumeMounts:
          - name: workdir
            mountPath: /mnt/vol

Use CPFS 2.0 volumes

  1. Create a CPFS 2.0 shared volume by executing the command below.

    For additional details, see CPFS 2.0 static volumes.

    cat << EOF | kubectl apply -f -
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv-cpfs
      namespace: argo
      labels:
        alicloud-pvname: pv-cpfs
    spec:
      accessModes:
      - ReadWriteMany
      capacity:
        storage: 1000Gi
      csi:
        driver: nasplugin.csi.alibabacloud.com
        volumeAttributes:
          mountProtocol: cpfs-nfs             # Use the NFS protocol to mount the CPFS file system.
          path: "/share"                      # The mount directory must start with /share.
          volumeAs: subpath
          server: "<your cpfs id, e.g. cpfs-****>.<regionID>.cpfs.aliyuncs.com"      # The domain name of the CPFS mount target.
        volumeHandle: pv-cpfs # Specify the name of the PV.
      mountOptions:
      - rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport
      - vers=3
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc-cpfs
      namespace: argo
    spec:
      accessModes:
      - ReadWriteMany
      resources:
        requests:
          storage: 1000Gi
      selector:
        matchLabels:
          alicloud-pvname: pv-cpfs
    EOF
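As noted in the template, the path attribute of a CPFS 2.0 subpath volume must start with /share. A quick validity check for that constraint might look like this (hypothetical helper, for illustration only):

```python
def valid_cpfs_mount_path(path: str) -> bool:
    # CPFS 2.0 subpath volumes must be mounted at /share or a
    # subdirectory of it, per the constraint noted in the PV above.
    return path == "/share" or path.startswith("/share/")

print(valid_cpfs_mount_path("/share"))  # True
print(valid_cpfs_mount_path("/data"))   # False
```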
  2. Use the sample code below to create a workflow that incorporates the CPFS 2.0 volume.

    apiVersion: argoproj.io/v1alpha1
    kind: Workflow
    metadata:
      generateName: volumes-existing-
      namespace: argo
    spec:
      entrypoint: volumes-existing-example
      volumes:
      # Pass the existing volume workdir to the volumes-existing-example template.
      # The syntax follows the same structure as Kubernetes pod specifications.
      - name: workdir
        persistentVolumeClaim:
          claimName: pvc-cpfs
    
      templates:
      - name: volumes-existing-example
        steps:
        - - name: generate
            template: whalesay
        - - name: print
            template: print-message
    
      - name: whalesay
        container:
          image: mirrors-ssl.aliyuncs.com/busybox:latest
          command: [sh, -c]
          args: ["echo generating message in volume; echo hello world | tee /mnt/vol/hello_world.txt"]
          volumeMounts:
          - name: workdir
            mountPath: /mnt/vol
    
      - name: print-message
        container:
          image: mirrors-ssl.aliyuncs.com/alpine:latest
          command: [sh, -c]
          args: ["echo getting message from volume; find /mnt/vol; cat /mnt/vol/hello_world.txt"]
          volumeMounts:
          - name: workdir
            mountPath: /mnt/vol