Volumes let workflow steps share data by reading and writing to a common mount path. This topic describes how to mount existing Object Storage Service (OSS) volumes and File Storage NAS volumes in a workflow cluster and reference them in Argo Workflows.
Prerequisites
Before you begin, ensure that you have:
- A workflow cluster created in ACK.
- An OSS bucket (for OSS volumes) or a NAS file system (for NAS volumes) in the same region as your cluster.
- An AccessKey ID and AccessKey Secret with read and write access to the storage resource.
Use cases
- Share intermediate data between steps: write output from one step (for example, whalesay) to a volume, then read it in a subsequent step (for example, print-message).
- Persist workflow artifacts: store workflow outputs in OSS or NAS for downstream use after the workflow completes.
Background: PV, PVC, and Argo Workflows
A PersistentVolume (PV) is a cluster-level storage resource provisioned by an administrator. A PersistentVolumeClaim (PVC) is a request for that storage by a workload: Pods and workflow steps reference the PVC, not the PV directly. Think of the PV as the actual storage and the PVC as the claim to use it.
The workflow YAML uses the same volumes and volumeMounts syntax as a Kubernetes Pod spec, so existing Kubernetes knowledge applies directly.
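As a minimal sketch of that shared syntax, a plain Pod would reference a PVC like this (the Pod and container names here are hypothetical, chosen only for illustration):

```yaml
# Minimal Pod sketch showing the volumes/volumeMounts syntax
# that Argo Workflow templates reuse unchanged.
apiVersion: v1
kind: Pod
metadata:
  name: pvc-demo          # Hypothetical name
spec:
  containers:
    - name: app
      image: alpine:latest
      command: [sh, -c, "ls /mnt/vol"]
      volumeMounts:
        - name: workdir   # Must match a volumes[].name below
          mountPath: /mnt/vol
  volumes:
    - name: workdir
      persistentVolumeClaim:
        claimName: pvc-oss # The PVC is referenced, never the PV
```

In a workflow, the `volumes` section moves to `spec.volumes` of the Workflow and the `volumeMounts` section into each template's container, as shown in the samples below.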
Use an OSS volume
Apply the following YAML to create a Secret, a PV, and a PVC for your OSS bucket.
For the full list of ossfs options, see Mount a statically provisioned ossfs 1.0 volume.
apiVersion: v1
kind: Secret
metadata:
  name: oss-secret # Referenced by the PV's nodePublishSecretRef
  namespace: default
stringData:
  akId: <your-access-key-id> # Replace with your AccessKey ID
  akSecret: <your-access-key-secret> # Replace with your AccessKey Secret
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-oss
  labels:
    alicloud-pvname: pv-oss # Used by the PVC selector to bind to this PV
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany # Allows concurrent access from multiple workflow steps
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: ossplugin.csi.alibabacloud.com
    volumeHandle: pv-oss # Must match metadata.name above
    nodePublishSecretRef:
      name: oss-secret # Secret containing the AccessKey credentials
      namespace: default
    volumeAttributes:
      bucket: <your-bucket-name> # Replace with your OSS bucket name
      url: "oss-<region-id>-internal.aliyuncs.com" # Internal endpoint; replace <region-id> with your region, for example, cn-beijing
      otherOpts: "-o umask=022 -o max_stat_cache_size=1000000 -o allow_other"
      path: "/" # Root directory of the bucket; use a subdirectory if needed, for example, testdir/testdir1
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-oss # Referenced by the workflow's volumes[].persistentVolumeClaim.claimName
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  selector:
    matchLabels:
      alicloud-pvname: pv-oss # Binds this PVC to the pv-oss PV
OSS volume parameters
Configure additional ossfs options in the otherOpts field using the -o key=value format.
| Parameter | Description | Default |
| --- | --- | --- |
| umask | Permission mask for files in ossfs. Setting umask=022 gives files permission 755. Files uploaded through the OSS SDK or console default to 640 in ossfs. Use this parameter when reads and writes need different permissions. | None |
| max_stat_cache_size | Maximum number of file metadata entries to cache. Caching accelerates file traversal and reads. Set to 0 to disable caching if you need strong data consistency (for example, when other processes modify the bucket directly through the OSS console, SDK, or ossutil). | 100,000 entries (about 40 MB) |
| allow_other | Allows other OS users to access the mounted directory. These users cannot access the files inside the directory. | None |
For additional parameters, see Options supported by ossfs.
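For example, if steps must always see the latest objects written by external tools, you might trade traversal speed for strong consistency by disabling the metadata cache. This is a sketch of the relevant `volumeAttributes` fragment only; verify the flags against the ossfs version your cluster runs:

```yaml
volumeAttributes:
  bucket: <your-bucket-name>
  url: "oss-<region-id>-internal.aliyuncs.com"
  # max_stat_cache_size=0 disables metadata caching so changes made
  # through the OSS console, SDK, or ossutil are visible immediately;
  # directory listings and reads become correspondingly slower.
  otherOpts: "-o umask=022 -o max_stat_cache_size=0 -o allow_other"
  path: "/"
```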
Apply the following workflow YAML. It runs two sequential steps: whalesay writes a message to the shared volume, and print-message reads it back.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: volumes-existing-
  namespace: default
spec:
  entrypoint: volumes-existing-example
  volumes:
    # Same syntax as a Kubernetes Pod spec
    - name: workdir
      persistentVolumeClaim:
        claimName: pvc-oss # References the PVC created above
  templates:
    - name: volumes-existing-example
      steps:
        - - name: generate
            template: whalesay
        - - name: print
            template: print-message
    - name: whalesay
      container:
        image: docker/whalesay:latest
        command: [sh, -c]
        args: ["echo generating message in volume; cowsay hello world | tee /mnt/vol/hello_world.txt"]
        volumeMounts:
          # Same syntax as a Kubernetes Pod spec
          - name: workdir
            mountPath: /mnt/vol
    - name: print-message
      container:
        image: alpine:latest
        command: [sh, -c]
        args: ["echo getting message from volume; find /mnt/vol; cat /mnt/vol/hello_world.txt"]
        volumeMounts:
          - name: workdir
            mountPath: /mnt/vol
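If steps should write under a dedicated subdirectory of the volume rather than its root, the standard Kubernetes subPath field can be added to the mount. This is a sketch; the subdirectory name is hypothetical, and you should verify subPath behavior with your CSI driver version:

```yaml
volumeMounts:
  - name: workdir
    mountPath: /mnt/vol
    subPath: workflow-output # Hypothetical subdirectory inside the volume
```

With this mount, a step writing /mnt/vol/hello_world.txt stores the file under workflow-output/ in the backing storage, which keeps outputs from different workflows separated.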
Use a NAS volume
Apply the following YAML to create a statically provisioned NAS PV and PVC.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nas
  labels:
    alicloud-pvname: pv-nas # Used by the PVC selector to bind to this PV
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany # Allows concurrent access from multiple workflow steps
  csi:
    driver: nasplugin.csi.alibabacloud.com
    volumeHandle: pv-nas # Must match metadata.name above
    volumeAttributes:
      server: "<your-nas-filesystem-id>.cn-beijing.nas.aliyuncs.com" # Replace with your NAS mount target address
      path: "/"
  mountOptions:
    - nolock,tcp,noresvport
    - vers=3
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nas # Referenced by the workflow's volumes[].persistentVolumeClaim.claimName
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
  selector:
    matchLabels:
      alicloud-pvname: pv-nas # Binds this PVC to the pv-nas PV
Apply the following workflow YAML. The structure is identical to the OSS example; only claimName differs.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: volumes-existing-
  namespace: default
spec:
  entrypoint: volumes-existing-example
  volumes:
    # Same syntax as a Kubernetes Pod spec
    - name: workdir
      persistentVolumeClaim:
        claimName: pvc-nas # References the PVC created above
  templates:
    - name: volumes-existing-example
      steps:
        - - name: generate
            template: whalesay
        - - name: print
            template: print-message
    - name: whalesay
      container:
        image: docker/whalesay:latest
        command: [sh, -c]
        args: ["echo generating message in volume; cowsay hello world | tee /mnt/vol/hello_world.txt"]
        volumeMounts:
          - name: workdir
            mountPath: /mnt/vol
    - name: print-message
      container:
        image: alpine:latest
        command: [sh, -c]
        args: ["echo getting message from volume; find /mnt/vol; cat /mnt/vol/hello_world.txt"]
        volumeMounts:
          - name: workdir
            mountPath: /mnt/vol