When workflow steps need to exchange files or status, mount a shared volume so each step reads and writes from the same path. This document covers mounting statically provisioned OSS and NAS volumes to Argo Workflows steps.
When to use OSS vs NAS
Both OSS and NAS volumes support ReadWriteMany, so multiple steps can access them concurrently. Choose based on your workload:
- OSS volume: Best for object-based workloads such as reading and writing individual files, sharing model artifacts, or storing pipeline outputs. Backed by Object Storage Service (OSS).
- NAS volume: Best for POSIX-compliant workloads that require file-system semantics, such as shared scratch space or workloads that expect standard file permissions and directory operations. Backed by Apsara File Storage NAS.
For usage notes, limits, and billing details, see Storage overview.
Prerequisites
Before you begin, ensure that you have:
- An Argo Workflows cluster with the `argo` namespace created
- An OSS bucket or a NAS file system, depending on the volume type you want to mount
- The AccessKey ID and AccessKey secret of an Alibaba Cloud account with read/write access to the OSS bucket (required for OSS volumes only; NAS volumes authenticate at the file-system level and do not require an AccessKey)
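If the `argo` namespace does not exist yet, one way to check for it and create it (assuming `kubectl` is already configured against your Argo Workflows cluster) is:

```shell
# Create the argo namespace only if it is missing.
kubectl get namespace argo >/dev/null 2>&1 || kubectl create namespace argo
```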
Use an OSS volume
Step 1: Create the PV and PVC
Create a Secret, a PersistentVolume (PV), and a PersistentVolumeClaim (PVC) using the following YAML. Replace the placeholder values before applying.
For more information about static provisioning for OSS, see Mount a statically provisioned OSS volume.
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: oss-secret
  namespace: argo
stringData:
  akId: <your-access-key-id>
  akSecret: <your-access-key-secret>
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-oss
  namespace: argo
  labels:
    alicloud-pvname: pv-oss
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: ossplugin.csi.alibabacloud.com
    volumeHandle: pv-oss
    nodePublishSecretRef:
      name: oss-secret
      namespace: argo
    volumeAttributes:
      bucket: <your-bucket-name>
      url: "oss-<your-region-id>-internal.aliyuncs.com" # e.g., oss-cn-beijing-internal.aliyuncs.com
      otherOpts: "-o max_stat_cache_size=0 -o allow_other -o multipart_size=30 -o parallel_count=20"
      path: "/" # Root directory. To use a subdirectory, specify a path such as testdir/testdir1.
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-oss
  namespace: argo
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  selector:
    matchLabels:
      alicloud-pvname: pv-oss
```
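Assuming you saved the manifests above to a file such as `oss-volume.yaml` (the filename is arbitrary), apply them and confirm that the PVC binds to the PV through the `alicloud-pvname` label selector:

```shell
# Apply the Secret, PV, and PVC manifests.
kubectl apply -f oss-volume.yaml

# The PVC should reach the Bound phase once it matches the PV's label selector.
kubectl get pvc pvc-oss -n argo
```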
Replace the following placeholders:
| Placeholder | Description | Example |
|---|---|---|
| `<your-access-key-id>` | AccessKey ID with read/write access to the OSS bucket | `LTAI5tXxx` |
| `<your-access-key-secret>` | AccessKey secret corresponding to the AccessKey ID | `xXxXxXx` |
| `<your-bucket-name>` | Name of the OSS bucket | `my-workflow-bucket` |
| `<your-region-id>` | Region ID of the OSS bucket | `cn-beijing` |
Optional ossfs parameters
The `otherOpts` field accepts ossfs mount options in the format `-o <option> -o <option>`. The following options are commonly used:
| Parameter | Description | When to use |
|---|---|---|
| `umask` | Sets the permission mask for files in ossfs. For example, `umask=022` results in permission 755. The default is 640. | Set this when different processes need read or execute access to files written by the workflow, such as when splitting reads and writes across steps. |
| `max_stat_cache_size` | Maximum number of files whose metadata ossfs caches locally. Caching speeds up LIST operations. | Set to `0` to disable caching if files are modified outside ossfs (for example, through the OSS console, OSS SDKs, or ossutil). Without this, cached metadata becomes stale and LIST results may be inaccurate. |
| `allow_other` | Allows other users to access the mounted directory (but not the files inside). | Set this when you need other users to access the mounted directory. |
For the full list of ossfs options, see Options.
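As a quick illustration of the umask arithmetic, the following runs on any local Linux shell (it does not touch ossfs; the `umask` mount option applies the same masking logic to objects in the mounted bucket):

```shell
# Demonstrate how a umask determines the resulting permissions.
umask 022                          # clear the group/other write bits
demo_dir=$(mktemp -d)
touch "$demo_dir/file.txt"         # regular file: 0666 & ~022 = 644
mkdir "$demo_dir/subdir"           # directory:    0777 & ~022 = 755
file_mode=$(stat -c '%a' "$demo_dir/file.txt")
dir_mode=$(stat -c '%a' "$demo_dir/subdir")
echo "file: $file_mode, dir: $dir_mode"   # prints "file: 644, dir: 755"
```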
Step 2: Create a workflow that uses the OSS volume
The following workflow mounts the PVC as a shared volume. The generate step writes a file to the volume, and the print step reads it back — demonstrating cross-step data sharing.
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: volumes-existing-
  namespace: argo
spec:
  entrypoint: volumes-existing-example
  volumes:
    # Same syntax as a Kubernetes pod spec.
    - name: workdir
      persistentVolumeClaim:
        claimName: pvc-oss
  templates:
    - name: volumes-existing-example
      steps:
        - - name: generate
            template: whalesay
        - - name: print
            template: print-message
    - name: whalesay
      container:
        image: mirrors-ssl.aliyuncs.com/busybox:latest
        command: [sh, -c]
        args: ["echo generating message in volume; echo hello world | tee /mnt/vol/hello_world.txt"]
        volumeMounts:
          - name: workdir
            mountPath: /mnt/vol
    - name: print-message
      container:
        image: mirrors-ssl.aliyuncs.com/alpine:latest
        command: [sh, -c]
        args: ["echo getting message from volume; find /mnt/vol; cat /mnt/vol/hello_world.txt"]
        volumeMounts:
          - name: workdir
            mountPath: /mnt/vol
```
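One way to run the workflow, assuming the Argo CLI is installed and the manifest is saved to a file such as `workflow-oss.yaml` (the filename is arbitrary; `kubectl create -f` also works because the manifest uses `generateName`):

```shell
# Submit the workflow and watch it until it completes.
argo submit -n argo --watch workflow-oss.yaml

# The print step's log should contain "hello world".
argo logs -n argo @latest
```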
Use a NAS volume
Step 1: Create the PV and PVC
Create a PersistentVolume (PV) and a PersistentVolumeClaim (PVC) using the following YAML. NAS volumes authenticate at the file-system level, so no Secret is required.
For more information about static provisioning for NAS, see Mount a statically provisioned NAS volume.
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nas
  namespace: argo
  labels:
    alicloud-pvname: pv-nas
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  csi:
    driver: nasplugin.csi.alibabacloud.com
    volumeHandle: pv-nas
    volumeAttributes:
      server: "<your-nas-filesystem-id>.cn-beijing.nas.aliyuncs.com"
      path: "/"
  mountOptions:
    - nolock,tcp,noresvport
    - vers=3
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nas
  namespace: argo
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
  selector:
    matchLabels:
      alicloud-pvname: pv-nas
```
Replace `<your-nas-filesystem-id>` with the ID of your NAS file system. The region in the `server` address must match the region of your NAS file system (for example, `cn-beijing` for China (Beijing)).
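As with the OSS example, apply the manifests (saved to a file such as `nas-volume.yaml`; the filename is arbitrary) and confirm that the PVC binds:

```shell
# Apply the PV and PVC manifests.
kubectl apply -f nas-volume.yaml

# The PVC should reach the Bound phase.
kubectl get pvc pvc-nas -n argo
```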
Step 2: Create a workflow that uses the NAS volume
The following workflow uses the same two-step pattern as the OSS example, but mounts the NAS PVC instead.
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: volumes-existing-
  namespace: argo
spec:
  entrypoint: volumes-existing-example
  volumes:
    # Same syntax as a Kubernetes pod spec.
    - name: workdir
      persistentVolumeClaim:
        claimName: pvc-nas
  templates:
    - name: volumes-existing-example
      steps:
        - - name: generate
            template: whalesay
        - - name: print
            template: print-message
    - name: whalesay
      container:
        image: mirrors-ssl.aliyuncs.com/busybox:latest
        command: [sh, -c]
        args: ["echo generating message in volume; echo hello world | tee /mnt/vol/hello_world.txt"]
        volumeMounts:
          - name: workdir
            mountPath: /mnt/vol
    - name: print-message
      container:
        image: mirrors-ssl.aliyuncs.com/alpine:latest
        command: [sh, -c]
        args: ["echo getting message from volume; find /mnt/vol; cat /mnt/vol/hello_world.txt"]
        volumeMounts:
          - name: workdir
            mountPath: /mnt/vol
```
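When you are done experimenting, you can optionally remove the workflow runs and the static volume objects. Statically provisioned PVs default to the Retain reclaim policy, so deleting them does not delete the data in the OSS bucket or NAS file system:

```shell
# Delete completed workflows in the argo namespace.
argo delete -n argo --all

# Remove the claims, volumes, and the OSS credentials Secret.
kubectl delete pvc pvc-oss pvc-nas -n argo
kubectl delete pv pv-oss pv-nas
kubectl delete secret oss-secret -n argo
```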
What's next
- Storage overview: compare OSS and NAS volume characteristics, limits, and pricing
- Mount a statically provisioned OSS volume: configure advanced OSS volume options
- Mount a statically provisioned NAS volume: configure advanced NAS volume options