If your applications need to store unstructured data such as images, audio, and video, mount an Object Storage Service (OSS) bucket as a persistent volume (PV) in your ACS cluster. This topic shows you how to mount a statically provisioned OSS volume using kubectl or the ACS console, and how to verify that the volume supports data sharing and persistence across pods.
Background
OSS is a secure, cost-effective, high-capacity, and highly reliable cloud storage service. It is designed for unstructured data that is not frequently modified. For more information, see Storage overview.
OSS volumes are mounted as a local file system using a Filesystem in Userspace (FUSE) client. Because FUSE-based clients sit between the application and OSS object APIs, they have inherent POSIX compatibility limitations. ACS supports two clients.
Choose a client
| Scenario | Client | Description |
|---|---|---|
| General read/write, or scenarios requiring user permission configuration | ossfs 1.0 | Supports most POSIX operations, including random writes, append writes, and user permission settings |
| Read-intensive workloads — AI training, inference, big data, autonomous driving | ossfs 2.0 | Optimized for sequential reads and append writes; significantly higher throughput than ossfs 1.0. Currently supports GPU computing power only. To use CPU computing power, submit a ticket. |
When you're unsure, use ossfs 1.0. It provides broader POSIX compatibility and more stable operation across general workloads.
For workloads with separated read and write operations (for example, breakpoint saving or persistent log writing where reads and writes don't overlap), use two volumes: an ossfs 2.0 volume for the read-only path and an ossfs 1.0 volume for the write path.
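The two-volume split described above can be sketched as a pod spec. This is a minimal sketch, not part of this topic's walkthrough: the pod name and the PVC names (`dataset-pvc` bound to an ossfs 2.0 PV, `logs-pvc` bound to an ossfs 1.0 PV) are hypothetical, and the image is the sample image used later in this topic.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: train-worker            # hypothetical pod name
spec:
  containers:
  - name: worker
    image: registry.cn-hangzhou.aliyuncs.com/acs-sample/nginx:latest
    volumeMounts:
    - name: dataset             # read path served by ossfs 2.0
      mountPath: /data/input
      readOnly: true
    - name: logs                # write path served by ossfs 1.0
      mountPath: /data/output
  volumes:
  - name: dataset
    persistentVolumeClaim:
      claimName: dataset-pvc    # hypothetical PVC bound to an ossfs 2.0 PV (fuseType: ossfs2)
  - name: logs
    persistentVolumeClaim:
      claimName: logs-pvc       # hypothetical PVC bound to an ossfs 1.0 PV
```

Mounting the read path with `readOnly: true` keeps accidental writes off the ossfs 2.0 volume, which is optimized for reads.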
POSIX compatibility
OSS volumes cannot implement all POSIX semantics efficiently against the underlying object storage API. Operations that would require multiple API round trips (such as atomic rename), or that have no object storage equivalent (such as mutable file permissions), are either limited or unsupported.
The following table summarizes POSIX API support for both clients.
POSIX API support
Performance benchmarks
ossfs 2.0 significantly outperforms ossfs 1.0 for sequential and concurrent read workloads.
- Sequential write (single-threaded, large files): ~18x higher bandwidth
- Sequential read (single-threaded, large files): ~8.5x higher bandwidth
- Sequential read (4 threads, large files): >5x higher bandwidth
- Concurrent small file reads (128 threads): >280x higher bandwidth
If read/write performance doesn't meet your requirements, see Best practices for optimizing the performance of OSS volumes.
Prerequisites
Before you begin, ensure that you have:
- The managed-csiprovisioner component installed in your ACS cluster. To verify, go to the cluster management page in the ACS console, then choose Operations > Add-ons and check the Storage tab.
Usage notes
The following notes apply to ossfs 1.0 (general read/write scenarios). Most do not apply to ossfs 2.0, which supports a limited subset of POSIX operations.
- ACS supports only statically provisioned OSS volumes. Dynamically provisioned OSS volumes are not supported.
- The `rename` operation for files and directories is not atomic.
- Avoid concurrent writes, and avoid compression or decompression operations, directly in the mount path.

  **Important**: In multi-writer scenarios, you must coordinate writes across clients. ACS does not guarantee data consistency for conflicts caused by concurrent writes.

- Hard links are not supported.
- Buckets whose OSS storage class is Archive Storage, Cold Archive, or Deep Cold Archive cannot be mounted.
- For ossfs 1.0, `readdir` sends a `HeadObject` request for every object in the path to retrieve extended metadata. When the path contains many files, this can degrade performance. If file permissions are not required in your scenario, enable the `-o readdir_optimize` option. For more information, see New readdir optimization feature.
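As a sketch of the preceding note, the `readdir` optimization is enabled through the PV's `otherOpts` field. The snippet below is a fragment of the `csi` section of a PV manifest (the full manifest appears in Step 2); the bucket and endpoint placeholders are yours to substitute.

```yaml
csi:
  driver: ossplugin.csi.alibabacloud.com
  volumeHandle: oss-pv
  volumeAttributes:
    bucket: "<your OSS Bucket Name>"
    url: "<your OSS Bucket Endpoint>"
    # Skip the per-object HeadObject calls during readdir; listings become
    # faster, but file permissions in listings no longer reflect per-object metadata.
    otherOpts: "-o readdir_optimize"
```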
Step 1: Create an OSS bucket
1. Log on to the OSS console. In the left navigation pane, click Buckets, then click Create Bucket.
2. Configure the bucket parameters and click Create. For all parameters, see Create buckets.

   | Parameter | Description |
   |---|---|
   | Bucket Name | Enter a globally unique name. The name cannot be changed after creation. |
   | Region | Select Region-specific and choose the region where your ACS cluster resides. This lets pods access the bucket over the internal network. |

3. (Optional) To mount a subdirectory of the bucket, create it now. On the Buckets page, click the bucket name, then choose Files > Objects and click Create Directory.
4. Get the bucket endpoint. On the Buckets page, click the bucket name, go to the Overview tab, and copy the endpoint from the Port section.
   - If the bucket and your ACS cluster are in the same region, copy the VPC endpoint.
   - If the bucket is region-agnostic or in a different region, copy the public endpoint.
5. Get an AccessKey pair to authorize access to OSS. For details, see Obtain an AccessKey pair.

   To mount a bucket that belongs to a different Alibaba Cloud account, use the AccessKey pair from that account.
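If you prefer not to write the AccessKey pair into a YAML file, the Secret referenced by the PV in the next step can also be created directly with kubectl. This is an alternative sketch; the Secret name, namespace, and keys match the YAML example in Step 2, and the placeholder values are yours to substitute.

```shell
# Create the Secret that the PV's nodePublishSecretRef will reference.
# <your AccessKey ID> and <your AccessKey Secret> are placeholders.
kubectl create secret generic oss-secret \
  --namespace default \
  --from-literal=akId='<your AccessKey ID>' \
  --from-literal=akSecret='<your AccessKey Secret>'
```

If you create the Secret this way, remove the Secret section from the `oss-pv.yaml` example in Step 2 before applying it, so the Secret is not created twice.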
Step 2: Mount an OSS volume
ossfs 1.0
You can mount an ossfs 1.0 volume using kubectl or the ACS console.
kubectl
Create a PV
1. Save the following YAML as `oss-pv.yaml`. Replace `<your AccessKey ID>`, `<your AccessKey Secret>`, `<your OSS Bucket Name>`, and `<your OSS Bucket Endpoint>` with your actual values.

   ```yaml
   apiVersion: v1
   kind: Secret
   metadata:
     name: oss-secret
     namespace: default
   stringData:
     akId: <your AccessKey ID>
     akSecret: <your AccessKey Secret>
   ---
   apiVersion: v1
   kind: PersistentVolume
   metadata:
     name: oss-pv
     labels:
       alicloud-pvname: oss-pv
   spec:
     storageClassName: test
     capacity:
       storage: 20Gi
     accessModes:
       - ReadWriteMany
     persistentVolumeReclaimPolicy: Retain
     csi:
       driver: ossplugin.csi.alibabacloud.com
       volumeHandle: oss-pv
       nodePublishSecretRef:
         name: oss-secret
         namespace: default
       volumeAttributes:
         bucket: "<your OSS Bucket Name>"
         url: "<your OSS Bucket Endpoint>"
         otherOpts: "-o umask=022 -o allow_other"
   ```

   The following table describes the PV parameters.

   | Parameter | Description | Required | Default |
   |---|---|---|---|
   | `alicloud-pvname` | Label used to bind a PVC to this PV | Yes | — |
   | `storageClassName` | Used only to bind a PVC. No actual StorageClass association is required. Must match `spec.storageClassName` in the PVC. | Yes | — |
   | `storage` | Declared storage capacity. For statically provisioned OSS volumes, this is for declaration only — the actual available capacity is determined by the OSS console, not this value. | Yes | — |
   | `accessModes` | Access mode for the volume | Yes | — |
   | `persistentVolumeReclaimPolicy` | Reclaim policy after the PVC is released | No | Retain |
   | `driver` | CSI driver name. Set to `ossplugin.csi.alibabacloud.com` for the Alibaba Cloud OSS CSI plugin. | Yes | — |
   | `volumeHandle` | Unique identifier for the PV. Must match `metadata.name`. | Yes | — |
   | `nodePublishSecretRef` | References the Secret that stores the AccessKey pair | Yes | — |
   | `bucket` | OSS bucket name | Yes | — |
   | `url` | OSS bucket endpoint. Use the VPC endpoint (for example, `oss-cn-shanghai-internal.aliyuncs.com`) if the bucket and cluster are in the same region; use the public endpoint (for example, `oss-cn-shanghai.aliyuncs.com`) otherwise. | Yes | — |
   | `otherOpts` | Additional mount options in `-o * -o *` format. For example: `-o umask=022 -o max_stat_cache_size=100000 -o allow_other`. See Options supported by ossfs and ossfs 1.0 configuration best practices. | No | — |

   Common `otherOpts` values:

   - `umask=022`: Sets file permissions to 755. Resolves permission issues for objects uploaded via the SDK or OSS console (default permission: 640). Recommended for read/write-split or multi-user access scenarios.
   - `max_stat_cache_size=100000`: Caches up to 100,000 object metadata entries in memory, improving `ls` and `stat` performance. Cached metadata may become stale if objects are modified via the OSS console, SDK, or ossutil. Set to `0` to disable caching, or use `stat_cache_expire` to shorten the expiration time.
   - `allow_other`: Allows users other than the mounting user to access the mount target. Useful in multi-user shared environments.

2. Create the Secret and PV:

   ```shell
   kubectl create -f oss-pv.yaml
   ```

3. Verify the PV is available:

   ```shell
   kubectl get pv
   ```

   Expected output:

   ```
   NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
   oss-pv   20Gi       RWX            Retain           Available           test           <unset>                          9s
   ```
Create a PVC
1. Save the following YAML as `oss-pvc.yaml`.

   ```yaml
   apiVersion: v1
   kind: PersistentVolumeClaim
   metadata:
     name: oss-pvc
   spec:
     storageClassName: test
     accessModes:
       - ReadWriteMany
     resources:
       requests:
         storage: 20Gi
     selector:
       matchLabels:
         alicloud-pvname: oss-pv
   ```

   | Parameter | Description | Required |
   |---|---|---|
   | `storageClassName` | Must match `spec.storageClassName` in the PV | Yes |
   | `accessModes` | Access mode. Must match the PV. | Yes |
   | `storage` | Storage capacity to allocate to the pod. Cannot exceed the PV capacity. | Yes |
   | `alicloud-pvname` | Label selector used to bind to the PV. Must match `metadata.labels.alicloud-pvname` in the PV. | Yes |

2. Create the PVC:

   ```shell
   kubectl create -f oss-pvc.yaml
   ```

3. Verify the PVC is bound to the PV:

   ```shell
   kubectl get pvc
   ```

   Expected output:

   ```
   NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
   oss-pvc   Bound    oss-pv   20Gi       RWX            test           <unset>                 6s
   ```
Create an application and mount the OSS volume
1. Save the following YAML as `oss-test.yaml`. This creates a Deployment with two pods, both mounting the OSS bucket at `/data`.

   ```yaml
   apiVersion: apps/v1
   kind: Deployment
   metadata:
     name: oss-test
     labels:
       app: nginx
   spec:
     replicas: 2
     selector:
       matchLabels:
         app: nginx
     template:
       metadata:
         labels:
           app: nginx
       spec:
         containers:
         - name: nginx
           image: registry.cn-hangzhou.aliyuncs.com/acs-sample/nginx:latest
           ports:
           - containerPort: 80
           volumeMounts:
           - name: pvc-oss
             mountPath: /data
         volumes:
         - name: pvc-oss
           persistentVolumeClaim:
             claimName: oss-pvc
   ```

2. Create the Deployment:

   ```shell
   kubectl create -f oss-test.yaml
   ```

3. Verify both pods are running:

   ```shell
   kubectl get pod | grep oss-test
   ```

   Expected output:

   ```
   oss-test-****-***a   1/1     Running   0          28s
   oss-test-****-***b   1/1     Running   0          28s
   ```

4. View the mount path. By default, the directory is empty and no output is returned.

   ```shell
   kubectl exec oss-test-****-***a -- ls /data
   ```
ACS console
Create a PV
1. Log on to the ACS console. On the Clusters page, click the cluster name to go to the cluster management page.
2. In the left navigation pane, choose Volumes > Persistent Volumes, then click Create.
3. Configure the parameters and click Create. After creation, the PV appears on the Persistent Volumes page with no PVC bound yet.

   | Parameter | Description | Example |
   |---|---|---|
   | PV Type | Select OSS | OSS |
   | Name | Custom name for the PV | oss-pv |
   | Capacity | Declared storage capacity. For statically provisioned OSS volumes, the actual available capacity is determined by the OSS console. | 20 Gi |
   | Access Mode | ReadOnlyMany: mounted by multiple pods in read-only mode. ReadWriteMany: mounted by multiple pods in read/write mode. | ReadWriteMany |
   | Access Certificate | Store the AccessKey pair in a Secret. Select Create Secret and fill in the namespace, name, AccessKey ID, and AccessKey secret. | Namespace: default; Name: oss-secret |
   | Bucket ID | Select the OSS bucket to mount | oss-acs-*** |
   | OSS Path | Directory to mount. Defaults to the root directory (`/`). To mount a subdirectory (for example, `/dir`), make sure it exists first. | / |
   | Endpoint | Select Internal Endpoint if the bucket and cluster are in the same region; select Public Endpoint otherwise. | Internal Endpoint |
Create a PVC
1. In the left navigation pane, choose Volumes > Persistent Volume Claims, then click Create.
2. Configure the parameters and click Create. After creation, the PVC appears on the Persistent Volume Claims page with status Bound.

   | Parameter | Description | Example |
   |---|---|---|
   | PVC Type | Select OSS | OSS |
   | Name | Custom name for the PVC | oss-pvc |
   | Allocation Mode | Select Existing Volume | Existing Volume |
   | Existing Volume | Select the PV created earlier | oss-pv |
   | Total | Storage capacity to allocate to the pod. Cannot exceed the PV capacity. | 20 Gi |
Create an application and mount the OSS volume
1. In the left navigation pane, choose Workloads > Deployments, then click Create From Image.
2. Configure the deployment parameters and click Create. For all parameters, see Create a stateless application from a Deployment.

   | Configuration page | Parameter | Description | Example |
   |---|---|---|---|
   | Basic Information | Application Name | Custom name for the Deployment | oss-test |
   | Basic Information | Number Of Replicas | Number of pod replicas | 2 |
   | Container Configuration | Image Name | Container image address | registry.cn-hangzhou.aliyuncs.com/acs-sample/nginx:latest |
   | Container Configuration | Required Resources | vCPU and memory resources | 0.25 vCPU, 0.5 GiB |
   | Container Configuration | Volume | Click Add Cloud Storage Claim to add a PVC. Set Mount Source to the PVC created earlier and Container Path to the mount path. | Mount Source: oss-pvc; Container Path: /data |

3. On the Stateless page, click the application name. On the Pods tab, verify that all pods are in the Running state.
ossfs 2.0
Statically provisioned ossfs 2.0 volumes can only be mounted using kubectl. The ACS console does not support this operation.
Create a PV
1. Save the following YAML as `oss-pv.yaml`. Replace `<your AccessKey ID>`, `<your AccessKey Secret>`, `<your OSS Bucket Name>`, and `<your OSS Bucket Endpoint>` with your actual values.

   ```yaml
   apiVersion: v1
   kind: Secret
   metadata:
     name: oss-secret
     namespace: default
   stringData:
     akId: <your AccessKey ID>
     akSecret: <your AccessKey Secret>
   ---
   apiVersion: v1
   kind: PersistentVolume
   metadata:
     name: oss-pv
     labels:
       alicloud-pvname: oss-pv
   spec:
     storageClassName: test
     capacity:
       storage: 20Gi
     accessModes:
       - ReadWriteMany
     persistentVolumeReclaimPolicy: Retain
     csi:
       driver: ossplugin.csi.alibabacloud.com
       volumeHandle: oss-pv
       nodePublishSecretRef:
         name: oss-secret
         namespace: default
       volumeAttributes:
         fuseType: ossfs2  # Declares the use of the ossfs 2.0 client
         bucket: "<your OSS Bucket Name>"
         url: "<your OSS Bucket Endpoint>"
         otherOpts: "-o close_to_open=false"  # Mount options differ from ossfs 1.0
   ```

   The ossfs 2.0 PV shares most parameters with ossfs 1.0. The key differences are:

   | Parameter | Description | Required | Default |
   |---|---|---|---|
   | `fuseType` | Specifies the FUSE client. Must be set to `ossfs2` to use ossfs 2.0. | Yes | — |
   | `otherOpts` | Mount options in `-o * -o *` format. The supported options differ from ossfs 1.0 and are not compatible. For example: `-o close_to_open=false`. See ossfs 2.0 mount options. | No | — |

   `close_to_open` is enabled by default. When enabled, ossfs 2.0 sends a `GetObjectMeta` request each time a file is opened to fetch the latest metadata from OSS. This ensures up-to-date metadata but increases latency when reading many small files; set `-o close_to_open=false` to skip the check when slightly stale metadata is acceptable. For all other PV parameters, see the ossfs 1.0 PV parameter table above.

2. Create the Secret and PV:

   ```shell
   kubectl create -f oss-pv.yaml
   ```

3. Verify the PV is available:

   ```shell
   kubectl get pv
   ```

   Expected output:

   ```
   NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
   oss-pv   20Gi       RWX            Retain           Available           test           <unset>                          9s
   ```
Create a PVC
The PVC YAML for ossfs 2.0 is identical to ossfs 1.0. Save the following as `oss-pvc.yaml` and apply it.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: oss-pvc
spec:
  storageClassName: test
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 20Gi
  selector:
    matchLabels:
      alicloud-pvname: oss-pv
```

```shell
kubectl create -f oss-pvc.yaml
```

Verify the PVC is bound:

```shell
kubectl get pvc
```

Expected output:

```
NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
oss-pvc   Bound    oss-pv   20Gi       RWX            test           <unset>                 6s
```
Create an application and mount the OSS volume
1. Save the following YAML as `oss-test.yaml` and create the Deployment:

   ```yaml
   apiVersion: apps/v1
   kind: Deployment
   metadata:
     name: oss-test
     labels:
       app: nginx
   spec:
     replicas: 2
     selector:
       matchLabels:
         app: nginx
     template:
       metadata:
         labels:
           app: nginx
       spec:
         containers:
         - name: nginx
           image: registry.cn-hangzhou.aliyuncs.com/acs-sample/nginx:latest
           ports:
           - containerPort: 80
           volumeMounts:
           - name: pvc-oss
             mountPath: /data
         volumes:
         - name: pvc-oss
           persistentVolumeClaim:
             claimName: oss-pvc
   ```

   ```shell
   kubectl create -f oss-test.yaml
   ```

2. Verify both pods are running:

   ```shell
   kubectl get pod | grep oss-test
   ```

   Expected output:

   ```
   oss-test-****-***a   1/1     Running   0          28s
   oss-test-****-***b   1/1     Running   0          28s
   ```

3. View the mount path. By default, the directory is empty and no output is returned.

   ```shell
   kubectl exec oss-test-****-***a -- ls /data
   ```
Verify data sharing and persistence
The Deployment provisions two pods that share the same OSS bucket. Use the following steps to confirm the volume supports both data sharing across pods and persistence across pod restarts.
Verify shared storage
1. Get the pod names:

   ```shell
   kubectl get pod | grep oss-test
   ```

   Sample output:

   ```
   oss-test-****-***a   1/1     Running   0          40s
   oss-test-****-***b   1/1     Running   0          40s
   ```

2. Write a file from one pod:

   ```shell
   kubectl exec oss-test-****-***a -- touch /data/test.txt
   ```

3. Read the file from the other pod:

   ```shell
   kubectl exec oss-test-****-***b -- ls /data
   ```

   Expected output:

   ```
   test.txt
   ```

   The file written by oss-test-****-***a is visible from oss-test-****-***b, confirming shared storage is working.
Verify data persistence after pod restart
1. Restart the Deployment:

   ```shell
   kubectl rollout restart deploy oss-test
   ```

2. Wait for the new pods to reach the Running state:

   ```shell
   kubectl get pod | grep oss-test
   ```

   Sample output:

   ```
   oss-test-****-***c   1/1     Running   0          67s
   oss-test-****-***d   1/1     Running   0          49s
   ```

3. Verify the data written earlier still exists in the new pods:

   ```shell
   kubectl exec oss-test-****-***c -- ls /data
   ```

   Expected output:

   ```
   test.txt
   ```

   The file persists after the Deployment restarts, confirming data persistence.