Mount an Object Storage Service (OSS) bucket as a persistent volume (PV) in an ACK cluster using ossfs 2.0 — the FUSE (Filesystem in Userspace)-based client optimized for sequential read and write workloads. With dynamic provisioning, a StorageClass acts as a template that automatically creates and binds a PV when you create a PersistentVolumeClaim (PVC), so your application can read and write OSS data through standard POSIX interfaces without managing PVs manually.
ossfs 2.0 excels in sequential throughput, making it well-suited for AI training, inference, and big data processing. For performance benchmarks, see ossfs 2.0 client performance benchmarks.
Before you begin
Check the following hard constraints before starting. ossfs 2.0 is not suitable for all workloads — proceeding without checking these will cost you time.
Stop here if any of these apply to you:
- Random or concurrent writes: ossfs 2.0 cannot guarantee data consistency for random or concurrent write operations. Use ossfs 1.0 for databases, collaborative editing, or any workload that modifies file content in place.
- Kubernetes version below 1.26: RRSA authentication requires Kubernetes 1.26 or later. Upgrade your cluster if needed.
- CSI plugin version below 1.33.1: Both authentication methods require CSI plugin version 1.33.1 or later. Update csi-plugin and csi-provisioner if needed.
Before you begin, ensure that you have:
- An ACK cluster with the CSI plugin installed
- An OSS bucket in the same Alibaba Cloud account as your cluster. To mount a bucket across accounts, use RRSA authentication — see FAQ about ossfs 2.0 volumes for details
Usage notes
- Data changes are immediate: Any file modification or deletion at an ossfs mount point — from inside a pod or from the host node — is immediately synchronized to the source OSS bucket. Enable versioning to protect against accidental data loss.
- Health checks: Configure a liveness probe on pods that use OSS volumes to verify mount point availability. If the mount becomes unhealthy, Kubernetes automatically restarts the pod.
- Multipart uploads: Files larger than 10 MB are automatically uploaded using multipart upload. If an upload is interrupted, incomplete parts remain in the bucket and incur storage costs. Configure a lifecycle rule to clean them up automatically, or delete them manually.
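For example, an OSS lifecycle configuration that aborts incomplete multipart uploads can be sketched as follows. This is an illustrative fragment, not a required step: the rule ID and the 7-day window are example values, and the configuration is applied through the OSS console or the PutBucketLifecycle API.

```xml
<LifecycleConfiguration>
  <Rule>
    <ID>abort-incomplete-multipart</ID>
    <!-- An empty prefix applies the rule to the whole bucket -->
    <Prefix></Prefix>
    <Status>Enabled</Status>
    <!-- Delete parts of multipart uploads that have not completed within 7 days -->
    <AbortMultipartUpload>
      <Days>7</Days>
    </AbortMultipartUpload>
  </Rule>
</LifecycleConfiguration>
```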
Choose an authentication method
| | RRSA | AccessKey |
|---|---|---|
| Credential type | Temporary, auto-rotating (via OIDC + STS) | Long-term static key |
| Permission scope | Pod-level isolation | Shared across all pods using the same Secret |
| Suitable for | Production, multi-tenant environments | Development and testing only |
| Risk on key exposure | Low — credentials expire automatically | High — key must be manually revoked and rotated |
| Key rotation | Automatic | Manual; requires updating the Secret and restarting pods |
Use RRSA for production environments. AccessKey authentication is acceptable for development and testing only — a static key that is exposed or misconfigured requires manual revocation and causes a service interruption during rotation.
Both methods follow the same overall workflow:
1. Set up credentials (RAM role for RRSA, or RAM user + Secret for AccessKey)
2. Create a StorageClass
3. Create a PVC
4. Mount the volume in your application
Method 1: Authenticate using RRSA
RRSA (RAM Roles for Service Accounts) provides pod-level permission isolation using temporary, auto-rotating credentials through OpenID Connect (OIDC) and Security Token Service (STS). For background, see Use RRSA to authorize different pods to access different cloud services.
If you used RRSA with a CSI plugin version earlier than 1.30.4, see [Product Changes] Version upgrade and mounting process optimization of ossfs in CSI for required RAM role authorization changes.
Step 1: Create a RAM role
Skip this step if you have already mounted an OSS volume in this cluster using RRSA.
1. Enable the RRSA feature in the ACK console.

2. Create a RAM role for an OIDC Identity Provider. Use the following parameters for the sample role `demo-role-for-rrsa`:

   | Parameter | Value |
   |---|---|
   | Identity Provider Type | Select OIDC Identity Provider |
   | Identity Provider | Select the provider associated with your cluster, such as `ack-rrsa-<cluster_id>` |
   | oidc:iss | Keep the default value |
   | oidc:aud | Keep the default value |
   | oidc:sub — Key | Select `oidc:sub` |
   | oidc:sub — Operator | Select StringEquals |
   | oidc:sub — Value | Enter `system:serviceaccount:ack-csi-fuse:csi-fuse-ossfs`. The namespace `ack-csi-fuse` is fixed and cannot be customized. The service account name `csi-fuse-ossfs` can be changed — see FAQ about ossfs 2.0 volumes. |
   | Role Name | Enter `demo-role-for-rrsa` |
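For orientation, the trust policy these console settings produce looks roughly like the following sketch. The account ID, cluster ID, and provider name are placeholders; rely on the document that the console actually generates for your cluster.

```json
{
  "Statement": [
    {
      "Action": "sts:AssumeRoleWithOIDC",
      "Effect": "Allow",
      "Principal": {
        "Federated": [
          "acs:ram::<account-id>:oidc-provider/ack-rrsa-<cluster_id>"
        ]
      },
      "Condition": {
        "StringEquals": {
          "oidc:sub": "system:serviceaccount:ack-csi-fuse:csi-fuse-ossfs"
        }
      }
    }
  ],
  "Version": "1"
}
```

The `StringEquals` condition on `oidc:sub` is what restricts the role to the ossfs service account, which is the basis of the pod-level isolation described above.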
Step 2: Grant OSS permissions to the RAM role
1. Create a custom policy to grant OSS access to the role. Replace `mybucket` with your actual bucket name.

   - Read-only access

     ```json
     {
       "Statement": [
         {
           "Action": [
             "oss:Get*",
             "oss:List*"
           ],
           "Effect": "Allow",
           "Resource": [
             "acs:oss:*:*:mybucket",
             "acs:oss:*:*:mybucket/*"
           ]
         }
       ],
       "Version": "1"
     }
     ```

   - Read-write access

     ```json
     {
       "Statement": [
         {
           "Action": "oss:*",
           "Effect": "Allow",
           "Resource": [
             "acs:oss:*:*:mybucket",
             "acs:oss:*:*:mybucket/*"
           ]
         }
       ],
       "Version": "1"
     }
     ```

2. (Optional) If objects in the bucket are encrypted with a customer master key (CMK) in Key Management Service (KMS), grant KMS access as well. See Encryption.

3. Grant the custom policy to the `demo-role-for-rrsa` role.

   Note: To reuse an existing RAM role that already has OSS access, modify its trust policy instead. See Use an existing RAM role and grant the required permissions to the RAM role.
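If you prefer the CLI to the console, the grant can be sketched with the Alibaba Cloud CLI. The policy name `ack-ossfs2-access` is an assumed example; substitute the name you gave your custom policy.

```shell
# Attach the custom OSS policy to the RRSA role (policy name is illustrative)
aliyun ram AttachPolicyToRole \
  --PolicyType Custom \
  --PolicyName ack-ossfs2-access \
  --RoleName demo-role-for-rrsa
```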
Step 3: Create a StorageClass
1. Create `ossfs2-sc-rrsa.yaml`:

   ```yaml
   apiVersion: storage.k8s.io/v1
   kind: StorageClass
   metadata:
     name: ossfs2-sc
   parameters:
     bucket: cnfs-oss-test # The name of your OSS bucket.
     path: /subpath # The subdirectory to mount. Leave blank to mount the bucket root.
     url: oss-cn-hangzhou-internal.aliyuncs.com # Internal endpoint (same region or VPC); use public endpoint for cross-region.
     authType: rrsa
     roleName: demo-role-for-rrsa # The RAM role created in Step 1.
     fuseType: ossfs2 # Must be ossfs2 to use the ossfs 2.0 client.
     volumeAs: sharepath # Each PV gets a unique subdirectory under path, e.g., /subpath/ack/<pv-name>.
     otherOpts: "-o close_to_open=false" # false (default): cache metadata for lower latency. true: fetch fresh metadata on every file open (increases latency and API costs).
   provisioner: ossplugin.csi.alibabacloud.com # Fixed value.
   reclaimPolicy: Retain # Only Retain is supported. Deleting the PVC does not delete the PV or OSS data.
   volumeBindingMode: Immediate # OSS volumes have no zone-based node affinity requirements.
   ```

   The following table describes the `parameters` fields:

   | Parameter | Required | Description | Default / behavior when omitted |
   |---|---|---|---|
   | `bucket` | Yes | The name of the OSS bucket to mount | — |
   | `path` | Yes | The base path within the bucket. With `volumeAs: sharepath`, each PV gets a unique subdirectory under this path, such as `/ack/<pv-name>` | Mounts the bucket root if left blank |
   | `url` | Yes | The OSS bucket endpoint. Use an internal endpoint (`http://oss-<region>-internal.aliyuncs.com`) if the cluster and bucket are in the same region or connected via Virtual Private Cloud (VPC). Use a public endpoint (`http://oss-<region>.aliyuncs.com`) for cross-region access. Note: The `vpc100-oss-<region>.aliyuncs.com` internal endpoint format is deprecated — switch to the new format. | — |
   | `fuseType` | Yes | Must be set to `ossfs2` to use the ossfs 2.0 client | — |
   | `authType` | Yes | Set to `rrsa` | — |
   | `roleName` | Yes | The RAM role to assume. To assign different permissions per PV, create a separate RAM role for each permission set and specify it here | — |
   | `volumeAs` | No | Set to `sharepath` so each PV creates a separate subdirectory under `path` | — |
   | `otherOpts` | No | Additional mount options in the format `-o <flag> -o <flag>`. For all available options, see ossfs 2.0 mount options | No additional options |

2. Apply the StorageClass:

   ```shell
   kubectl create -f ossfs2-sc-rrsa.yaml
   ```

3. Verify the StorageClass is created:

   ```shell
   kubectl get sc ossfs2-sc
   ```

   Expected output:

   ```
   NAME        PROVISIONER                      RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
   ossfs2-sc   ossplugin.csi.alibabacloud.com   Retain          Immediate           false                  10s
   ```
Step 4: Create a PVC
Creating the PVC triggers automatic provisioning of the underlying PV.
1. Create `ossfs2-pvc-dynamic.yaml`:

   ```yaml
   kind: PersistentVolumeClaim
   apiVersion: v1
   metadata:
     name: pvc-ossfs2
     namespace: default
   spec:
     accessModes:
       - ReadWriteMany
     resources:
       requests:
         storage: 20Gi
     storageClassName: ossfs2-sc
   ```

2. Create the PVC:

   ```shell
   kubectl create -f ossfs2-pvc-dynamic.yaml
   ```

3. Verify the PVC is bound to the automatically created PV:

   ```shell
   kubectl get pvc pvc-ossfs2
   ```

   Expected output:

   ```
   NAME         STATUS   VOLUME                   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
   pvc-ossfs2   Bound    d-bp17y03tpy2b8x******   20Gi       RWX            ossfs2-sc      25s
   ```
Step 5: Mount the volume in your application
1. Create `ossfs2-test.yaml`. The following StatefulSet mounts the PVC at `/data`:

   ```yaml
   apiVersion: apps/v1
   kind: StatefulSet
   metadata:
     name: ossfs2-test
     namespace: default
   spec:
     replicas: 1
     selector:
       matchLabels:
         app: ossfs2-test
     template:
       metadata:
         labels:
           app: ossfs2-test
       spec:
         containers:
           - name: nginx
             image: anolis-registry.cn-zhangjiakou.cr.aliyuncs.com/openanolis/nginx:1.14.1-8.6
             ports:
               - containerPort: 80
             volumeMounts:
               - name: pvc-ossfs2
                 mountPath: /data
         volumes:
           - name: pvc-ossfs2
             persistentVolumeClaim:
               claimName: pvc-ossfs2
   ```

2. Deploy the application:

   ```shell
   kubectl create -f ossfs2-test.yaml
   ```

3. Verify the pod is running:

   ```shell
   kubectl get pod -l app=ossfs2-test
   ```

   Expected output:

   ```
   NAME            READY   STATUS    RESTARTS   AGE
   ossfs2-test-0   1/1     Running   0          2m
   ```

4. Verify read and write access to the mount point:

   ```shell
   # Write a test file
   kubectl exec -it ossfs2-test-0 -- touch /data/test.txt
   # List files at the mount point
   kubectl exec -it ossfs2-test-0 -- ls /data
   ```

   The output should include `test.txt`, confirming the volume is mounted and writable.
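As recommended in the usage notes, you can also add a liveness probe to the container spec so Kubernetes restarts the pod if the mount point stops responding. The following fragment is a sketch to merge into the container definition; the probe command and timings are illustrative and should be tuned for your workload.

```yaml
# Add under the nginx container in the StatefulSet above
livenessProbe:
  exec:
    command: ["ls", "/data"]   # Fails when the FUSE mount at /data is broken
  initialDelaySeconds: 10      # Allow time for the volume to mount
  periodSeconds: 30
  timeoutSeconds: 10
  failureThreshold: 3          # Restart after three consecutive failures
```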
Method 2: Authenticate using an AccessKey
This method uses a long-term static key. Use it for development and testing only. For production, use RRSA authentication instead.
If the AccessKey referenced by a PV is revoked or its permissions change, every pod using that volume loses access immediately. To restore access: update the credentials in the Secret, then restart the pods. This causes a brief service interruption and should be performed during a maintenance window.
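The rotation procedure can be sketched with kubectl, reusing the Secret and workload names from this topic's examples (substitute your own names):

```shell
# Re-create the Secret in place with the new AccessKey pair
kubectl create -n default secret generic oss-secret \
  --from-literal='akId=<new-access-key-id>' \
  --from-literal='akSecret=<new-access-key-secret>' \
  --dry-run=client -o yaml | kubectl apply -f -

# Restart the pods that mount the volume so they pick up the new credentials
kubectl rollout restart statefulset ossfs2-test
```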
Step 1: Create a RAM user and store credentials in a Secret
Create a RAM user and grant OSS permissions
1. Create a RAM user if you don't have one.

2. Create a custom policy to grant OSS access. Replace `mybucket` with your actual bucket name.

   - Read-only access

     ```json
     {
       "Statement": [
         {
           "Action": [
             "oss:Get*",
             "oss:List*"
           ],
           "Effect": "Allow",
           "Resource": [
             "acs:oss:*:*:mybucket",
             "acs:oss:*:*:mybucket/*"
           ]
         }
       ],
       "Version": "1"
     }
     ```

   - Read-write access

     ```json
     {
       "Statement": [
         {
           "Action": "oss:*",
           "Effect": "Allow",
           "Resource": [
             "acs:oss:*:*:mybucket",
             "acs:oss:*:*:mybucket/*"
           ]
         }
       ],
       "Version": "1"
     }
     ```

3. (Optional) If objects in the bucket are encrypted with a CMK in KMS, grant KMS access as well. See Encryption.

4. Create an AccessKey pair for the RAM user.
Store the AccessKey in a Kubernetes Secret
Run the following command. Replace the placeholder values with your actual AccessKey ID and Secret:

```shell
kubectl create -n default secret generic oss-secret \
  --from-literal='akId=<your-access-key-id>' \
  --from-literal='akSecret=<your-access-key-secret>'
```
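To confirm the Secret was stored as expected without echoing the full credentials, you can decode just the AccessKey ID. This is a quick optional check, not a required step:

```shell
# Prints only the AccessKey ID; the secret value stays hidden
kubectl get secret oss-secret -n default -o jsonpath='{.data.akId}' | base64 -d
```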
Step 2: Create a StorageClass
1. Create `ossfs2-sc.yaml`:

   ```yaml
   apiVersion: storage.k8s.io/v1
   kind: StorageClass
   metadata:
     name: ossfs2-sc
   parameters:
     csi.storage.k8s.io/node-publish-secret-name: oss-secret # The Secret created above.
     csi.storage.k8s.io/node-publish-secret-namespace: default # The namespace where the Secret lives.
     fuseType: ossfs2 # Must be ossfs2 to use the ossfs 2.0 client.
     bucket: cnfs-oss-test # The name of your OSS bucket.
     path: /subpath # The subdirectory to mount. Leave blank to mount the bucket root.
     url: oss-cn-hangzhou-internal.aliyuncs.com # Internal endpoint (same region or VPC); use public endpoint for cross-region.
     otherOpts: "-o close_to_open=false" # false (default): cache metadata for lower latency. true: fetch fresh metadata on every file open (increases latency and API costs).
   provisioner: ossplugin.csi.alibabacloud.com # Fixed value.
   reclaimPolicy: Retain # Only Retain is supported. Deleting the PVC does not delete the PV or OSS data.
   volumeBindingMode: Immediate # OSS volumes have no zone-based node affinity requirements.
   ```

   Secret configuration

   | Parameter | Required | Description |
   |---|---|---|
   | `csi.storage.k8s.io/node-publish-secret-name` | Yes | The name of the Secret that stores the AccessKey |
   | `csi.storage.k8s.io/node-publish-secret-namespace` | Yes | The namespace where the Secret is located |

   Volume configuration

   | Parameter | Required | Description | Default / behavior when omitted |
   |---|---|---|---|
   | `fuseType` | Yes | Must be set to `ossfs2` to use the ossfs 2.0 client | — |
   | `bucket` | Yes | The name of the OSS bucket to mount | — |
   | `path` | No | The mount path within the bucket, relative to the bucket root | Mounts the bucket root if left blank |
   | `url` | Yes | The OSS bucket endpoint. Use an internal endpoint (`http://oss-<region>-internal.aliyuncs.com`) for same-region or VPC-connected access, or a public endpoint (`http://oss-<region>.aliyuncs.com`) for cross-region access. Note: The `vpc100-oss-<region>.aliyuncs.com` internal endpoint format is deprecated. | — |
   | `otherOpts` | No | Additional mount options in the format `-o <flag> -o <flag>`. For all available options, see ossfs 2.0 mount options | No additional options |

2. Create the StorageClass:

   ```shell
   kubectl create -f ossfs2-sc.yaml
   ```
Step 3: Create a PVC
1. Create `ossfs2-pvc-dynamic.yaml`:

   ```yaml
   kind: PersistentVolumeClaim
   apiVersion: v1
   metadata:
     name: pvc-ossfs2
     namespace: default
   spec:
     accessModes:
       - ReadWriteMany
     resources:
       requests:
         storage: 20Gi
     storageClassName: ossfs2-sc
   ```

2. Create the PVC:

   ```shell
   kubectl create -f ossfs2-pvc-dynamic.yaml
   ```

3. Verify the PVC is bound:

   ```shell
   kubectl get pvc pvc-ossfs2
   ```

   Expected output:

   ```
   NAME         STATUS   VOLUME                   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
   pvc-ossfs2   Bound    d-bp17y03tpy2b8x******   20Gi       RWX            ossfs2-sc      25s
   ```
Step 4: Mount the volume in your application
1. Create `ossfs2-test.yaml`:

   ```yaml
   apiVersion: apps/v1
   kind: StatefulSet
   metadata:
     name: ossfs2-test
     namespace: default
   spec:
     replicas: 1
     selector:
       matchLabels:
         app: ossfs2-test
     template:
       metadata:
         labels:
           app: ossfs2-test
       spec:
         containers:
           - name: nginx
             image: anolis-registry.cn-zhangjiakou.cr.aliyuncs.com/openanolis/nginx:1.14.1-8.6
             ports:
               - containerPort: 80
             volumeMounts:
               - name: pvc-ossfs2
                 mountPath: /data
         volumes:
           - name: pvc-ossfs2
             persistentVolumeClaim:
               claimName: pvc-ossfs2
   ```

2. Deploy the application:

   ```shell
   kubectl create -f ossfs2-test.yaml
   ```

3. Verify the pod is running:

   ```shell
   kubectl get pod -l app=ossfs2-test
   ```

   Expected output:

   ```
   NAME            READY   STATUS    RESTARTS   AGE
   ossfs2-test-0   1/1     Running   0          2m
   ```

4. Verify read and write access:

   ```shell
   # Write a test file
   kubectl exec -it ossfs2-test-0 -- touch /data/test.txt
   # List files at the mount point
   kubectl exec -it ossfs2-test-0 -- ls /data
   ```

   The output should include `test.txt`, confirming the volume is mounted and writable.
Apply in production
| Category | Best practices |
|---|---|
| Security | Use RRSA — it provides temporary, auto-rotating credentials and pod-level permission isolation, eliminating the risk of static key exposure. Grant only the minimum permissions required (read-only or read-write, scoped to a specific bucket). |
| Performance | Use an internal endpoint when the cluster and bucket are in the same region to avoid public data transfer costs and reduce latency. For workloads that read many small files, keep close_to_open=false (default) to reduce metadata API calls. Set close_to_open=true only if another system frequently updates objects in the bucket and your pod needs immediate visibility. For additional tuning, see ossfs 2.0 mount options. |
| Workload fit | ossfs 2.0 is optimized for read-heavy and append-only workloads: AI training, inference, big data processing, and autonomous driving. It is not suitable for random-write workloads such as databases or collaborative editing. |
| Cost | Configure a lifecycle rule on your OSS bucket to automatically delete incomplete multipart uploads. This prevents accumulating storage costs from interrupted large file uploads. |
| Operations | Configure a liveness probe in your pods to check mount point availability — Kubernetes will restart the pod automatically if the mount fails. Set up container storage monitoring to track volume performance and receive alerts before issues affect your workload. |