Mount an Object Storage Service (OSS) bucket as a statically provisioned Persistent Volume (PV) to give your Pods persistent, shared storage with a POSIX file system interface. ossfs 2.0 is optimized for sequential reads and high-bandwidth workloads, making it a good fit for AI training, big data analytics, and static content serving.
For performance benchmarks, see ossfs 2.0 client performance benchmarks.
## How it works

Mounting an OSS bucket as a statically provisioned volume in a Container Service for Kubernetes (ACK) cluster involves four steps:

1. Choose an authentication method. Use RAM Roles for Service Accounts (RRSA) for production—it provides temporary, auto-rotating credentials and Pod-level permission isolation. Use an AccessKey for testing only, as it relies on a long-term static key.
2. Create a PV. Define a PV that registers your existing OSS bucket with the cluster, specifying the bucket name, endpoint, subdirectory, and authentication details.
3. Create a PVC. Create a Persistent Volume Claim (PVC) that binds to the PV you defined.
4. Deploy an application. Reference the PVC in your workload manifest to mount the OSS bucket into the container.
## Usage notes

- Supported workloads: ossfs 2.0 supports read-only and sequential-append write workloads. For random or concurrent writes, data consistency cannot be guaranteed—use ossfs 1.0 instead.
- Data safety: Changes made to files at the mount point—whether from inside the Pod or on the host node—are immediately synced to the OSS bucket. Enable versioning on the bucket to protect against accidental deletion.
- Health checks: Configure a liveness probe on Pods that use OSS volumes to verify that the mount point is accessible. Kubernetes automatically restarts the container if the probe fails, triggering a remount.
- Multipart uploads: ossfs automatically uses multipart upload for files larger than 10 MB. If an upload is interrupted, incomplete parts remain in the bucket. Delete these parts manually or configure a lifecycle rule to clean them up automatically.
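The liveness probe recommended above can be sketched as a container spec fragment. The `/data` mount path, probe command, and timing values are illustrative assumptions, not required settings:

```yaml
# Container spec fragment: restart the container when the OSS mount point
# stops responding, so that the volume is remounted.
livenessProbe:
  exec:
    command: ["ls", "/data"]  # fails if the FUSE mount is broken
  initialDelaySeconds: 30     # give the mount time to become ready
  periodSeconds: 30
  failureThreshold: 3
```

Any command that touches the mount point works; listing the mount directory is a cheap way to detect a disconnected FUSE process.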
## Method 1: Authenticate using RRSA (recommended)
RRSA authenticates Pods using temporary, auto-rotating credentials through OpenID Connect (OIDC) and Security Token Service (STS), and supports PV-level permission isolation. For background, see Use RRSA to authorize different pods to access different cloud services.
### Prerequisites

Before you begin, make sure you have:

- An ACK cluster running Kubernetes 1.26 or later. Upgrade the cluster if needed.
- CSI plugin version 1.33.1 or later. To upgrade, see Update csi-plugin and csi-provisioner. If your CSI version is earlier than 1.30.4 and you plan to use RRSA, see [Product Changes] Version upgrade and mounting process optimization of ossfs in CSI to configure RAM role authorization before proceeding.
- An OSS bucket in the same Alibaba Cloud account as your cluster. To mount an OSS bucket across accounts, use the RRSA authentication method. See FAQ about ossfs 2.0 volumes for details.
### Step 1: Create a RAM role

Skip this step if you have already mounted an OSS volume in the cluster using RRSA.

1. Enable the RRSA feature in the ACK console.
2. Create a RAM role for an OIDC identity provider. The following table lists the key parameters for the sample role `demo-role-for-rrsa`.

   | Parameter | Value |
   |---|---|
   | Identity provider type | Select OIDC. |
   | Identity provider | Select the provider associated with your cluster, for example, `ack-rrsa-<cluster_id>`. |
   | oidc:iss | Keep the default value. |
   | oidc:aud | Keep the default value. |
   | oidc:sub | Add a condition: Key = `oidc:sub`, Operator = `StringEquals`, Value = `system:serviceaccount:ack-csi-fuse:csi-fuse-ossfs`. `ack-csi-fuse` is the namespace where the ossfs client runs and cannot be changed. `csi-fuse-ossfs` is the ServiceAccount name and can be customized. |
   | Role name | `demo-role-for-rrsa` |

   To change the ServiceAccount name, see FAQ about ossfs 2.0 volumes.
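For reference, the trust policy generated for the parameters above typically has the following shape. This is a sketch, not the authoritative document: `<account_id>` stands in for your Alibaba Cloud account ID, and the exact policy produced by the console may differ:

```json
{
  "Statement": [
    {
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc:sub": "system:serviceaccount:ack-csi-fuse:csi-fuse-ossfs"
        }
      },
      "Effect": "Allow",
      "Principal": {
        "Federated": [
          "acs:ram::<account_id>:oidc-provider/ack-rrsa-<cluster_id>"
        ]
      }
    }
  ],
  "Version": "1"
}
```

The `oidc:sub` condition is what restricts the role to the ossfs client's ServiceAccount, which is the basis of the Pod-level isolation described earlier.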
### Step 2: Grant permissions to the RAM role

1. Create a custom policy to grant OSS access. For details, see Create custom policies. Replace `mybucket` with your actual bucket name.

   - Read-only policy

     ```json
     {
       "Statement": [
         {
           "Action": [
             "oss:Get*",
             "oss:List*"
           ],
           "Effect": "Allow",
           "Resource": [
             "acs:oss:*:*:mybucket",
             "acs:oss:*:*:mybucket/*"
           ]
         }
       ],
       "Version": "1"
     }
     ```

   - Read-write policy

     ```json
     {
       "Statement": [
         {
           "Action": "oss:*",
           "Effect": "Allow",
           "Resource": [
             "acs:oss:*:*:mybucket",
             "acs:oss:*:*:mybucket/*"
           ]
         }
       ],
       "Version": "1"
     }
     ```

2. (Optional) If objects in the bucket are encrypted with a customer master key (CMK) in Key Management Service (KMS), grant KMS access permissions. See Encryption for details.
3. Attach the policy to the `demo-role-for-rrsa` role. See Grant permissions to a RAM role.

   To use an existing RAM role that already has OSS access, modify its trust policy instead. See Use an existing RAM role.
### Step 3: Create a PV

1. Create `ossfs2-pv.yaml` with the following content.

   The following PV mounts the OSS bucket `cnfs-oss-test` as a 20 GiB read-only file system.

   ```yaml
   apiVersion: v1
   kind: PersistentVolume
   metadata:
     name: pv-ossfs2  # PV name
   spec:
     capacity:
       storage: 20Gi  # Used for PVC matching only; does not cap OSS capacity
     accessModes:
       - ReadOnlyMany
     persistentVolumeReclaimPolicy: Retain
     csi:
       driver: ossplugin.csi.alibabacloud.com
       volumeHandle: pv-ossfs2  # Must match metadata.name exactly
       volumeAttributes:
         fuseType: ossfs2  # Required: specifies the ossfs 2.0 client
         bucket: cnfs-oss-test  # OSS bucket name
         path: /subpath  # Subdirectory to mount; leave blank to mount the root
         url: oss-cn-hangzhou-internal.aliyuncs.com  # Internal endpoint (same region); use a public endpoint for cross-region
         # close_to_open=false (default): cache metadata for better small-file read performance
         # close_to_open=true: fetch fresh metadata on every file open (higher latency,
         # useful when another system frequently updates objects)
         otherOpts: "-o close_to_open=false"
         authType: "rrsa"  # Authentication method
         roleName: "demo-role-for-rrsa"  # RAM role created in Step 1
   ```

   Key parameters in `volumeAttributes`:

   | Parameter | Required | Description |
   |---|---|---|
   | `fuseType` | Yes | Must be `ossfs2` to use the ossfs 2.0 client. |
   | `bucket` | Yes | Name of the OSS bucket to mount. |
   | `path` | No | Subdirectory within the bucket. Defaults to the root if left blank. |
   | `url` | Yes | OSS endpoint. Use an internal endpoint when the cluster and bucket are in the same region (or connected via Virtual Private Cloud (VPC)); use a public endpoint for cross-region access. Internal format: `http(s)://oss-{region}-internal.aliyuncs.com`. Public format: `http(s)://oss-{region}.aliyuncs.com`. The `vpc100-oss-{region}.aliyuncs.com` internal endpoint format is deprecated—switch to the new format. |
   | `otherOpts` | No | Additional mount options in the format `-o <option> -o <option>`. For all available options, see ossfs 2.0 mount options. |
   | `authType` | Yes | Set to `rrsa`. |
   | `roleName` | Yes | Name of the RAM role. To assign different permissions to different PVs, create a separate RAM role for each and reference it here. |

   To use specified ARNs or a custom ServiceAccount with RRSA, see How do I use specified ARNs or a ServiceAccount with the RRSA authentication method?

2. Apply the manifest.

   ```shell
   kubectl create -f ossfs2-pv.yaml
   ```

3. Verify the PV is available.

   ```shell
   kubectl get pv pv-ossfs2
   ```

   Expected output:

   ```
   NAME        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
   pv-ossfs2   20Gi       ROX            Retain           Available                          <unset>                          15s
   ```
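If objects in the bucket are frequently updated by another system, the metadata-caching trade-off described for `close_to_open` can be flipped. A minimal sketch, changing only this one attribute while all other PV fields stay the same:

```yaml
# spec.csi.volumeAttributes fragment: prioritize metadata freshness over latency
otherOpts: "-o close_to_open=true"  # fetch fresh metadata on every file open
```

This costs extra latency on each open, so keep the default `false` unless stale metadata is an actual problem for the workload.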
### Step 4: Create a PVC

1. Create `ossfs2-pvc-static.yaml` with the following content.

   ```yaml
   kind: PersistentVolumeClaim
   apiVersion: v1
   metadata:
     name: pvc-ossfs2  # PVC name
     namespace: default
   spec:
     accessModes:
       - ReadOnlyMany  # Must match the PV
     resources:
       requests:
         storage: 20Gi  # Must match the PV
     volumeName: pv-ossfs2  # Bind to this specific PV
   ```

2. Create the PVC.

   ```shell
   kubectl create -f ossfs2-pvc-static.yaml
   ```

3. Verify the PVC is bound.

   ```shell
   kubectl get pvc pvc-ossfs2
   ```

   Expected output:

   ```
   NAME         STATUS   VOLUME      CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
   pvc-ossfs2   Bound    pv-ossfs2   20Gi       ROX                           <unset>                 6s
   ```
### Step 5: Deploy an application

1. Create `ossfs2-test.yaml` to define a StatefulSet that mounts the PVC at `/data`.

   ```yaml
   apiVersion: apps/v1
   kind: StatefulSet
   metadata:
     name: ossfs2-test
     namespace: default
   spec:
     replicas: 1
     selector:
       matchLabels:
         app: ossfs2-test
     template:
       metadata:
         labels:
           app: ossfs2-test
       spec:
         containers:
           - name: nginx
             image: anolis-registry.cn-zhangjiakou.cr.aliyuncs.com/openanolis/nginx:1.14.1-8.6
             ports:
               - containerPort: 80
             volumeMounts:
               - name: pvc-ossfs2
                 mountPath: /data
         volumes:
           - name: pvc-ossfs2
             persistentVolumeClaim:
               claimName: pvc-ossfs2
   ```

2. Deploy the application.

   ```shell
   kubectl create -f ossfs2-test.yaml
   ```

3. Wait for the Pod to be running.

   ```shell
   kubectl get pod -l app=ossfs2-test
   ```

   Expected output:

   ```
   NAME            READY   STATUS    RESTARTS   AGE
   ossfs2-test-0   1/1     Running   0          12m
   ```

4. Verify the OSS bucket is mounted.

   ```shell
   kubectl exec -it ossfs2-test-0 -- ls /data
   ```

   The output should show the data in the OSS mount path.
## Method 2: Authenticate using an AccessKey
Store an AccessKey pair in a Kubernetes Secret and reference it from the PV. This approach is simple to configure but uses a long-term static key.
If the AccessKey is revoked or its permissions change, all Pods using the volume immediately lose access. To restore access, update the Secret with new credentials and restart the affected Pods—this causes a brief service interruption. For production environments, use Method 1: Authenticate using RRSA to avoid this operational overhead.
### Prerequisites

Before you begin, make sure you have:

- An ACK cluster with CSI plugin version 1.33.1 or later. To upgrade, see Update csi-plugin and csi-provisioner.
- An OSS bucket in the same Alibaba Cloud account as your cluster. To mount an OSS bucket across accounts, use RRSA. See FAQ about ossfs 2.0 volumes.
### Step 1: Create a RAM user and store the AccessKey

1. Create a RAM user. If you already have one, skip this step. See Create a RAM user.
2. Create a custom policy to grant OSS access. See Create custom policies. Replace `mybucket` with your actual bucket name.

   - Read-only policy

     ```json
     {
       "Statement": [
         {
           "Action": [
             "oss:Get*",
             "oss:List*"
           ],
           "Effect": "Allow",
           "Resource": [
             "acs:oss:*:*:mybucket",
             "acs:oss:*:*:mybucket/*"
           ]
         }
       ],
       "Version": "1"
     }
     ```

   - Read-write policy

     ```json
     {
       "Statement": [
         {
           "Action": "oss:*",
           "Effect": "Allow",
           "Resource": [
             "acs:oss:*:*:mybucket",
             "acs:oss:*:*:mybucket/*"
           ]
         }
       ],
       "Version": "1"
     }
     ```

3. (Optional) If objects in the bucket are encrypted with a CMK in KMS, grant KMS access. See Encryption.
4. Attach the policy to the RAM user. See Grant permissions to a RAM user.
5. Create an AccessKey pair for the RAM user. See Create an AccessKey pair.
6. Store the AccessKey pair as a Kubernetes Secret. Replace `xxxxxx` with the actual values.

   ```shell
   kubectl create -n default secret generic oss-secret \
     --from-literal='akId=xxxxxx' \
     --from-literal='akSecret=xxxxxx'
   ```
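If you prefer to manage the Secret declaratively, an equivalent manifest can be sketched as follows. It assumes the same Secret name and key names (`akId`, `akSecret`) that the PV's `nodePublishSecretRef` expects:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: oss-secret
  namespace: default
type: Opaque
stringData:       # stringData avoids manual base64 encoding
  akId: xxxxxx    # replace with the RAM user's AccessKey ID
  akSecret: xxxxxx  # replace with the RAM user's AccessKey secret
```

Avoid committing this file to version control, since it contains the long-term credential in plain text.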
### Step 2: Create a PV

1. Create `ossfs2-pv-ak.yaml` with the following content.

   The following PV mounts the OSS bucket `cnfs-oss-test` as a 20 GiB read-only file system.

   ```yaml
   apiVersion: v1
   kind: PersistentVolume
   metadata:
     name: pv-ossfs2  # PV name
   spec:
     capacity:
       storage: 20Gi  # Used for PVC matching only
     accessModes:
       - ReadOnlyMany
     persistentVolumeReclaimPolicy: Retain
     csi:
       driver: ossplugin.csi.alibabacloud.com
       volumeHandle: pv-ossfs2  # Must match metadata.name exactly
       nodePublishSecretRef:
         name: oss-secret  # Secret created in Step 1
         namespace: default
       volumeAttributes:
         fuseType: ossfs2  # Required: specifies the ossfs 2.0 client
         bucket: cnfs-oss-test  # OSS bucket name
         path: /subpath  # Subdirectory to mount; leave blank to mount the root
         url: oss-cn-hangzhou-internal.aliyuncs.com  # Internal endpoint (same region); use a public endpoint for cross-region
         # close_to_open=false (default): cache metadata for better small-file read performance
         # close_to_open=true: fetch fresh metadata on every file open (higher latency)
         otherOpts: "-o close_to_open=false"
   ```

   Parameters in `nodePublishSecretRef`:

   | Parameter | Required | Description |
   |---|---|---|
   | `name` | Yes | Name of the Secret that stores the AccessKey pair. |
   | `namespace` | Yes | Namespace where the Secret is located. |

   The parameters in `volumeAttributes` are the same as in Method 1, Step 3, except that `authType` and `roleName` are not used.

2. Apply the manifest.

   ```shell
   kubectl create -f ossfs2-pv-ak.yaml
   ```

3. Verify the PV is available.

   ```shell
   kubectl get pv pv-ossfs2
   ```

   Expected output:

   ```
   NAME        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
   pv-ossfs2   20Gi       ROX            Retain           Available                          <unset>                          15s
   ```
### Step 3: Create a PVC

1. Create `ossfs2-pvc-static.yaml`.

   ```yaml
   kind: PersistentVolumeClaim
   apiVersion: v1
   metadata:
     name: pvc-ossfs2
     namespace: default
   spec:
     accessModes:
       - ReadOnlyMany
     resources:
       requests:
         storage: 20Gi
     volumeName: pv-ossfs2
   ```

2. Create the PVC.

   ```shell
   kubectl create -f ossfs2-pvc-static.yaml
   ```

3. Verify the PVC is bound.

   ```shell
   kubectl get pvc pvc-ossfs2
   ```

   Expected output:

   ```
   NAME         STATUS   VOLUME      CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
   pvc-ossfs2   Bound    pv-ossfs2   20Gi       ROX                           <unset>                 6s
   ```
### Step 4: Deploy an application

Follow the same steps as Method 1, Step 5.
## Apply in production
| Category | Recommendation |
|---|---|
| Security | Use RRSA for all production workloads. It provides temporary, auto-rotating credentials via OIDC and STS, and enables fine-grained, Pod-level permission isolation. |
| Least privilege | Grant only the permissions the application needs—read-only or read-write—scoped to the specific bucket. |
| Endpoint | Use an internal endpoint when the cluster and bucket are in the same region to avoid public data transfer costs and reduce latency. |
| Mount options | Use -o close_to_open=false (default) to cache metadata and reduce latency for small-file reads. Switch to -o close_to_open=true only when Pods need to see updates from another writer immediately. |
| Workload fit | ossfs 2.0 is well-suited for AI training, inference, big data processing, and autonomous driving workloads. It is not suitable for workloads that require random writes, such as databases or collaborative editing tools. |
| Incomplete uploads | Configure a lifecycle rule on the bucket to automatically delete incomplete multipart upload parts to avoid unnecessary storage costs. |
| Health checks | Configure a liveness probe on each Pod to check mount point availability. If the mount fails, Kubernetes restarts the Pod and triggers a remount. |
| Monitoring | Use container storage monitoring to track volume performance and set up alerts to catch issues early. |
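The lifecycle rule recommended for incomplete multipart uploads can be sketched as an OSS lifecycle configuration. The rule ID, empty prefix (which applies the rule to the whole bucket), and 7-day window are illustrative choices, not required values:

```xml
<!-- Lifecycle configuration sketch: abort multipart uploads that have been
     incomplete for more than 7 days, so their parts stop accruing storage costs. -->
<LifecycleConfiguration>
  <Rule>
    <ID>abort-incomplete-uploads</ID>
    <Prefix></Prefix>
    <Status>Enabled</Status>
    <AbortMultipartUpload>
      <Days>7</Days>
    </AbortMultipartUpload>
  </Rule>
</LifecycleConfiguration>
```

You can also create an equivalent rule in the OSS console without writing XML; either way, pick a window longer than your longest expected upload.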