For applications that require persistent storage or data sharing between pods, you can mount an Object Storage Service (OSS) bucket as an ossfs 2.0 volume using a statically provisioned Persistent Volume (PV) and a Persistent Volume Claim (PVC). This approach allows application containers to read and write data in OSS using standard POSIX interfaces, just like a local filesystem. It is ideal for scenarios such as big data analytics, AI training, and serving static content.
Compared to ossfs 1.0, ossfs 2.0 excels in sequential read and write performance, making it ideal for leveraging the high bandwidth of OSS.
For details on performance, see ossfs 2.0 client performance benchmarks.
Workflow overview
The process for mounting a statically provisioned ossfs 2.0 volume in a Container Service for Kubernetes (ACK) cluster is as follows: configure authentication (RRSA or an AccessKey stored in a Secret), create a PV that registers the OSS bucket, create a PVC that binds to the PV, and mount the PVC in your application.
Considerations
Workload suitability: ossfs 2.0 is designed for read-only and sequential-append write scenarios. For random or concurrent write scenarios, data consistency cannot be guaranteed. We recommend using ossfs 1.0 for these cases.
Data safety: Any modification or deletion of files in an ossfs mount point (either from within the pod or on the host node) is immediately synchronized with the source OSS bucket. To prevent accidental data loss, we recommend enabling versioning for the bucket.
Application health checks: Configure a health check (liveness probe) for pods that use OSS volumes, for example, by verifying that the mount point is still accessible. If the mount becomes unhealthy, the pod is automatically restarted to restore connectivity. See the probe sketch after this list.
Multipart upload management: When uploading large files (> 10 MB), ossfs automatically uses multipart uploads. If an upload is interrupted, incomplete parts will remain in the bucket. Manually delete these parts or configure a lifecycle rule to automatically clean up these parts to save storage costs.
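As a sketch of the recommended health check (assuming the volume is mounted at /data, as in the example application later on this page; timing values are illustrative):

```yaml
# Container spec fragment: restart the container if the OSS mount point
# becomes inaccessible. Adjust the path and timing to your workload.
livenessProbe:
  exec:
    command: ["ls", "/data"]
  initialDelaySeconds: 10
  periodSeconds: 30
  timeoutSeconds: 10
  failureThreshold: 3
```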
Method 1: Authenticate using RRSA
By leveraging RRSA, you can achieve fine-grained, PV-level permission isolation for accessing cloud resources, which significantly reduces security risks. For details, see Use RRSA to authorize different pods to access different cloud services.
Prerequisites
You have an ACK cluster running Kubernetes 1.26 or later. Manually upgrade the cluster if needed.
The Container Storage Interface (CSI) plugin version is 1.33.1 or later. To upgrade it, see Update csi-plugin and csi-provisioner.
If you use RRSA with a csi-plugin version earlier than 1.30.4, see [Product Changes] Version upgrade and mounting process optimization of ossfs in CSI to configure the RAM role authorization.
You have created an OSS bucket in the same Alibaba Cloud account as your cluster.
To mount an OSS bucket across accounts, we recommend using the RRSA authentication method. For details, see FAQ about ossfs 2.0 volumes.
Step 1: Create a RAM role
If you have mounted an OSS volume in the cluster using RRSA, skip this step. If this is your first time, follow these steps:
Enable the RRSA feature in the ACK console.
Create a RAM role for an OIDC IdP. This role will be assumed using RRSA.
The following table lists the key parameters for the sample role `demo-role-for-rrsa`:

| Parameter | Description |
| --- | --- |
| Identity Provider Type | Select OIDC. |
| Identity Provider | Select the provider associated with your cluster, such as `ack-rrsa-<cluster_id>`, where `<cluster_id>` is your actual cluster ID. |
| Condition | `oidc:iss`: Keep the default value.<br>`oidc:aud`: Keep the default value.<br>`oidc:sub`: Manually add the following condition:<br>• Key: Select `oidc:sub`.<br>• Operator: Select `StringEquals`.<br>• Value: Enter `system:serviceaccount:ack-csi-fuse:csi-fuse-ossfs`. In this value, `ack-csi-fuse` is the namespace where the ossfs client is located and cannot be customized; `csi-fuse-ossfs` is the service account name and can be changed as needed. For more information about how to modify the service account name, see FAQ about ossfs 2.0 volumes. |
| Role Name | Enter `demo-role-for-rrsa`. |
Step 2: Grant permissions to the demo-role-for-rrsa role
Create a custom policy that defines the required OSS access permissions. For more information, see Create custom policies.
Select the read-only policy or read-write policy based on your business requirements. Replace `mybucket` with the name of the bucket you created (a sample policy sketch follows this list).
- Policy that provides read-only permissions on OSS
- Policy that provides read-write permissions on OSS
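For reference, a minimal read-only policy might look like the following sketch (the action list is illustrative; verify it against Create custom policies):

```json
{
  "Version": "1",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "oss:GetObject",
        "oss:ListObjects"
      ],
      "Resource": [
        "acs:oss:*:*:mybucket",
        "acs:oss:*:*:mybucket/*"
      ]
    }
  ]
}
```

A read-write policy additionally includes write actions such as `oss:PutObject`, `oss:DeleteObject`, and the multipart-upload-related actions.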
Optional. If the objects in the OSS bucket are encrypted by using a specified customer master key (CMK) in Key Management Service (KMS), you need to grant KMS access permissions to the RAM role. For more information, see Encryption.
Grant the required permissions to the demo-role-for-rrsa role. For more information, see Grant permissions to a RAM role.
Note: If you want to use an existing RAM role that has OSS access permissions, you can modify the trust policy of the RAM role so that it trusts the cluster's OIDC provider, as sketched below. For more information, see Use an existing RAM role and grant the required permissions to the RAM role.
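As a rough sketch, the trust policy of a role used with RRSA typically takes the following shape (the account ID and cluster ID are placeholders; confirm the exact document against your RAM console and the linked guide):

```json
{
  "Version": "1",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Principal": {
        "Federated": [
          "acs:ram::<account_id>:oidc-provider/ack-rrsa-<cluster_id>"
        ]
      },
      "Condition": {
        "StringEquals": {
          "oidc:aud": "sts.aliyuncs.com",
          "oidc:sub": "system:serviceaccount:ack-csi-fuse:csi-fuse-ossfs"
        }
      }
    }
  ]
}
```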
Step 3: Create a PV
Create a PV to register an existing OSS bucket in the cluster.
Create a file named `ossfs2-pv.yaml` with the following content. This PV definition tells Kubernetes how to access your OSS bucket using the RRSA role.
The following PV mounts an OSS bucket named `cnfs-oss-test` as a 20 GB read-only file system for pods in the cluster to use.

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-ossfs2 # PV name
spec:
  capacity:
    storage: 20Gi # Define the volume capacity (this value is only for matching PVCs)
  accessModes: # Access mode
    - ReadOnlyMany
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: ossplugin.csi.alibabacloud.com
    volumeHandle: pv-ossfs2 # Must be the same as the PV name (metadata.name)
    volumeAttributes:
      fuseType: ossfs2
      bucket: cnfs-oss-test # The name of the OSS bucket
      path: /subpath # The subdirectory to mount. Leave it empty to mount the root directory.
      url: oss-cn-hangzhou-internal.aliyuncs.com # The endpoint of the region where the OSS bucket is located
      otherOpts: "-o close_to_open=false"
      authType: "rrsa"
      roleName: "demo-role-for-rrsa" # The RAM role created earlier
```
Parameters in `volumeAttributes`:

| Parameter | Required | Description |
| --- | --- | --- |
| fuseType | Yes | Specifies the client to use for the mount. Must be set to `ossfs2` to use the ossfs 2.0 client. |
| bucket | Yes | The name of the OSS bucket you want to mount. |
| path | No | The subdirectory within the OSS bucket to mount. If not specified, the root of the bucket is mounted by default. |
| url | Yes | The access endpoint for the OSS bucket.<br>• Use an internal endpoint if your cluster nodes and the bucket are in the same region, or if a Virtual Private Cloud (VPC) connection is established.<br>• Use a public endpoint if the mount node and the bucket are in different regions.<br>The following are common formats for different access endpoints:<br>• Internal: `http://oss-{{regionName}}-internal.aliyuncs.com` or `https://oss-{{regionName}}-internal.aliyuncs.com`. The internal endpoint format `vpc100-oss-{{regionName}}.aliyuncs.com` is deprecated; switch to the new format as soon as possible.<br>• Public: `http://oss-{{regionName}}.aliyuncs.com` or `https://oss-{{regionName}}.aliyuncs.com`. |
| otherOpts | No | Additional mount options passed to the OSS volume, specified as a string of space-separated flags in the format `-o *** -o ***`. For example, `-o close_to_open=false`.<br>The `close_to_open` option controls metadata consistency. When set to `false` (the default), metadata is cached to improve performance for small-file reads. When set to `true`, the client fetches fresh metadata from OSS by sending a GetObjectMeta request every time a file is opened, ensuring real-time consistency at the cost of increased latency and API calls.<br>For more optional parameters, see ossfs 2.0 mount options. |
| authType | Yes | Set to `rrsa` to declare that the RRSA authentication method is used. |
| roleName | Yes | Set to the name of the RAM role you created or modified.<br>To assign different permissions for different PVs, create a separate RAM role for each permission set. Then, in each PV's definition, use the `roleName` parameter to associate it with the corresponding role, as shown in the sketch after this table.<br>To use specified ARNs or a ServiceAccount with the RRSA authentication method, see How do I use specified ARNs or a ServiceAccount with the RRSA authentication method? |
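For example, the following `csi.volumeAttributes` fragments (role names are hypothetical) show two PVs bound to roles with different permission sets:

```yaml
# Fragment: PV for read-only workloads
volumeAttributes:
  fuseType: ossfs2
  bucket: cnfs-oss-test
  url: oss-cn-hangzhou-internal.aliyuncs.com
  authType: "rrsa"
  roleName: "demo-role-readonly" # Role attached to the read-only policy
---
# Fragment: PV for read-write workloads
volumeAttributes:
  fuseType: ossfs2
  bucket: cnfs-oss-test
  url: oss-cn-hangzhou-internal.aliyuncs.com
  authType: "rrsa"
  roleName: "demo-role-readwrite" # Role attached to the read-write policy
```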
Apply the PV definition.

```
kubectl create -f ossfs2-pv.yaml
```

Verify the PV status.

```
kubectl get pv pv-ossfs2
```

The output shows that the PV status is Available.

```
NAME        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
pv-ossfs2   20Gi       ROX            Retain           Available                          <unset>                          15s
```
Step 4: Create a PVC
Create a PVC to declare the persistent storage capacity required by the application.
Create a file named `ossfs2-pvc-static.yaml` with the following YAML content.

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-ossfs2 # PVC name
  namespace: default
spec:
  # The following configuration must be consistent with the PV
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 20Gi
  volumeName: pv-ossfs2 # The PV to bind
```

Create the PVC.

```
kubectl create -f ossfs2-pvc-static.yaml
```

Check the PVC status.

```
kubectl get pvc pvc-ossfs2
```

The output shows that the PVC is bound to the PV.

```
NAME         STATUS   VOLUME      CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
pvc-ossfs2   Bound    pv-ossfs2   20Gi       ROX                           <unset>                 6s
```
Step 5: Create an application and mount the volume
You can now mount the PVC to an application.
Create a file named `ossfs2-test.yaml` to define your application.
The following YAML template creates a StatefulSet that contains one pod. The pod requests storage resources through a PVC named `pvc-ossfs2` and mounts the volume to the `/data` path.
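A minimal template matching this description might look as follows (the nginx image is a placeholder assumption; substitute your application image):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ossfs2-test
  namespace: default
spec:
  serviceName: ossfs2-test
  replicas: 1
  selector:
    matchLabels:
      app: ossfs2-test
  template:
    metadata:
      labels:
        app: ossfs2-test
    spec:
      containers:
        - name: app
          image: nginx:stable # Placeholder image; use your own application image
          volumeMounts:
            - name: oss-volume
              mountPath: /data # Mount the OSS volume to /data
      volumes:
        - name: oss-volume
          persistentVolumeClaim:
            claimName: pvc-ossfs2 # The PVC created in the previous step
```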
Deploy the application.

```
kubectl create -f ossfs2-test.yaml
```

Check the pod deployment status.

```
kubectl get pod -l app=ossfs2-test
```

Expected output:

```
NAME            READY   STATUS    RESTARTS   AGE
ossfs2-test-0   1/1     Running   0          12m
```

Verify that the application can access the data in OSS.

```
kubectl exec -it ossfs2-test-0 -- ls /data
```

The output should show the data in the OSS mount path.
Method 2: Authenticate using an AccessKey
ACK supports authenticating OSS volume mounts by storing a static AccessKey in a Kubernetes Secret. This approach is suitable for scenarios where a specific application requires long-term, fixed access permissions.
If the AccessKey referenced by the PV is revoked or its permissions are changed, any application using the volume will immediately lose access and encounter permission errors. To restore access, update the credentials in the Secret, then restart the application's pods to force a remount. This process causes a brief service interruption and should only be performed during a scheduled maintenance window.
To avoid the downtime associated with manual key rotation, we strongly recommend using the RRSA authentication method instead.
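As a sketch of the rotation flow described above (the Secret and StatefulSet names follow the examples on this page; the new credentials are placeholders):

```bash
# Re-render the Secret with the new AccessKey pair and apply it in place
kubectl create -n default secret generic oss-secret \
  --from-literal='akId=<new-AccessKey-ID>' \
  --from-literal='akSecret=<new-AccessKey-secret>' \
  --dry-run=client -o yaml | kubectl apply -f -

# Restart the workload so its pods remount the volume with the new credentials
kubectl rollout restart statefulset ossfs2-test
```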
Prerequisites
You have an ACK cluster with CSI V1.33.1 or later installed. To upgrade the component, see Update csi-plugin and csi-provisioner.
You have an OSS bucket in the same Alibaba Cloud account as your cluster.
To mount an OSS bucket across accounts, we recommend using RRSA for authentication. For details, see FAQ about ossfs 2.0 volumes.
Step 1: Create a RAM user with OSS access permissions and obtain an AccessKey pair
Create a RAM user and grant permissions
Create a RAM user. You can skip this step if you have an existing RAM user. For more information about how to create a RAM user, see Create a RAM user.
Create a custom policy to grant OSS access permissions to the RAM user. For more information, see Create custom policies.
Select the read-only policy or read-write policy based on your business requirements. Replace `mybucket` with the name of the bucket you created. For a sample policy document, see the sketch in Method 1.
- Policy that provides read-only permissions on OSS
- Policy that provides read-write permissions on OSS
Optional. If the objects in the OSS bucket are encrypted by using a specified customer master key (CMK) in Key Management Service (KMS), you need to grant KMS access permissions to the RAM user. For more information, see Encryption.
Grant OSS access permissions to the RAM user. For more information, see Grant permissions to a RAM user.
Create an AccessKey pair for the RAM user. For more information, see Create an AccessKey pair.
Create a Secret to store the AccessKey credentials
Run the following command to create a Secret for OSS authentication. Replace akId and akSecret with your actual credentials.
```
kubectl create -n default secret generic oss-secret --from-literal='akId=xxxxxx' --from-literal='akSecret=xxxxxx'
```

Step 2: Create a PV
Register an existing OSS bucket in the cluster by creating a PV.
Create a file named `ossfs2-pv-ak.yaml` with the following content. This PV definition includes a `nodePublishSecretRef` that points to the Secret you created.
The following PV mounts an OSS bucket named `cnfs-oss-test` as a 20 GB read-only file system for pods in the cluster to use.

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-ossfs2 # PV name
spec:
  capacity:
    storage: 20Gi # Define the volume capacity (this value is only for matching PVCs)
  accessModes: # Access mode
    - ReadOnlyMany
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: ossplugin.csi.alibabacloud.com
    volumeHandle: pv-ossfs2 # Must be the same as the PV name (metadata.name)
    # Use the Secret created earlier
    nodePublishSecretRef:
      name: oss-secret # The name of the Secret that stores the AccessKey information
      namespace: default # The namespace where the Secret is located
    volumeAttributes:
      fuseType: ossfs2
      bucket: cnfs-oss-test # The name of the OSS bucket
      path: /subpath # The subdirectory to mount. Leave it empty to mount the root directory.
      url: oss-cn-hangzhou-internal.aliyuncs.com # The endpoint of the region where the OSS bucket is located
      otherOpts: "-o close_to_open=false"
```
Parameters in `nodePublishSecretRef`:

| Parameter | Required | Description |
| --- | --- | --- |
| name | Yes | The name of the Secret that stores the AccessKey information. |
| namespace | Yes | The namespace where the Secret that stores the AccessKey information is located. |
Parameters in `volumeAttributes`:

| Parameter | Required | Description |
| --- | --- | --- |
| fuseType | Yes | Specifies the client to use for the mount. Must be set to `ossfs2` to use the ossfs 2.0 client. |
| bucket | Yes | The name of the OSS bucket you want to mount. |
| path | No | The subdirectory within the OSS bucket to mount, relative to the bucket root. If not specified, the root directory is mounted by default. |
| url | Yes | The endpoint of the OSS bucket you want to mount. You can retrieve the endpoint from the Overview page of the bucket in the OSS console.<br>• If the bucket is mounted to a node in the same region as the bucket, or the bucket can be reached from the node through a virtual private cloud (VPC), use the internal endpoint of the bucket.<br>• If the bucket is mounted to a node in a different region, use the public endpoint of the bucket.<br>Public endpoints and internal endpoints have different formats:<br>• Internal: `http://oss-{{regionName}}-internal.aliyuncs.com` or `https://oss-{{regionName}}-internal.aliyuncs.com`.<br>• Public: `http://oss-{{regionName}}.aliyuncs.com` or `https://oss-{{regionName}}.aliyuncs.com`.<br>Important: The `vpc100-oss-{{regionName}}.aliyuncs.com` format for internal endpoints is deprecated. |
| otherOpts | No | Custom parameters for the OSS volume in the format `-o *** -o ***`, for example, `-o close_to_open=false`.<br>`close_to_open`: Disabled by default. If enabled, the system sends a GetObjectMeta request to OSS each time a file is opened to get the latest metadata. This ensures metadata is always up to date; however, in scenarios with many small-file reads, frequent metadata queries can significantly increase access latency.<br>For more optional parameters, see ossfs 2.0 mount options. |
Apply the PV definition.

```
kubectl create -f ossfs2-pv-ak.yaml
```

Check the PV status.

```
kubectl get pv pv-ossfs2
```

The following output shows that the PV is Available:

```
NAME        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
pv-ossfs2   20Gi       ROX            Retain           Available                          <unset>                          15s
```
Step 3: Create a PVC
Create a PVC to declare the persistent storage capacity required by the application.
Create a file named `ossfs2-pvc-static.yaml` to claim the PV.

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-ossfs2 # PVC name
  namespace: default
spec:
  # The following configuration must be consistent with the PV
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 20Gi
  volumeName: pv-ossfs2 # The PV to bind
```

Create the PVC.

```
kubectl create -f ossfs2-pvc-static.yaml
```

Verify that the PVC is Bound to the PV.

```
kubectl get pvc pvc-ossfs2
```

Expected output:

```
NAME         STATUS   VOLUME      CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
pvc-ossfs2   Bound    pv-ossfs2   20Gi       ROX                           <unset>                 6s
```
Step 4: Create an application and mount the volume
You can now mount the PVC to an application.
Create a file named `ossfs2-test.yaml` to define your application.
The following YAML template creates a StatefulSet that contains one pod. The pod requests storage resources through a PVC named `pvc-ossfs2` and mounts the volume to the `/data` path. You can reuse the StatefulSet template sketched in Method 1.
Deploy the application.

```
kubectl create -f ossfs2-test.yaml
```

Check the pod deployment status.

```
kubectl get pod -l app=ossfs2-test
```

Expected output:

```
NAME            READY   STATUS    RESTARTS   AGE
ossfs2-test-0   1/1     Running   0          12m
```

Verify that the application can access the data in OSS.

```
kubectl exec -it ossfs2-test-0 -- ls /data
```

The output should show the data in the OSS mount path.
Apply in production
| Category | Best practices |
| --- | --- |
| Security and permissions | Prefer RRSA authentication over long-lived AccessKeys, and create a separate RAM role for each permission set to isolate permissions at the PV level. Grant least-privilege (read-only or read-write) policies scoped to the target bucket. |
| Performance and cost | Use an internal endpoint when the cluster and bucket are in the same region or connected through a VPC. Keep `close_to_open=false` for workloads that read many small files and do not require real-time metadata consistency. Configure a lifecycle rule to clean up incomplete multipart uploads. |
| O&M management | Configure liveness probes that check mount-point accessibility. Enable versioning on the bucket to guard against accidental modification or deletion. Rotate AccessKeys by updating the Secret and restarting pods during a scheduled maintenance window. |