For applications that require persistent storage or data sharing between pods, you can mount an Object Storage Service (OSS) bucket as an ossfs 2.0 volume using a dynamically provisioned persistent volume (PV). This approach uses a StorageClass as a template to automatically create and bind a PV, which not only simplifies storage management but also allows applications to read and write data in OSS using standard POSIX interfaces, just like a local filesystem.
Compared to ossfs 1.0, ossfs 2.0 excels in sequential read and write performance, making it ideal for leveraging the high bandwidth of OSS.
For details on performance, see ossfs 2.0 client performance benchmarks.
Workflow overview
The following figure illustrates the main workflow for mounting a dynamically provisioned ossfs 2.0 volume in a Container Service for Kubernetes (ACK) cluster.
(Figure: dynamic provisioning workflow — a StorageClass template drives automatic PV creation, PVC binding, and the pod mount.)
Considerations
Workload suitability: ossfs 2.0 is designed for read-only and sequential-append write scenarios. For random or concurrent write scenarios, data consistency cannot be guaranteed. We recommend using ossfs 1.0 for these cases.
Data safety: Any modification or deletion of files in an ossfs mount point (either from within the pod or on the host node) is immediately synchronized with the source OSS bucket. To prevent accidental data loss, we recommend enabling versioning for the bucket.
Application health checks: Configure a health check (liveness probe) for pods that use OSS volumes, for example, by verifying that the mount point is still accessible. If the mount becomes unhealthy, the pod is automatically restarted to restore connectivity. A minimal probe sketch follows this list.
Multipart upload management: When uploading large files (> 10 MB), ossfs automatically uses multipart upload. If an upload is interrupted, incomplete parts remain in the bucket. Manually delete these parts, or configure a lifecycle rule that cleans them up automatically to save storage costs.
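For the health-check consideration above, the probe can simply verify that the mount path is listable. The following is a minimal sketch, assuming the `/data` mount path used later in this topic; the container name and image are illustrative placeholders:

```yaml
# Add to the container spec of a workload that mounts the OSS volume.
containers:
- name: app                      # Hypothetical container name.
  image: nginx:stable            # Placeholder image; use your application image.
  volumeMounts:
  - name: oss-volume
    mountPath: /data
  livenessProbe:
    exec:
      command: ["ls", "/data"]   # Fails if the FUSE mount point becomes inaccessible.
    initialDelaySeconds: 10
    periodSeconds: 30
    failureThreshold: 3
```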
Method 1: Authenticate using RRSA
By leveraging RRSA, you can achieve fine-grained, PV-level permission isolation for accessing cloud resources, which significantly reduces security risks. For details, see Use RRSA to authorize different pods to access different cloud services.
Prerequisites
You have an ACK cluster running Kubernetes 1.26 or later. Manually upgrade the cluster if needed.
The Container Storage Interface (CSI) plugin version is 1.33.1 or later. To upgrade it, see Update csi-plugin and csi-provisioner.
If you use RRSA with a CSI plugin version earlier than 1.30.4, see [Product Changes] Version upgrade and mounting process optimization of ossfs in CSI to configure the RAM role authorization.
You have created an OSS bucket in the same Alibaba Cloud account as your cluster.
To mount an OSS bucket across accounts, we recommend using the RRSA authentication method. For details, see FAQ about ossfs 2.0 volumes.
Step 1: Create a RAM role
If you have mounted an OSS volume in the cluster using RRSA, skip this step. If this is your first time, follow these steps:
Enable the RRSA feature in the ACK console.
Create a RAM role for an OIDC IdP. This role will be assumed using RRSA.
The following table lists the key parameters for the sample role `demo-role-for-rrsa`:

| Parameter | Description |
| --- | --- |
| Identity Provider Type | Select OIDC. |
| Identity Provider | Select the provider associated with your cluster, such as `ack-rrsa-<cluster_id>`, where `<cluster_id>` is your actual cluster ID. |
| Condition | `oidc:iss`: Keep the default value.<br>`oidc:aud`: Keep the default value.<br>`oidc:sub`: Manually add the following condition. Key: select `oidc:sub`; Operator: select `StringEquals`; Value: enter `system:serviceaccount:ack-csi-fuse:csi-fuse-ossfs`. In this value, `ack-csi-fuse` is the namespace where the ossfs client is located and cannot be customized; `csi-fuse-ossfs` is the service account name and can be changed as needed. For more information about how to modify the service account name, see FAQ about ossfs 2.0 volumes. |
| Role Name | Enter `demo-role-for-rrsa`. |
Step 2: Grant permissions to the demo-role-for-rrsa role
Create a custom policy to grant OSS access permissions to the RAM role. For more information, see Create custom policies.
Select the read-only policy or read-write policy based on your business requirements. Replace `mybucket` with the name of the bucket you created. Example policy sketches are shown after this list.
- Policy that provides read-only permissions on OSS
- Policy that provides read-write permissions on OSS
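The policy documents for these two options are collapsed above. For reference, a read-only policy in the standard RAM policy syntax typically looks like the following sketch; verify the action list against the linked policy documentation and replace `mybucket` with your bucket name:

```json
{
    "Version": "1",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "oss:GetObject",
                "oss:ListObjects"
            ],
            "Resource": [
                "acs:oss:*:*:mybucket",
                "acs:oss:*:*:mybucket/*"
            ]
        }
    ]
}
```

A read-write policy additionally includes write actions such as `oss:PutObject` and `oss:DeleteObject`, plus the multipart-related actions `oss:ListParts` and `oss:AbortMultipartUpload`.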
Optional. If the objects in the OSS bucket are encrypted by using a specified customer master key (CMK) in Key Management Service (KMS), you must also grant KMS access permissions to the RAM role. For more information, see Encryption.
Grant the required permissions to the demo-role-for-rrsa role. For more information, see Grant permissions to a RAM role.
Note: If you want to use an existing RAM role that has OSS access permissions, you can modify the trust policy of the RAM role. For more information, see Use an existing RAM role and grant the required permissions to the RAM role.
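For reference, the trust policy of a role used with RRSA typically has the following shape. This is a sketch only: the `<account_id>` and `<cluster_id>` placeholders, and the exact condition values, must match the OIDC identity provider associated with your cluster:

```json
{
    "Version": "1",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Principal": {
                "Federated": [
                    "acs:ram::<account_id>:oidc-provider/ack-rrsa-<cluster_id>"
                ]
            },
            "Condition": {
                "StringEquals": {
                    "oidc:aud": "sts.aliyuncs.com",
                    "oidc:sub": "system:serviceaccount:ack-csi-fuse:csi-fuse-ossfs"
                }
            }
        }
    ]
}
```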
Step 3: Create a StorageClass
Create a StorageClass to use as a template for dynamically provisioning PVs.
Create a file named `ossfs2-sc-rrsa.yaml` with the following content:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ossfs2-sc # The name of the StorageClass.
parameters:
  bucket: cnfs-oss-test # The name of the bucket.
  path: /subpath # The subdirectory to mount. If left empty, the root directory is mounted.
  url: oss-cn-hangzhou-internal.aliyuncs.com # The endpoint of the region where the OSS bucket is located.
  authType: rrsa
  roleName: demo-role-for-rrsa # The RAM role created earlier.
  fuseType: ossfs2
  volumeAs: sharepath
  otherOpts: "-o close_to_open=false"
provisioner: ossplugin.csi.alibabacloud.com # This value is fixed.
reclaimPolicy: Retain # Only Retain is supported: the PV and the data in the OSS bucket are not deleted when the PVC is deleted.
volumeBindingMode: Immediate # OSS volumes do not require zone-based node affinity, so you can use the default value Immediate.
```

The following table describes the parameters in the `parameters` field:

| Parameter | Required | Description |
| --- | --- | --- |
| `bucket` | Yes | The name of the OSS bucket to mount. |
| `path` | Yes | The base path within the bucket. When `volumeAs` is set to `sharepath`, each dynamically provisioned PV is assigned a unique subdirectory under this path, such as `/ack/<pv-name>`. |
| `url` | Yes | The access endpoint for the OSS bucket. Use an internal endpoint if your cluster nodes and the bucket are in the same region, or if a Virtual Private Cloud (VPC) connection is established; use a public endpoint if the mount node and the bucket are in different regions. Common formats:<br>Internal: `http://oss-{{regionName}}-internal.aliyuncs.com` or `https://oss-{{regionName}}-internal.aliyuncs.com`. The internal endpoint format `vpc100-oss-{{regionName}}.aliyuncs.com` is deprecated; switch to the new format as soon as possible.<br>Public: `http://oss-{{regionName}}.aliyuncs.com` or `https://oss-{{regionName}}.aliyuncs.com`. |
| `fuseType` | Yes | Specifies the client to use for the mount. Must be set to `ossfs2` to use the ossfs 2.0 client. |
| `authType` | Yes | Set to `rrsa` to declare that the RRSA authentication method is used. |
| `roleName` | Yes | The name of the RAM role you created or modified. To assign different permissions to different PVs, create a separate RAM role for each permission set, then associate each PV with the corresponding role through this parameter. |
| `volumeAs` | No | Defines how the PV is provisioned. `sharepath` indicates that each PV creates a separate subdirectory in the directory specified by `path`. |
| `otherOpts` | No | Additional mount options passed to the OSS volume, specified as space-separated flags in the format `-o *** -o ***`, for example, `-o close_to_open=false`. The `close_to_open` option controls metadata consistency: when set to `false` (the default), metadata is cached to improve performance for small-file reads; when set to `true`, the client fetches fresh metadata from OSS by sending a `GetObjectMeta` request every time a file is opened, ensuring real-time consistency at the cost of increased latency and API calls. For more optional parameters, see ossfs 2.0 mount options. |
Apply the StorageClass.

```bash
kubectl create -f ossfs2-sc-rrsa.yaml
```

Check the StorageClass status.

```bash
kubectl get sc ossfs2-sc
```

Expected output:

```
NAME        PROVISIONER                      RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
ossfs2-sc   ossplugin.csi.alibabacloud.com   Retain          Immediate           false                  10s
```
Step 4: Create a PVC
Create a PVC to request storage resources from the StorageClass. This operation triggers the automatic creation of the underlying PV.
Create a file named `ossfs2-pvc-dynamic.yaml` with the following content:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-ossfs2 # The name of the PVC.
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 20Gi
  storageClassName: ossfs2-sc # The StorageClass created earlier.
```

Create the PVC.

```bash
kubectl create -f ossfs2-pvc-dynamic.yaml
```

Verify that the PVC is `Bound`, indicating that it is bound to the automatically created PV.

```bash
kubectl get pvc pvc-ossfs2
```

Expected output:

```
NAME         STATUS   VOLUME                   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-ossfs2   Bound    d-bp17y03tpy2b8x******   20Gi       RWX            ossfs2-sc      25s
```
Step 5: Create an application and mount the volume
You can now mount the storage resources to which the PVC is bound in your application.
Create a file named `ossfs2-test.yaml` with the following content. The YAML template creates a StatefulSet that consists of one pod. The pod requests storage resources through a PVC named `pvc-ossfs2` and mounts the volume to the `/data` path.
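The manifest body is not included above. The following is a minimal sketch that matches the names used in the verification commands below (`app: ossfs2-test` label, pod `ossfs2-test-0`, mount path `/data`); the container name and `nginx` image are illustrative assumptions:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ossfs2-test
  namespace: default
spec:
  serviceName: ossfs2-test       # Headless service name; assumed for this sketch.
  replicas: 1
  selector:
    matchLabels:
      app: ossfs2-test
  template:
    metadata:
      labels:
        app: ossfs2-test
    spec:
      containers:
      - name: app
        image: nginx:stable      # Placeholder image; replace with your application image.
        volumeMounts:
        - name: oss-volume
          mountPath: /data       # The OSS volume is mounted at /data.
      volumes:
      - name: oss-volume
        persistentVolumeClaim:
          claimName: pvc-ossfs2  # The PVC created in the previous step.
```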
Deploy the application.

```bash
kubectl create -f ossfs2-test.yaml
```

Check the deployment status of the pod.

```bash
kubectl get pod -l app=ossfs2-test
```

Expected output:

```
NAME            READY   STATUS    RESTARTS   AGE
ossfs2-test-0   1/1     Running   0          2m
```

Verify that you can read from and write to the mount point.

```bash
# Write a test file to the mount target.
kubectl exec -it ossfs2-test-0 -- touch /data/test.txt
# View the content of the mount target.
kubectl exec -it ossfs2-test-0 -- ls /data
```

The output should show the `test.txt` file you created, indicating that the volume is mounted and you have write permissions.
Method 2: Authenticate using an AccessKey
ACK supports authenticating OSS volume mounts by storing a static AccessKey in a Kubernetes Secret. This approach is suitable for scenarios where a specific application requires long-term, fixed access permissions.
If the AccessKey referenced by the PV is revoked or its permissions are changed, any application using the volume will immediately lose access and encounter permission errors. To restore access, update the credentials in the Secret, then restart the application's pods to force a remount. This process causes a brief service interruption and should only be performed during a scheduled maintenance window.
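As a sketch of that rotation procedure, assuming the Secret (`oss-secret` in the `default` namespace) and StatefulSet (`ossfs2-test`) names used later in this topic:

```bash
# Regenerate the Secret with the new AccessKey pair (overwrites the existing values).
kubectl create secret generic oss-secret -n default \
  --from-literal='akId=NEW_ACCESS_KEY_ID' \
  --from-literal='akSecret=NEW_ACCESS_KEY_SECRET' \
  --dry-run=client -o yaml | kubectl apply -f -

# Restart the pods that mount the volume so they remount with the new credentials.
kubectl rollout restart statefulset ossfs2-test -n default
```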
To avoid the downtime associated with manual key rotation, we strongly recommend using the RRSA authentication method instead.
Prerequisites
You have an ACK cluster with CSI V1.33.1 or later installed. To upgrade the component, see Update csi-plugin and csi-provisioner.
You have an OSS bucket in the same Alibaba Cloud account as your cluster.
To mount an OSS bucket across accounts, we recommend using RRSA for authentication. For details, see FAQ about ossfs 2.0 volumes.
Step 1: Create a RAM user with OSS access permissions and obtain an AccessKey pair
Create a RAM user and grant permissions
Create a RAM user. You can skip this step if you have an existing RAM user. For more information about how to create a RAM user, see Create a RAM user.
Create a custom policy to grant OSS access permissions to the RAM user. For more information, see Create custom policies.
Select the read-only policy or read-write policy based on your business requirements. Replace `mybucket` with the name of the bucket you created. Example policy sketches are shown after this list.
- Policy that provides read-only permissions on OSS
- Policy that provides read-write permissions on OSS
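As in Method 1, the policy documents are collapsed above. A read-only sketch in the standard RAM policy syntax (replace `mybucket` with your bucket name and verify the action list against the linked policy documentation):

```json
{
    "Version": "1",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "oss:GetObject",
                "oss:ListObjects"
            ],
            "Resource": [
                "acs:oss:*:*:mybucket",
                "acs:oss:*:*:mybucket/*"
            ]
        }
    ]
}
```

A read-write policy additionally includes write actions such as `oss:PutObject` and `oss:DeleteObject`, plus the multipart-related actions `oss:ListParts` and `oss:AbortMultipartUpload`.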
Optional. If the objects in the OSS bucket are encrypted by using a specified customer master key (CMK) in Key Management Service (KMS), you need to grant KMS access permissions to the RAM user. For more information, see Encryption.
Grant OSS access permissions to the RAM user. For more information, see Grant permissions to a RAM user.
Create an AccessKey pair for the RAM user. For more information, see Create an AccessKey pair.
Create a Secret to store the AccessKey credentials
Run the following command to create a Secret for OSS authentication. Replace the `xxxxxx` placeholders with your actual AccessKey ID and AccessKey secret.

```bash
kubectl create -n default secret generic oss-secret --from-literal='akId=xxxxxx' --from-literal='akSecret=xxxxxx'
```

Step 2: Create a StorageClass
Create a file named `ossfs2-sc.yaml` with the following content:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ossfs2-sc
parameters:
  # Use the Secret created in the preparations.
  csi.storage.k8s.io/node-publish-secret-name: oss-secret
  csi.storage.k8s.io/node-publish-secret-namespace: default
  fuseType: ossfs2
  bucket: cnfs-oss-test # The name of the bucket.
  path: /subpath # The subdirectory to mount. If left empty, the root directory is mounted.
  url: oss-cn-hangzhou-internal.aliyuncs.com # The endpoint of the region where the OSS bucket is located.
  otherOpts: "-o close_to_open=false"
provisioner: ossplugin.csi.alibabacloud.com # This value is fixed.
reclaimPolicy: Retain # Only Retain is supported: the PV and the data in the OSS bucket are not deleted when the PVC is deleted.
volumeBindingMode: Immediate # OSS volumes do not require zone-based node affinity, so you can use the default value Immediate.
```

The following tables describe the parameters in the `parameters` field.

Secret configuration

| Parameter | Required | Description |
| --- | --- | --- |
| `csi.storage.k8s.io/node-publish-secret-name` | Yes | The name of the Secret that stores the AccessKey information. |
| `csi.storage.k8s.io/node-publish-secret-namespace` | Yes | The namespace where the Secret that stores the AccessKey information is located. |

Volume configuration

| Parameter | Required | Description |
| --- | --- | --- |
| `fuseType` | Yes | Specifies the client to use for the mount. Must be set to `ossfs2` to use the ossfs 2.0 client. |
| `bucket` | Yes | The name of the OSS bucket you want to mount. |
| `path` | No | The mount path of the OSS bucket, which represents the directory structure relative to the bucket root. |
| `url` | Yes | The endpoint of the OSS bucket you want to mount. You can retrieve the endpoint from the Overview page of the bucket in the OSS console. If the bucket is mounted to a node in the same region as the bucket, or the bucket can be reached from the node through a virtual private cloud (VPC), use the internal endpoint; if the bucket is mounted to a node in a different region, use the public endpoint. Format of internal endpoints: `http://oss-{{regionName}}-internal.aliyuncs.com` or `https://oss-{{regionName}}-internal.aliyuncs.com`. Format of public endpoints: `http://oss-{{regionName}}.aliyuncs.com` or `https://oss-{{regionName}}.aliyuncs.com`. Important: the `vpc100-oss-{{regionName}}.aliyuncs.com` format for internal endpoints is deprecated. |
| `otherOpts` | No | Custom mount options for the OSS volume, in the format `-o *** -o ***`, for example, `-o close_to_open=false`. `close_to_open` is disabled by default. If enabled, the client sends a `GetObjectMeta` request to OSS each time a file is opened to get the latest metadata, which keeps metadata up to date; however, in scenarios with many small-file reads, the frequent metadata queries can significantly increase access latency. For more optional parameters, see ossfs 2.0 mount options. |
Create the StorageClass.

```bash
kubectl create -f ossfs2-sc.yaml
```
Step 3: Create a PVC
Create a PVC to request storage resources from the StorageClass. This operation triggers the automatic creation of the underlying PV.
Create a file named `ossfs2-pvc-dynamic.yaml` with the following content:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-ossfs2 # The name of the PVC.
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 20Gi
  storageClassName: ossfs2-sc # The StorageClass created earlier.
```

Create the PVC.

```bash
kubectl create -f ossfs2-pvc-dynamic.yaml
```

Verify that the PVC is `Bound`, indicating that it is bound to the automatically created PV.

```bash
kubectl get pvc pvc-ossfs2
```

Expected output:

```
NAME         STATUS   VOLUME                   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-ossfs2   Bound    d-bp17y03tpy2b8x******   20Gi       RWX            ossfs2-sc      25s
```
Step 4: Create an application and mount the volume
You can now mount the storage resources to which the PVC is bound in your application.
Create a file named `ossfs2-test.yaml` with the following content. The YAML template creates a StatefulSet that consists of one pod. The pod requests storage resources through a PVC named `pvc-ossfs2` and mounts the volume to the `/data` path.
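The manifest body is not included above. The following is a minimal sketch that matches the names used in the verification commands below (`app: ossfs2-test` label, pod `ossfs2-test-0`, mount path `/data`); the container name and `nginx` image are illustrative assumptions:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ossfs2-test
  namespace: default
spec:
  serviceName: ossfs2-test       # Headless service name; assumed for this sketch.
  replicas: 1
  selector:
    matchLabels:
      app: ossfs2-test
  template:
    metadata:
      labels:
        app: ossfs2-test
    spec:
      containers:
      - name: app
        image: nginx:stable      # Placeholder image; replace with your application image.
        volumeMounts:
        - name: oss-volume
          mountPath: /data       # The OSS volume is mounted at /data.
      volumes:
      - name: oss-volume
        persistentVolumeClaim:
          claimName: pvc-ossfs2  # The PVC created in the previous step.
```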
pvc-ossfs2and mounts the volume to the/datapath.Deploy the application.
kubectl create -f ossfs2-test.yamlCheck the deployment status of the pod.
kubectl get pod -l app=ossfs2-testExpected output:
NAME READY STATUS RESTARTS AGE ossfs2-test-0 1/1 Running 0 2mVerify that you can read from and write to the mount point.
# Write a test file to the mount target. kubectl exec -it ossfs2-test-0 -- touch /data/test.txt # View the content of the mount target. kubectl exec -it ossfs2-test-0 -- ls /dataThe output should show the
test.txtfile you created, indicating that the volume is mounted and you have write permissions.
Apply in production
| Category | Best practices |
| --- | --- |
| Security and permissions | Prefer RRSA authentication for PV-level permission isolation. Grant each RAM role or user only the minimum OSS permissions it needs (read-only where possible), and enable versioning on the bucket to protect against accidental data loss. |
| Performance and cost | Use ossfs 2.0 for read-only and sequential-append workloads; use ossfs 1.0 for random or concurrent writes. Keep `close_to_open=false` (the default) to cache metadata for small-file reads, and configure lifecycle rules to clean up incomplete multipart uploads. |
| O&M management | Configure liveness probes that check mount-point accessibility so unhealthy mounts trigger a pod restart. Use internal endpoints when the cluster and bucket are in the same region, and restart application pods after rotating AccessKey credentials stored in Secrets. |