ossfs 1.0 supports dynamically provisioned volumes. You can use a StorageClass and a PersistentVolumeClaim (PVC) to automatically create a persistent volume (PV) and mount an OSS Bucket. This feature simplifies storage management by removing the need to manually configure PVs. It is ideal for multi-tenant environments and scenarios that require frequent, on-demand storage creation.
Prerequisites
Ensure your cluster and Container Storage Interface (CSI) components (csi-plugin and csi-provisioner) meet the version requirements:
Mount using RAM Roles for Service Accounts (RRSA) authentication: Your cluster must be version 1.26 or later, and your CSI version must be v1.30.4 or later.
If you used the RRSA feature in a version earlier than 1.30.4, you must add RAM role authorization configurations as described in [Product Change] CSI ossfs version upgrade and mount process optimization.
Use AccessKey: For stable mounts, we recommend CSI v1.18.8.45 or later.
To upgrade your cluster, see Manually upgrade a cluster. To upgrade components, see Upgrade CSI components.
Starting from CSI v1.30.4-*, mounting OSS statically provisioned volumes depends on the csi-provisioner component.
Step 1: Choose an authentication method and prepare credentials
To access OSS Bucket resources securely, first configure an authentication mechanism.
RRSA authentication: Grants Pods temporary, automatically rotating RAM roles for fine-grained, application-level permission isolation. This method is more secure.
AccessKey authentication: Stores static, long-term keys in a Secret. This method is simpler to configure but less secure.
In clusters of version 1.26 and later, we recommend RRSA authentication to avoid service interruptions caused by ossfs remounts when an AccessKey is rotated.

This guide assumes that the cluster and the OSS Bucket belong to the same Alibaba Cloud account. To mount an OSS Bucket across accounts, we recommend using RRSA authentication.
Use RRSA
1. Enable RRSA in your cluster
On the ACK Clusters page, find the cluster you want and click its name. In the left-side pane, click Cluster Information.
On the Basic Information tab, find the Security and Auditing section. To the right of RRSA OIDC, click Enable. Follow the on-screen prompts to enable RRSA during off-peak hours.
When the cluster status changes from Updating to Running, RRSA has been successfully enabled.
Important: After you enable RRSA, the maximum validity period for new ServiceAccount tokens created in the cluster is limited to 12 hours.
2. Create and authorize a RAM role
Create a RAM role that your Pods can assume to access the OSS volume.
Use AccessKey
Create a RAM user with OSS access permissions and obtain its AccessKey. This grants the user permissions to perform operations on the OSS Bucket.
Create a RAM user (skip this step if you already have one).
Go to the Create User page in the RAM console. Follow the on-screen instructions to create a RAM user. You must set a logon name and password.
Create an access policy.
This example follows the principle of least privilege. Create a custom policy to grant permissions to access the target OSS Bucket (read-only or read/write permissions).
Go to the Create Policy page in the RAM console. Switch to the JSON tab and enter the policy script.
OSS read-only policy

Replace <myBucketName> with the actual bucket name.

```json
{
  "Statement": [
    {
      "Action": [
        "oss:Get*",
        "oss:List*"
      ],
      "Effect": "Allow",
      "Resource": [
        "acs:oss:*:*:<myBucketName>",
        "acs:oss:*:*:<myBucketName>/*"
      ]
    }
  ],
  "Version": "1"
}
```

OSS read/write policy

Replace <myBucketName> with the actual bucket name.

```json
{
  "Statement": [
    {
      "Action": "oss:*",
      "Effect": "Allow",
      "Resource": [
        "acs:oss:*:*:<myBucketName>",
        "acs:oss:*:*:<myBucketName>/*"
      ]
    }
  ],
  "Version": "1"
}
```

When you create a PV in the console, you also need the oss:ListBuckets permission.

```json
{
  "Effect": "Allow",
  "Action": "oss:ListBuckets",
  "Resource": "*"
}
```

(Optional) If you use a customer master key (CMK) ID managed by KMS to encrypt OSS objects, you must also configure KMS permissions for the RAM user. For more information, see Use a specified CMK ID managed by KMS for encryption.
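If you prefer to script policy creation, the read-only policy JSON above can be generated for a given bucket with a short shell snippet. This is a sketch: the bucket name my-bucket and the output file name are placeholders.

```shell
# Placeholder bucket name; replace with your actual bucket.
BUCKET=my-bucket

# Write the OSS read-only policy for this bucket.
cat > oss-readonly-policy.json <<EOF
{
  "Statement": [
    {
      "Action": ["oss:Get*", "oss:List*"],
      "Effect": "Allow",
      "Resource": [
        "acs:oss:*:*:${BUCKET}",
        "acs:oss:*:*:${BUCKET}/*"
      ]
    }
  ],
  "Version": "1"
}
EOF

# Sanity-check that the generated policy is valid JSON before pasting it
# into the RAM console.
python3 -m json.tool oss-readonly-policy.json > /dev/null && echo "policy OK"
```

You can then paste the contents of oss-readonly-policy.json into the JSON tab of the Create Policy page.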
Grant the policy to the RAM user.
Go to the Users page in the RAM console. In the Actions column for the target user, click Add Permissions.
In the Policy section, search for and select the policy that you created in the previous step, and then add it to the permissions.
Create an AccessKey for the RAM user. You will store it as a secret for the PV to use.
Go to the Users page in the RAM console. Click the target user. Then, in the AccessKey section, click Create AccessKey.
In the dialog box that appears, follow the on-screen instructions to create an AccessKey. You must obtain and securely store the AccessKey ID and AccessKey secret.
Step 2: Create a StorageClass
Create a StorageClass to define a template for creating persistent volumes.
RRSA method
Create a file named sc-oss.yaml.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: sc-oss
parameters:
  # Replace with your actual bucket name.
  bucket: bucket
  # The root directory or a specified subdirectory of the bucket to mount.
  path: /
  # The endpoint of the region where the bucket is located.
  url: "http://oss-cn-hangzhou-internal.aliyuncs.com"
  # Use the RRSA method for authentication.
  authType: rrsa
  # The RAM role that you created or modified.
  roleName: demo-role-for-rrsa
  # Custom parameters.
  otherOpts: "-o umask=022 -o max_stat_cache_size=100000 -o allow_other"
  # The access mode of the volume.
  volumeAs: sharepath
# This value is fixed when you use the Alibaba Cloud OSS CSI plug-in.
provisioner: ossplugin.csi.alibabacloud.com
# The reclaim policy for the dynamically provisioned PV.
reclaimPolicy: Retain
# The binding mode.
volumeBindingMode: Immediate
```

Parameter descriptions:

bucket
The OSS Bucket to be mounted.

path
Requires CSI component version v1.14.8.32-c77e277b-aliyun or later. Specifies the mount path relative to the bucket's root directory. Defaults to /, which mounts the entire bucket. If the ossfs version is earlier than 1.91, the specified path must already exist in the OSS Bucket. For details, see New features in ossfs 1.91 and later.

url
The access endpoint for the OSS Bucket. Use an internal endpoint if your cluster nodes and the bucket are in the same region, or if a Virtual Private Cloud (VPC) connection is established. Use a public endpoint if the mount node and the bucket are in different regions. Common endpoint formats:
Internal: http://oss-{{regionName}}-internal.aliyuncs.com or https://oss-{{regionName}}-internal.aliyuncs.com. The internal endpoint format vpc100-oss-{{regionName}}.aliyuncs.com is deprecated. Switch to the new format as soon as possible.
Public: http://oss-{{regionName}}.aliyuncs.com or https://oss-{{regionName}}.aliyuncs.com.

authType
Set to rrsa to use RRSA authentication.

roleName
The RAM role that you created or modified. To configure different permissions for different StorageClasses, create different RAM roles and specify different roleName values in the StorageClasses.

otherOpts
Custom parameters for the OSS volume in the format -o *** -o ***, such as -o umask=022 -o max_stat_cache_size=100000 -o allow_other.

provisioner
The driver type. This value is fixed at ossplugin.csi.alibabacloud.com when you use the Alibaba Cloud OSS CSI plug-in.

reclaimPolicy
The reclaim policy for the dynamically provisioned PV. OSS persistent volumes currently support only Retain, which means that when you delete the PVC, the PV and the data in the OSS Bucket are not deleted.

volumeBindingMode
The binding mode. OSS persistent volumes do not require zone-based node affinity, so you can use the default value Immediate.

volumeAs
The access mode of the volume. The default value is sharepath. subpath takes effect only when the CSI component version is 1.31.3 or later; otherwise, sharepath is used. Valid values:
sharepath: shared mode. All volumes share the mount path. Data is stored in <bucket>:<path>/.
subpath: subdirectory mode. A subdirectory is automatically created under the mount path when a volume is created. Data is stored in <bucket>:<path>/<pv-name>/.

sigVersion
The signature version for requests to the OSS server. "v1" (default): uses OSS Signature Version 1. "v4" (recommended): uses OSS Signature Version 4.
Create the StorageClass.

```shell
kubectl apply -f sc-oss.yaml
```
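The url value follows a fixed per-region pattern, so you can derive it from a region ID. A quick sketch, using cn-hangzhou as the example region:

```shell
# Example region ID; replace with the region where your bucket resides.
REGION=cn-hangzhou

# Internal endpoint: use when cluster nodes and the bucket are in the
# same region or connected through a VPC.
INTERNAL_ENDPOINT="http://oss-${REGION}-internal.aliyuncs.com"

# Public endpoint: use when the mount node and the bucket are in
# different regions.
PUBLIC_ENDPOINT="http://oss-${REGION}.aliyuncs.com"

echo "$INTERNAL_ENDPOINT"
echo "$PUBLIC_ENDPOINT"
```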
AccessKey method
kubectl
1. Create a StorageClass
Create a secret. The namespace of the secret must be the same as the namespace of your application.

Replace <yourAccessKey ID> and <yourAccessKey Secret> with the AccessKey ID and AccessKey secret that you obtained.

```shell
kubectl create secret generic oss-secret --from-literal='akId=<yourAccessKey ID>' --from-literal='akSecret=<yourAccessKey Secret>'
```

Create the StorageClass.
Create a file named sc-oss.yaml.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: sc-oss
parameters:
  # Replace with your actual bucket name.
  bucket: bucket
  # The root directory or a specified subdirectory of the bucket to mount.
  path: /
  # The endpoint of the region where the bucket is located.
  url: "http://oss-cn-hangzhou-internal.aliyuncs.com"
  # The name of the secret that stores the AccessKey information.
  csi.storage.k8s.io/node-publish-secret-name: oss-secret
  # The namespace where the secret that stores the AccessKey information resides.
  csi.storage.k8s.io/node-publish-secret-namespace: default
  # Custom parameters.
  otherOpts: "-o umask=022 -o max_stat_cache_size=100000 -o allow_other"
# This value is fixed when you use the Alibaba Cloud OSS CSI plug-in.
provisioner: ossplugin.csi.alibabacloud.com
# The reclaim policy for the dynamically provisioned PV.
reclaimPolicy: Retain
# The binding mode.
volumeBindingMode: Immediate
```

Parameter descriptions:

name
The name of the StorageClass.

bucket
The OSS Bucket to be mounted.

path
Requires CSI component version v1.14.8.32-c77e277b-aliyun or later. Specifies the mount path relative to the bucket's root directory. Defaults to /, which mounts the entire bucket. If the ossfs version is earlier than 1.91, the specified path must already exist in the OSS Bucket. For details, see New features in ossfs 1.91 and later.

url
The access endpoint for the OSS Bucket. Use an internal endpoint if your cluster nodes and the bucket are in the same region, or if a Virtual Private Cloud (VPC) connection is established. Use a public endpoint if the mount node and the bucket are in different regions. Common endpoint formats:
Internal: http://oss-{{regionName}}-internal.aliyuncs.com or https://oss-{{regionName}}-internal.aliyuncs.com. The internal endpoint format vpc100-oss-{{regionName}}.aliyuncs.com is deprecated. Switch to the new format as soon as possible.
Public: http://oss-{{regionName}}.aliyuncs.com or https://oss-{{regionName}}.aliyuncs.com.

csi.storage.k8s.io/node-publish-secret-name
The name of the secret that stores the AccessKey information.

csi.storage.k8s.io/node-publish-secret-namespace
The namespace where the secret that stores the AccessKey information resides.

otherOpts
Custom parameters for the OSS volume in the format -o *** -o ***, such as -o umask=022 -o max_stat_cache_size=100000 -o allow_other.

provisioner
The driver type. This value is fixed at ossplugin.csi.alibabacloud.com when you use the Alibaba Cloud OSS CSI plug-in.

reclaimPolicy
The reclaim policy for the dynamically provisioned PV. OSS persistent volumes currently support only Retain, which means that when you delete the PVC, the PV and the data in the OSS Bucket are not deleted.

volumeBindingMode
The binding mode. OSS persistent volumes do not require zone-based node affinity, so you can use the default value Immediate.

volumeAs
The access mode of the volume. The default value is sharepath. subpath takes effect only when the CSI component version is 1.31.3 or later; otherwise, sharepath is used. Valid values:
sharepath: shared mode. All volumes share the mount path. Data is stored in <bucket>:<path>/.
subpath: subdirectory mode. A subdirectory is automatically created under the mount path when a volume is created. Data is stored in <bucket>:<path>/<pv-name>/.

sigVersion
The signature version for requests to the OSS server. "v1" (default): uses OSS Signature Version 1. "v4" (recommended): uses OSS Signature Version 4.
Create the StorageClass.

```shell
kubectl apply -f sc-oss.yaml
```
Console
Store the AccessKey that you obtained in Step 1 as a secret to be used by the PV.
On the Clusters page, find the cluster you want and click its name. In the left-side pane, go to the Secrets page.
Click Create from YAML and follow the on-screen instructions to create a secret.
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: oss-secret
  # Must be the same as the namespace where the application resides.
  namespace: default
stringData:
  # Replace with the AccessKey ID you obtained.
  akId: <your AccessKey ID>
  # Replace with the AccessKey secret you obtained.
  akSecret: <your AccessKey Secret>
```
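The stringData field above accepts plain-text values; Kubernetes stores them base64-encoded under data. A local sketch of that encoding, using a placeholder credential:

```shell
# Placeholder AccessKey ID; never commit real credentials.
AK_ID='LTAI5t-placeholder'

# Kubernetes base64-encodes Secret values before storing them.
ENCODED=$(printf '%s' "$AK_ID" | base64)
echo "$ENCODED"

# Decoding returns the original value; this is what the CSI plug-in
# reads from the secret at mount time.
DECODED=$(printf '%s' "$ENCODED" | base64 -d)
echo "$DECODED"
```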
On the Clusters page, find the cluster you want and click its name. In the left-side pane, go to the StorageClasses page.
On the StorageClasses page, click Create. Set PV Type to OSS and configure the StorageClass parameters as prompted.
Configuration item descriptions:

Access Certificate
Configure the secret required to access OSS. This is the AccessKey ID and AccessKey secret that you obtained.

Bucket ID
The OSS Bucket to use. Only buckets that can be accessed with the configured AccessKey are displayed.

OSS Path
Requires CSI component version v1.14.8.32-c77e277b-aliyun or later. Specifies the mount path relative to the bucket's root directory. Defaults to /, which mounts the entire bucket. If the ossfs version is earlier than 1.91, the specified path must already exist in the OSS Bucket. For details, see New features in ossfs 1.91 and later.

Volume Mode
The access mode of the volume. The default mode is Shared Directory. Subdirectory mode takes effect only when the CSI component version is 1.31.3 or later; otherwise, Shared Directory mode is used. Valid values:
Shared Directory (sharepath): All volumes share the mount path. Data is stored in <bucket>:<path>/.
Subdirectory (subpath): A subdirectory is automatically created under the mount path when a volume is created. Data is stored in <bucket>:<path>/<pv-name>/.

Endpoint
The access endpoint for the OSS Bucket. Use an internal endpoint if your cluster nodes and the bucket are in the same region, or if a Virtual Private Cloud (VPC) connection is established. Use a public endpoint if the mount node and the bucket are in different regions. Common endpoint formats:
Internal: http://oss-{{regionName}}-internal.aliyuncs.com or https://oss-{{regionName}}-internal.aliyuncs.com. The internal endpoint format vpc100-oss-{{regionName}}.aliyuncs.com is deprecated. Switch to the new format as soon as possible.
Public: http://oss-{{regionName}}.aliyuncs.com or https://oss-{{regionName}}.aliyuncs.com.
When you access OSS over an internal network, the HTTP protocol is used by default. To use HTTPS, use the kubectl method instead.

Reclaim Policy
The reclaim policy for the dynamically provisioned PV. OSS persistent volumes currently support only Retain, which means that when you delete the PVC, the PV and the data in the OSS Bucket are not deleted.

Optional Parameters
Custom parameters for the OSS volume in the format -o *** -o ***, such as -o umask=022 -o max_stat_cache_size=100000 -o allow_other.
Step 3: Create a PVC
Create a PVC to dynamically request storage resources. The CSI plug-in automatically creates a PV based on the StorageClass.
kubectl
Create a file named pvc-oss.yaml.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  # The name of the PVC.
  name: pvc-oss
spec:
  # Configure the access mode. ReadOnlyMany indicates that ossfs mounts the OSS Bucket in read-only mode.
  accessModes:
    - ReadOnlyMany
  volumeMode: Filesystem
  resources:
    requests:
      # Declare the storage capacity. This value cannot be greater than the total volume size.
      storage: 20Gi
  # Declare the referenced StorageClass.
  storageClassName: sc-oss
```

Parameter descriptions:

accessModes
Configure the access mode. ReadOnlyMany and ReadWriteMany are supported. If you select ReadOnlyMany, ossfs mounts the OSS Bucket in read-only mode.

storage
The requested storage capacity for the volume. This value does not limit the actual capacity of the OSS persistent volume.

storageClassName
The referenced StorageClass.
Create the PVC.

```shell
kubectl apply -f pvc-oss.yaml
```

Confirm that the PVC is created and in the Bound state.

```shell
kubectl get pvc pvc-oss
```

Expected output:

```
NAME      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
pvc-oss   Bound    oss-251d111d-3b0b-4879-81a0-eb5a19xxxxxx   20Gi       ROX            sc-oss         <unset>                 4d20h
```
Console
On the Clusters page, find the cluster you want and click its name. In the left-side pane, go to the Persistent Volume Claims page.
On the Persistent Volume Claims page, click Create. Set PVC Type to OSS and configure the PVC parameters as prompted.
Parameter
Description
Allocation Mode
Select Use StorageClass.
Existing StorageClass
Click Select and select the StorageClass that you created.
Capacity
Declare the requested storage capacity for the volume. This value does not limit the actual capacity of the OSS persistent volume.
Access Mode
Configure the access mode. ReadOnlyMany and ReadWriteMany are supported. If you select ReadOnlyMany, ossfs mounts the OSS Bucket in read-only mode.
Step 4: Create an application and mount the volume
Reference the PVC in your application to complete the mount.
kubectl
Create a file named oss-static.yaml.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: oss-static
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: anolis-registry.cn-zhangjiakou.cr.aliyuncs.com/openanolis/nginx:1.14.1-8.6
          ports:
            - containerPort: 80
          volumeMounts:
            # The mount path in the container.
            - name: pvc-oss
              mountPath: "/data"
          # Configure a health check.
          livenessProbe:
            exec:
              command:
                - ls
                - /data
            initialDelaySeconds: 30
            periodSeconds: 30
      volumes:
        - name: pvc-oss
          persistentVolumeClaim:
            # Reference the PVC you created.
            claimName: pvc-oss
```

Create the application.

```shell
kubectl create -f oss-static.yaml
```

Verify the mount result.

Confirm that the Pods are in the Running state.

```shell
kubectl get pod -l app=nginx
```

Enter a Pod and inspect the mount point.

```shell
kubectl exec -it <pod-name> -- ls /data
```

The output should show the data from the OSS mount path.
Console
On the Clusters page, find the cluster you want and click its name. In the left-side pane, go to the Deployments page.
On the Deployments page, click Create from Image.
Configure the application parameters as prompted.
The key parameters are described below. You can keep the default values for other parameters. For details, see Create a stateless workload (Deployment).
Configuration step
Parameter
Description
Basic Information
Replicas
The number of replicas for the Deployment.
Container
Image Name
The address of the image used to deploy the application, such as anolis-registry.cn-zhangjiakou.cr.aliyuncs.com/openanolis/nginx:1.14.1-8.6.

Required Resources
The required vCPU and memory resources.
Volume
Click Add PVC and configure the parameters.
Mount Source: Select the PVC you created earlier.
Container Path: Enter the path inside the container where the OSS volume should be mounted, such as /data.
Advanced
Pod Labels
For example, a label with the name app and value nginx.

Check the application deployment status.
On the Deployments page, click the application name. On the Pods tab, confirm that the pods are running normally (Status is Running).
Step 5: Verify shared and persistent storage
Verify shared storage
Create a file in one Pod and then view it in another to verify the shared storage feature.
View the Pod information and get the Pod names from the output.
```shell
kubectl get pod -l app=nginx
```

Create a file named tmpfile in one of the Pods. For a Pod named oss-static-66fbb85b67-d****:

ReadWriteMany: Create a tmpfile file in the /data path.

```shell
kubectl exec oss-static-66fbb85b67-d**** -- touch /data/tmpfile
```

ReadOnlyMany: Upload tmpfile to the corresponding path in the OSS Bucket using the OSS console or by uploading the file with cp.

View the file from the mount path of another Pod. For a Pod named oss-static-66fbb85b67-l**** with a mount path of /data:

```shell
kubectl exec oss-static-66fbb85b67-l**** -- ls /data | grep tmpfile
```

Expected output:

```
tmpfile
```

The output tmpfile confirms that the Pods share data. If you do not see the expected output, confirm that your CSI component version is v1.20.7 or later.
Verify persistent storage
Delete and recreate a Pod, then check if the file still exists in the new Pod to verify data persistence.
Delete an application Pod to trigger a rebuild.
```shell
kubectl delete pod oss-static-66fbb85b67-d****
```

Check the Pods and wait for the new Pod to start and enter the Running state.

```shell
kubectl get pod -l app=nginx
```

Check for the file in the /data path. For a new Pod named oss-static-66fbb85b67-z**** with a mount path of /data:

```shell
kubectl exec oss-static-66fbb85b67-z**** -- ls /data | grep tmpfile
```

Expected output:

```
tmpfile
```

The output tmpfile confirms that the file still exists, indicating that the data is persisted.
Important considerations
Data integrity risks
Concurrent write consistency risk: To improve write stability, we recommend upgrading the CSI components to v1.28 or later. However, for single-file concurrent write scenarios, OSS's "overwrite upload" feature can still lead to data being overwritten. You must ensure data consistency at the application layer.
Data synchronization and accidental deletion risk: When a volume is mounted, any file deletions or modifications in the mount path on the application Pod or host node synchronize directly with the source files in the OSS Bucket. To prevent accidental data loss, we recommend enabling Versioning for your OSS Bucket.
Application stability risks
Out of Memory (OOM) risk: When ossfs performs a readdir operation (such as the ls command in a shell script) on a large number of files for the first time (for example, more than 100,000, depending on node memory), it may load all metadata at once and consume a large amount of memory. This can trigger an OOM error that kills the process and makes the mount point unavailable. We recommend mounting a subdirectory of the OSS Bucket or optimizing the directory structure to mitigate this risk.
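One way to apply the subdirectory recommendation is through the path parameter already shown in the StorageClass. A config sketch, where /app-data is a placeholder subdirectory:

```yaml
parameters:
  bucket: bucket
  # Mount only this subdirectory instead of the bucket root, so ossfs
  # loads metadata for far fewer objects on the first readdir.
  path: /app-data
```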
Increased mount time: Configuring securityContext.fsGroup in your application causes kubelet to recursively change file ownership and permissions (chmod/chown) when mounting the volume. With a large number of files, this significantly increases mount time and can cause severe Pod startup delays. If you need to configure this parameter and want to reduce mount time, see Increased mount time for OSS volumes.
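If you must set fsGroup, upstream Kubernetes also provides fsGroupChangePolicy, which skips the recursive ownership change when the volume root already matches. This is a generic Kubernetes sketch, not ACK-specific guidance; verify it against your cluster version:

```yaml
spec:
  securityContext:
    fsGroup: 1000
    # OnRootMismatch changes ownership and permissions only when the
    # volume root does not already match the expected fsGroup.
    fsGroupChangePolicy: "OnRootMismatch"
```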
Key invalidation risk (AccessKey authentication): If the AccessKey becomes invalid or its permissions change, the application immediately loses access.
To restore access, you must update the credentials in the Secret and restart the application Pod to force a remount, which will cause a service interruption. Perform this operation during a maintenance window. For details, see Solutions.
Cost risks
Part costs: ossfs uploads files larger than 10 MB in parts. If an upload is unexpectedly interrupted (for example, due to an application restart), you must delete the leftover parts manually or with lifecycle rules. This prevents incomplete parts from occupying storage space and incurring costs.
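To automate part cleanup with lifecycle rules, an OSS bucket lifecycle configuration can abort incomplete multipart uploads after a set age. A sketch of the rule shape (the 7-day threshold is an arbitrary example; confirm the exact schema in the OSS lifecycle documentation):

```xml
<LifecycleConfiguration>
  <Rule>
    <ID>abort-stale-parts</ID>
    <Prefix></Prefix>
    <Status>Enabled</Status>
    <!-- Abort multipart uploads whose parts are older than 7 days. -->
    <AbortMultipartUpload>
      <Days>7</Days>
    </AbortMultipartUpload>
  </Rule>
</LifecycleConfiguration>
```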
Related documentation
You can manage OSS volumes through Container Network File System (CNFS) to improve performance and QoS control. For details, see Manage the lifecycle of OSS volumes.
To protect sensitive data at rest in OSS, we recommend enabling Server-Side Encryption. For details, see Encrypt ossfs 1.0 volumes.
For frequently asked questions about ossfs and OSS, see ossfs 1.0 (default) and ossfs 1.0 volume FAQ.
Enable container storage monitoring and configure alerts to promptly detect volume anomalies or performance bottlenecks.
ossfs 1.0 provides more reliable data consistency for random and concurrent write scenarios than ossfs 2.0. However, ossfs 2.0 offers better performance for sequential read and write scenarios.