With ossfs 1.0, you can mount an existing Object Storage Service (OSS) Bucket as persistent storage by creating a statically provisioned volume. This method suits general-purpose workloads with concurrent reads, infrequent random writes, or a need to modify file permissions, such as mounting configuration files, images, or video resources.
Prerequisites
Ensure your cluster and Container Storage Interface (CSI) components (csi-plugin and csi-provisioner) meet the version requirements:
Mount using RAM Roles for Service Accounts (RRSA) authentication: Your cluster must be version 1.26 or later, and your CSI version must be v1.30.4 or later.
If you used the RRSA feature in a version earlier than 1.30.4, you must add RAM Role authorization configurations as described in [Product Change] CSI ossfs version upgrade and mount process optimization.
Use AccessKey: For stable mounts, we recommend CSI v1.18.8.45 or later.
To upgrade your cluster, see Manually upgrade a cluster. To upgrade components, see Upgrade CSI components.
Starting from CSI v1.30.4-*, mounting OSS statically provisioned volumes depends on the csi-provisioner component.
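If you are unsure which CSI versions are installed, you can read them from the component image tags. The following is a minimal sketch, assuming the default ACK resource names (a `csi-plugin` DaemonSet and a `csi-provisioner` Deployment in the `kube-system` namespace):

```shell
# Print the image tags of the CSI components; the version is part of each tag.
kubectl -n kube-system get daemonset csi-plugin \
  -o jsonpath='{.spec.template.spec.containers[*].image}{"\n"}'
kubectl -n kube-system get deployment csi-provisioner \
  -o jsonpath='{.spec.template.spec.containers[*].image}{"\n"}'
```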
Step 1: Choose an authentication method and prepare credentials
To access OSS Bucket resources securely, first configure an authentication mechanism.
RRSA authentication: Grants Pods temporary, automatically rotating RAM roles for fine-grained, application-level permission isolation. This method is more secure.
AccessKey authentication: Stores static, long-term keys in a Secret. This method is simpler to configure but less secure.
In clusters version 1.26 and later, we recommend using RRSA authentication to avoid service interruptions caused by ossfs remounts when an AccessKey is rotated.
This guide assumes the cluster and the OSS Bucket are under the same Alibaba Cloud account. To mount an OSS Bucket across accounts, we recommend using RRSA authentication.
Use RRSA
1. Enable RRSA in your cluster
Log on to the ACK console. In the navigation pane on the left, click Clusters.
On the Clusters page, find the cluster you want and click its name. In the left-side pane, click Cluster Information.
On the Basic Information tab, find the Security and Auditing section. To the right of RRSA OIDC, click Enable. Follow the on-screen prompts to enable RRSA during off-peak hours.
When the cluster status changes from Updating to Running, RRSA is enabled.
Important: After you enable RRSA, the maximum validity period of new ServiceAccount tokens created in the cluster is limited to 12 hours.
2. Create and authorize a RAM role
Create a RAM role that your Pods can assume to access the OSS volume.
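For reference, an RRSA role is a normal RAM role whose trust policy federates the cluster's OIDC provider, so that Pods using a bound ServiceAccount can assume it. The trust policy below is a minimal sketch only; `<account-id>` and `<cluster-id>` are placeholders, and the linked authorization guide is the authoritative source for the exact format your cluster requires. After the role exists, attach an OSS access policy (such as the read-only or read/write policies shown in the Use AccessKey section) to grant it bucket permissions.

```json
{
  "Version": "1",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Effect": "Allow",
      "Principal": {
        "Federated": [
          "acs:ram::<account-id>:oidc-provider/ack-rrsa-<cluster-id>"
        ]
      },
      "Condition": {
        "StringEquals": {
          "oidc:aud": "sts.aliyuncs.com"
        }
      }
    }
  ]
}
```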
Use AccessKey
Create a RAM user with OSS access permissions and obtain its AccessKey. This grants the user permissions to perform operations on the OSS Bucket.
Create a RAM user (skip this step if you already have one).
Go to the Create User page in the RAM console. Follow the on-screen instructions to create a RAM user. You must set a logon name and password.
Create an access policy.
This example follows the principle of least privilege. Create a custom policy to grant permissions to access the target OSS Bucket (read-only or read/write permissions).
Go to the Create Policy page in the RAM console. Switch to the Script Editor tab and enter the policy script.
OSS read-only policy

Replace `<myBucketName>` with the actual bucket name.

```json
{
  "Statement": [
    {
      "Action": [
        "oss:Get*",
        "oss:List*"
      ],
      "Effect": "Allow",
      "Resource": [
        "acs:oss:*:*:<myBucketName>",
        "acs:oss:*:*:<myBucketName>/*"
      ]
    }
  ],
  "Version": "1"
}
```

OSS read/write policy

Replace `<myBucketName>` with the actual bucket name.

```json
{
  "Statement": [
    {
      "Action": "oss:*",
      "Effect": "Allow",
      "Resource": [
        "acs:oss:*:*:<myBucketName>",
        "acs:oss:*:*:<myBucketName>/*"
      ]
    }
  ],
  "Version": "1"
}
```

When you create a PV in the console, you also need the `oss:ListBuckets` permission:

```json
{
  "Effect": "Allow",
  "Action": "oss:ListBuckets",
  "Resource": "*"
}
```

(Optional) If you use a customer master key (CMK) ID managed by KMS to encrypt OSS objects, you must also configure KMS permissions for the RAM user. For more information, see Use a specified CMK ID managed by KMS for encryption.
Grant the policy to the RAM user.
Go to the Users page in the RAM console. In the Actions column for the target user, click Add Permissions.
In the Access Policy section, search for and select the policy that you created in the previous step, and grant it to the user.
Create an AccessKey for the RAM user. You will store it as a secret for the PV to use.
Go to the Users page in the RAM console. Click the target user. Then, in the AccessKey section, click Create AccessKey.
In the dialog box that appears, follow the on-screen instructions to create an AccessKey. You must obtain and securely store the AccessKey ID and AccessKey secret.
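If you prefer the command line, the same operation is available through the RAM API. A minimal sketch using the Alibaba Cloud CLI, assuming it is installed and configured with credentials that can manage the RAM user; `<ram-user-name>` is a placeholder:

```shell
# Create an AccessKey pair for the RAM user. Record the returned AccessKeyId
# and AccessKeySecret immediately; the secret cannot be retrieved later.
aliyun ram CreateAccessKey --UserName <ram-user-name>
```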
Step 2: Create a PV
Create a Persistent Volume (PV) to register the existing OSS Bucket in your cluster.
RRSA method
Create a file named `pv-oss-rrsa.yaml`.

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  # PV name
  name: pv-oss
  # PV labels
  labels:
    alicloud-pvname: pv-oss
spec:
  capacity:
    # Define the volume capacity
    storage: 10Gi
  # Access mode
  accessModes:
    - ReadOnlyMany
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: ossplugin.csi.alibabacloud.com
    # Must be the same as the PV name (metadata.name)
    volumeHandle: pv-oss
    volumeAttributes:
      # Replace with the actual bucket name
      bucket: "your-bucket-name"
      # Mount the root directory or a specified subdirectory of the bucket
      path: /
      # The endpoint of the region where the bucket is located
      url: "http://oss-cn-hangzhou-internal.aliyuncs.com"
      otherOpts: "-o umask=022 -o max_stat_cache_size=100000 -o allow_other"
      authType: "rrsa"
      # The RAM role that you created or modified
      roleName: "demo-role-for-rrsa"
      # OSS request signature version
      sigVersion: "v4"
```

| Parameter | Description |
| --- | --- |
| `storage` | Defines the capacity of the OSS volume. This value is used only to match the PV with a PVC. |
| `accessModes` | Configures the access mode. Supports `ReadOnlyMany` and `ReadWriteMany`. If you select `ReadOnlyMany`, ossfs mounts the OSS Bucket in read-only mode. |
| `persistentVolumeReclaimPolicy` | The PV reclaim policy. OSS volumes currently support only `Retain`, meaning the PV and the data in the OSS Bucket are retained after the PVC is deleted. |
| `driver` | Defines the driver type. Must be `ossplugin.csi.alibabacloud.com` when using the Alibaba Cloud OSS CSI plug-in. |
| `volumeHandle` | Must be the same as the PV name (`metadata.name`). |
| `bucket` | The OSS Bucket to be mounted. |
| `path` | Requires CSI component v1.14.8.32-c77e277b-aliyun or later. Specifies the mount path relative to the bucket's root directory. Defaults to `/`, which mounts the entire bucket. If the ossfs version is earlier than 1.91, the specified `path` must already exist in the OSS Bucket. For details, see New features in ossfs 1.91 and later. |
| `url` | The access endpoint of the OSS Bucket. Use an internal endpoint if your cluster nodes and the bucket are in the same region, or if a Virtual Private Cloud (VPC) connection is established; use a public endpoint if the mount node and the bucket are in different regions. Common formats — internal: `http://oss-{{regionName}}-internal.aliyuncs.com` or `https://oss-{{regionName}}-internal.aliyuncs.com` (the internal format `vpc100-oss-{{regionName}}.aliyuncs.com` is deprecated; switch to the new format as soon as possible); public: `http://oss-{{regionName}}.aliyuncs.com` or `https://oss-{{regionName}}.aliyuncs.com`. |
| `otherOpts` | Custom parameters for the OSS volume, in the format `-o *** -o ***`, such as `-o umask=022 -o max_stat_cache_size=100000 -o allow_other`. |
| `authType` | Set to `rrsa` to use RRSA authentication. |
| `roleName` | The RAM role that you created or modified. To configure different permissions for different PVs, create different RAM roles and specify different `roleName` values in the PVs. |
| `sigVersion` | The signature version for requests to the OSS server. `v1` (default): uses OSS Signature Version 1. `v4` (recommended): uses OSS Signature Version 4. |
If the default RRSA authentication does not meet your needs (for example, you use a non-default ServiceAccount or a third-party OIDC provider), you can modify the PV configuration to specify a custom ARN or ServiceAccount. For more information, see How do I use specified ARNs or ServiceAccounts with RRSA authentication?.
Create the PV.
kubectl create -f pv-oss-rrsa.yaml
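Optionally, confirm that the PV was created and is in the Available state before binding it (output details vary by cluster):

```shell
kubectl get pv pv-oss
```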
AccessKey method
kubectl
Create a file named `oss-secret.yaml` to store the AccessKey obtained in Step 1 as a Secret for use by the PV.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: oss-secret
  # Must be the same as the namespace where the application resides
  namespace: default
stringData:
  # Replace with the AccessKey ID you obtained
  akId: <your AccessKey ID>
  # Replace with the AccessKey secret you obtained
  akSecret: <your AccessKey Secret>
```

Create the Secret.

```shell
kubectl create -f oss-secret.yaml
```

Create a file named `pv-oss-ram.yaml`.

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  # PV name
  name: pv-oss
  # PV labels
  labels:
    alicloud-pvname: pv-oss
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadOnlyMany
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: ossplugin.csi.alibabacloud.com
    # Must be the same as the PV name (metadata.name)
    volumeHandle: pv-oss
    # Specify the Secret object that provides the AccessKey information
    nodePublishSecretRef:
      name: oss-secret
      namespace: default
    volumeAttributes:
      # Replace with the actual bucket name
      bucket: "your-bucket-name"
      url: "http://oss-cn-hangzhou-internal.aliyuncs.com"
      otherOpts: "-o umask=022 -o max_stat_cache_size=100000 -o allow_other"
      path: "/"
```

| Parameter | Description |
| --- | --- |
| `storage` | Defines the capacity of the OSS volume. This value is used only to match the PV with a PVC. |
| `accessModes` | Configures the access mode. Supports `ReadOnlyMany` and `ReadWriteMany`. If you select `ReadOnlyMany`, ossfs mounts the OSS Bucket in read-only mode. |
| `persistentVolumeReclaimPolicy` | The PV reclaim policy. OSS volumes currently support only `Retain`, meaning the PV and the data in the OSS Bucket are retained after the PVC is deleted. |
| `driver` | Defines the driver type. Must be `ossplugin.csi.alibabacloud.com` when using the Alibaba Cloud OSS CSI plug-in. |
| `nodePublishSecretRef` | Specifies the Secret that provides the AccessKey information when the PV is mounted. |
| `volumeHandle` | Must be the same as the PV name (`metadata.name`). |
| `bucket` | The OSS Bucket to be mounted. |
| `url` | The access endpoint of the OSS Bucket. Use an internal endpoint if your cluster nodes and the bucket are in the same region, or if a Virtual Private Cloud (VPC) connection is established; use a public endpoint if the mount node and the bucket are in different regions. Common formats — internal: `http://oss-{{regionName}}-internal.aliyuncs.com` or `https://oss-{{regionName}}-internal.aliyuncs.com` (the internal format `vpc100-oss-{{regionName}}.aliyuncs.com` is deprecated; switch to the new format as soon as possible); public: `http://oss-{{regionName}}.aliyuncs.com` or `https://oss-{{regionName}}.aliyuncs.com`. |
| `otherOpts` | Custom parameters for the OSS volume, in the format `-o *** -o ***`, such as `-o umask=022 -o max_stat_cache_size=100000 -o allow_other`. |
| `path` | Requires CSI component v1.14.8.32-c77e277b-aliyun or later. Specifies the mount path relative to the bucket's root directory. Defaults to `/`, which mounts the entire bucket. If the ossfs version is earlier than 1.91, the specified `path` must already exist in the OSS Bucket. For details, see New features in ossfs 1.91 and later. |
| `sigVersion` | The signature version for requests to the OSS server. `v1` (default): uses OSS Signature Version 1. `v4` (recommended): uses OSS Signature Version 4. |
Create the PV.
kubectl create -f pv-oss-ram.yaml
Console
Store the AccessKey obtained in Step 1 as a secret for the PV to use.
On the Clusters page, find the cluster you want and click its name. In the left-side pane, choose Configurations > Secrets.
Click Create From YAML and follow the on-screen instructions to create a secret.
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: oss-secret
  # Must be the same as the namespace where the application resides
  namespace: default
stringData:
  # Replace with the AccessKey ID you obtained
  akId: <your AccessKey ID>
  # Replace with the AccessKey secret you obtained
  akSecret: <your AccessKey Secret>
```

On the Clusters page, find the cluster you want and click its name. In the left-side pane, choose Volumes > PersistentVolumes.
On the PersistentVolumes page, click Create. Set Volume Type to OSS, configure the parameters, and then submit them.
The following table lists the key parameters.
| Parameter | Description |
| --- | --- |
| Total Capacity | The capacity of the volume to create. |
| Access Mode | Configures the access mode. Supports `ReadOnlyMany` and `ReadWriteMany`. If you select `ReadOnlyMany`, ossfs mounts the OSS Bucket in read-only mode. |
| Access Credential | The Secret used to access OSS. This is the AccessKey ID and AccessKey secret obtained in Step 1. |
| Optional Parameters | Custom parameters for the OSS volume, in the format `-o *** -o ***`, such as `-o umask=022 -o max_stat_cache_size=100000 -o allow_other`. |
| Bucket ID | The OSS Bucket to use. Only buckets that the configured AccessKey can access are displayed. |
| OSS Path | Requires CSI component v1.14.8.32-c77e277b-aliyun or later. Specifies the mount path relative to the bucket's root directory. Defaults to `/`, which mounts the entire bucket. If the ossfs version is earlier than 1.91, the specified path must already exist in the OSS Bucket. For details, see New features in ossfs 1.91 and later. |
| Endpoint | The access endpoint of the OSS Bucket. Use an internal endpoint if your cluster nodes and the bucket are in the same region, or if a Virtual Private Cloud (VPC) connection is established; use a public endpoint if the mount node and the bucket are in different regions. Common formats — internal: `http://oss-{{regionName}}-internal.aliyuncs.com` or `https://oss-{{regionName}}-internal.aliyuncs.com` (the internal format `vpc100-oss-{{regionName}}.aliyuncs.com` is deprecated; switch to the new format as soon as possible); public: `http://oss-{{regionName}}.aliyuncs.com` or `https://oss-{{regionName}}.aliyuncs.com`. |
When you access over an internal network, the HTTP protocol is used by default. To use HTTPS, use the kubectl method.
Step 3: Create a PVC
Create a PersistentVolumeClaim (PVC) to request the persistent storage capacity needed by your application.
kubectl
Create a file named `pvc-oss.yaml`.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  # PVC name
  name: pvc-oss
  namespace: default
spec:
  # Configure the access mode. ReadOnlyMany indicates that ossfs mounts the OSS Bucket in read-only mode.
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      # Declare the storage capacity. This cannot be greater than the total volume capacity.
      storage: 10Gi
  selector:
    matchLabels:
      # Match the PV by its label
      alicloud-pvname: pv-oss
```

Create the PVC.

```shell
kubectl create -f pvc-oss.yaml
```

Check the PVC status.

```shell
kubectl get pvc pvc-oss
```

The output shows that the PVC is `Bound` to the PV.

```
NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
pvc-oss   Bound    pv-oss   10Gi       ROX                           <unset>                 6s
```
Console
On the Clusters page, find the cluster you want and click its name. In the left-side pane, choose Volumes > PersistentVolumeClaims.
On the PersistentVolumeClaims page, click Create. Select OSS as the PVC Type and configure the parameters as prompted.
The following table lists the key parameters.
| Parameter | Description |
| --- | --- |
| Provisioning Mode | Select Use Existing PersistentVolume. If you have not created a PV, you can set the Provisioning Mode to Create PersistentVolume and configure the PV parameters. |
| Total Capacity | The capacity of the PVC, which cannot exceed the capacity of the PV. |
Step 4: Create an application and mount the volume
Reference the PVC in your application to complete the mount.
kubectl
Create a file named `oss-static.yaml`.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: oss-static
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: anolis-registry.cn-zhangjiakou.cr.aliyuncs.com/openanolis/nginx:1.14.1-8.6
        ports:
        - containerPort: 80
        volumeMounts:
        # The mount path in the container
        - name: pvc-oss
          mountPath: "/data"
        # Configure a health check
        livenessProbe:
          exec:
            command:
            - ls
            - /data
          initialDelaySeconds: 30
          periodSeconds: 30
      volumes:
      - name: pvc-oss
        persistentVolumeClaim:
          # Reference the PVC you created
          claimName: pvc-oss
```

Create the application.

```shell
kubectl create -f oss-static.yaml
```

Verify the mount result.

Confirm that the Pods are in the `Running` state.

```shell
kubectl get pod -l app=nginx
```

Enter a Pod and inspect the mount point.

```shell
kubectl exec -it <pod-name> -- ls /data
```

The output should show the data from the OSS mount path.
Console
On the Clusters page, find the cluster you want and click its name. In the left-side pane, choose Workloads > Deployments.
In the upper-right corner of the Deployments page, click Create from Image.
Configure the application parameters as prompted.
The key parameters are described below. You can keep the default values for other parameters. For details, see Create a stateless workload (Deployment).
| Configuration Page | Parameter | Description |
| --- | --- | --- |
| Basic Information | Number Of Replicas | The number of replicas for the Deployment. |
| Container Configuration | Image Name | The address of the image used to deploy the application, such as `anolis-registry.cn-zhangjiakou.cr.aliyuncs.com/openanolis/nginx:1.14.1-8.6`. |
| Container Configuration | Required Resources | The required vCPU and memory resources. |
| Container Configuration | Volume | Click Add Cloud Storage Claim and configure the parameters. Mount Source: select the PVC you created earlier. Container Path: enter the path inside the container where the OSS volume should be mounted, such as `/data`. |
| Labels And Annotations | Pod Label | For example, a label with the name `app` and value `nginx`. |

Check the application deployment status.
On the Deployments page, click the application name. On the Pods tab, confirm that the Pods are running normally (Status is Running).
Step 5: Verify shared and persistent storage
Verify shared storage
Create a file in one Pod and then view it in another to verify the shared storage feature.
View the Pod information and get the Pod names from the output.

```shell
kubectl get pod -l app=nginx
```

Create a file named `tmpfile` in one of the Pods. For a Pod named `oss-static-66fbb85b67-d****`:

ReadWriteMany: create a `tmpfile` file in the `/data` path.

```shell
kubectl exec oss-static-66fbb85b67-d**** -- touch /data/tmpfile
```

ReadOnlyMany: upload `tmpfile` to the corresponding path in the OSS Bucket using the OSS console or a command-line copy, as sketched below.
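A minimal upload sketch for the ReadOnlyMany case using the ossutil tool, assuming ossutil is installed and configured with credentials that can write to the bucket; `<your-bucket-name>` is a placeholder:

```shell
# Create a local file and upload it to the mounted path of the bucket.
touch tmpfile
ossutil cp tmpfile oss://<your-bucket-name>/tmpfile
```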
View the file from the mount path of another Pod.

For a Pod named `oss-static-66fbb85b67-l****` with a mount path of `/data`:

```shell
kubectl exec oss-static-66fbb85b67-l**** -- ls /data | grep tmpfile
```

The output `tmpfile` confirms that the Pods share data.

```
tmpfile
```

If you do not see the expected output, confirm that your CSI component version is v1.20.7 or later.
Verify persistent storage
Delete and recreate a Pod, then check if the file still exists in the new Pod to verify data persistence.
Delete an application Pod to trigger a rebuild.

```shell
kubectl delete pod oss-static-66fbb85b67-d****
```

Check the Pods and wait for the new Pod to start and enter the `Running` state.

```shell
kubectl get pod -l app=nginx
```

Check for the file in the `/data` path. For a new Pod named `oss-static-66fbb85b67-z****` with a mount path of `/data`:

```shell
kubectl exec oss-static-66fbb85b67-z**** -- ls /data | grep tmpfile
```

The output `tmpfile` confirms that the file still exists, indicating that the data is persisted.

```
tmpfile
```
Important considerations
Data integrity risks
Concurrent write consistency risk: To improve write stability, we recommend upgrading the CSI components to v1.28 or later. However, for single-file concurrent write scenarios, OSS's "overwrite upload" feature can still lead to data being overwritten. You must ensure data consistency at the application layer.
Data synchronization and accidental deletion risk: When a volume is mounted, any file deletions or modifications in the mount path on the application Pod or host node synchronize directly with the source files in the OSS Bucket. To prevent accidental data loss, we recommend enabling Versioning for your OSS Bucket.
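Versioning can be enabled in the OSS console or from the command line. A minimal sketch with ossutil 1.x, assuming sufficient bucket permissions; `<your-bucket-name>` is a placeholder:

```shell
# Enable versioning so that overwritten or deleted objects can be recovered.
ossutil bucket-versioning --method put oss://<your-bucket-name> enabled
```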
Application stability risks
Out of Memory (OOM) risk: When performing a `readdir` operation (such as the `ls` command in a shell script) on a large number of files for the first time (for example, more than 100,000, depending on node memory), ossfs may consume a large amount of memory by loading all metadata at once. This can trigger an Out of Memory (OOM) error, killing the process and making the mount point unavailable. To mitigate this risk, mount a subdirectory of the OSS Bucket (see the sketch below) or optimize the directory structure.
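For example, to mount only a subdirectory instead of the whole bucket, set the documented `path` parameter in the PV's `volumeAttributes`; this fragment is illustrative, and `/my-subdirectory` is a placeholder:

```yaml
volumeAttributes:
  bucket: "your-bucket-name"
  # Mount only this subdirectory to limit the metadata ossfs loads on readdir
  path: /my-subdirectory
```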
Increased mount time: Configuring `securityContext.fsGroup` in your application causes kubelet to recursively change file permissions (chmod/chown) when mounting the volume. With a large number of files, this significantly increases mount time and can cause severe Pod startup delays. If you need to configure this parameter and reduce mount time, see Increased mount time for OSS volumes.
Key invalidation risk (AccessKey authentication): If the AccessKey becomes invalid or its permissions change, the application immediately loses access.
To restore access, you must update the credentials in the Secret and restart the application Pod to force a remount, which will cause a service interruption. Perform this operation during a maintenance window. For details, see Solutions.
Cost risks
Part costs: ossfs uploads files larger than 10 MB in parts (multipart upload). If an upload is unexpectedly interrupted (for example, by an application restart), you must delete the incomplete parts manually or through lifecycle rules. This prevents incomplete parts from occupying storage space and incurring costs; a cleanup sketch follows.
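Incomplete parts can be cleaned up automatically with a bucket lifecycle rule. A minimal sketch using the OSS lifecycle XML format and ossutil 1.x, assuming a bucket-wide rule and a 7-day threshold are acceptable; `<your-bucket-name>` is a placeholder:

```shell
# Write a lifecycle rule that aborts multipart uploads older than 7 days.
cat > lifecycle.xml <<'EOF'
<LifecycleConfiguration>
  <Rule>
    <ID>abort-stale-multipart-uploads</ID>
    <Prefix></Prefix>
    <Status>Enabled</Status>
    <AbortMultipartUpload>
      <Days>7</Days>
    </AbortMultipartUpload>
  </Rule>
</LifecycleConfiguration>
EOF
# Apply the rule to the bucket.
ossutil lifecycle --method put oss://<your-bucket-name> lifecycle.xml
```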
Related documentation
You can manage OSS volumes through Container Network File System (CNFS) to improve performance and QoS control. For details, see Manage the lifecycle of OSS volumes.
To protect sensitive data at rest in OSS, we recommend enabling Server-Side Encryption. For details, see Encrypt ossfs 1.0 volumes.
For frequently asked questions about ossfs and OSS, see ossfs 1.0 (default) and ossfs 1.0 volume FAQ.
Enable container storage monitoring and configure alerts to promptly detect volume anomalies or performance bottlenecks.
ossfs 1.0 provides more reliable data consistency for random and concurrent write scenarios than ossfs 2.0. However, ossfs 2.0 offers better performance for sequential read and write scenarios.