If your applications need to store unstructured data, such as images, audio files, and video files, you can mount Object Storage Service (OSS) volumes to your applications as persistent volumes (PVs). This topic describes how to mount a statically provisioned OSS volume to an application and how to verify that the OSS volume can be used to share and persist data.
Background information
OSS is a secure, cost-effective, high-capacity, and highly reliable cloud storage service provided by Alibaba Cloud. OSS is suitable for storing unstructured data that is not frequently modified, such as images, audio files, and video files. For more information, see Storage overview.
OSS volume clients
OSS volumes can be mounted locally as a file system using a client based on Filesystem in Userspace (FUSE). Compared to traditional local storage and block storage, FUSE-based clients have some limitations in terms of POSIX compatibility. ACS supports the following OSS volume clients.
| Scenario | Client | Type | Description |
| --- | --- | --- | --- |
| Most scenarios, such as read/write workloads or scenarios that require user permission configuration. | ossfs 1.0 | FUSE | Supports most POSIX operations, including append writes, random writes, and user permission settings. |
| Read-only or sequential append-only write scenarios, such as AI training, inference, big data processing, and autonomous driving. | ossfs 2.0 | FUSE | Supports full reads and sequential append writes, and can significantly improve data read performance in read-intensive scenarios. ossfs 2.0 currently supports only GPU computing power. To use CPU computing power, submit a ticket to apply. |
If you are unsure about the read and write model of your application, use ossfs 1.0. ossfs 1.0 offers better POSIX compatibility and ensures stable application operations.
For scenarios where read and write operations can be separated, such as when reads and writes are not performed at the same time or target different files (for example, checkpoint saving or persistent log writing), you can use different volumes for each path: an ossfs 2.0 volume for the read-only path and an ossfs 1.0 volume for the write path, as sketched below.
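The following is a minimal sketch of this split-mount pattern. It assumes two PVCs, `oss-pvc-ro` (bound to an ossfs 2.0 PV) and `oss-pvc-rw` (bound to an ossfs 1.0 PV), created the same way as the PVCs in the steps later in this topic; the pod name, PVC names, and paths are illustrative only.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: train-job                 # hypothetical pod name
spec:
  containers:
    - name: train
      image: registry.cn-hangzhou.aliyuncs.com/acs-sample/nginx:latest
      volumeMounts:
        - name: dataset           # read-only path served by an ossfs 2.0 volume
          mountPath: /data/input
          readOnly: true
        - name: checkpoints       # write path served by an ossfs 1.0 volume
          mountPath: /data/output
  volumes:
    - name: dataset
      persistentVolumeClaim:
        claimName: oss-pvc-ro     # PVC bound to a PV with fuseType: ossfs2
    - name: checkpoints
      persistentVolumeClaim:
        claimName: oss-pvc-rw     # PVC bound to an ossfs 1.0 PV
```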
POSIX API support
The support for common POSIX APIs differs between ossfs 1.0 and ossfs 2.0. For the detailed comparison table, see POSIX API support.
Performance benchmarks
ossfs 2.0 provides significant performance improvements over ossfs 1.0 in sequential reads and writes and in concurrent reads of small files.

- Sequential write performance: In single-threaded sequential writes of large files, ossfs 2.0 increases bandwidth by nearly 18 times compared with ossfs 1.0.
- Sequential read performance:
  - In single-threaded sequential reads of large files, ossfs 2.0 increases bandwidth by about 8.5 times compared with ossfs 1.0.
  - In multi-threaded (4 threads) sequential reads of large files, ossfs 2.0 increases bandwidth by more than 5 times compared with ossfs 1.0.
- Concurrent small-file read performance: In high-concurrency (128 threads) reads of small files, ossfs 2.0 increases bandwidth by more than 280 times compared with ossfs 1.0.
If the read and write performance, such as latency and throughput, does not meet your requirements, see Best practices for optimizing the performance of OSS volumes.
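If you want to measure throughput yourself before tuning, you can run a quick sequential-read test from inside a pod that has the volume mounted. The following sketch uses the common fio tool; it assumes fio is installed in the container image and that the volume is mounted at /data. It is an illustration, not part of the official procedure.

```bash
# Lay out a 1 GiB test file on the mounted OSS volume, then read it
# sequentially in 1 MiB blocks and report the achieved bandwidth.
fio --name=seqread --rw=read --bs=1M --size=1G --filename=/data/fio-testfile
```

Running the same test against an ossfs 1.0 volume and an ossfs 2.0 volume mounted from the same bucket lets you compare the two clients under your own workload.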
Prerequisites
The managed-csiprovisioner component is installed in the ACS cluster.
To check, go to the ACS cluster management page in the ACS console. In the left-side navigation pane of the cluster management page, choose . On the Storage tab, check whether managed-csiprovisioner is installed.
Usage notes
The following notes apply mainly to general read and write scenarios (ossfs 1.0). They are generally not applicable to the ossfs 2.0 client because it supports only some POSIX operations (mainly read operations).
ACS supports only statically provisioned OSS volumes. Dynamically provisioned OSS volumes are not supported.
Random or append writes involve creating a new file locally and re-uploading it to the OSS server. Because of the storage characteristics of OSS, note the following:
The rename operation for files and folders is not atomic.
Avoid concurrent writes or performing operations such as compression and decompression directly in the mount path.
Important: In multi-write scenarios, you must coordinate the behavior of each client yourself. ACS does not guarantee data consistency for metadata or data issues caused by write operations.
In addition, note the following limitations:
Hard links are not supported.
You cannot mount buckets with a StorageClass of Archive Storage, Cold Archive, or Deep Cold Archive.
For ossfs 1.0 volumes, the readdir operation by default sends a large number of headObject requests to obtain extended information about all objects in the path. When the destination path contains many files, the overall performance of ossfs may be affected. If file permissions and other properties are not critical in your read and write scenarios, you can enable the `-o readdir_optimize` parameter to optimize performance. For more information, see New readdir optimization feature.
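For example, in a statically provisioned ossfs 1.0 PV such as the one created in the steps below, the option is appended to the `otherOpts` mount parameters. This excerpt is for illustration; the full PV definition appears in Step 1.

```yaml
volumeAttributes:
  bucket: "<your OSS Bucket Name>"
  url: "<your OSS Bucket Endpoint>"
  # readdir_optimize skips the per-object headObject requests during readdir,
  # at the cost of not returning full permission and ownership metadata.
  otherOpts: "-o umask=022 -o allow_other -o readdir_optimize"
```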
Create an OSS bucket and obtain the bucket information
Create an OSS bucket.
Log on to the OSS console. In the navigation pane on the left, click Buckets.
Click Create Bucket.
In the panel that appears, configure the parameters for the OSS bucket and click Create.
The following table describes the key parameters. For more information, see Create buckets.
Parameter
Description
Bucket Name
Enter a custom name. The name must be globally unique within OSS and cannot be changed after the bucket is created. For more information about the format requirements, see the on-screen instructions.
Region
Select Region-specific and select the region where the ACS cluster resides. This allows pods in the ACS cluster to access the OSS bucket over the internal network.
(Optional) To mount a subdirectory of the OSS bucket, create the subdirectory in advance.
On the Buckets page, click the name of the destination bucket.
In the navigation pane on the left of the bucket details page, choose .
Click Create Directory to create the required directories in the OSS bucket.
Obtain the endpoint of the OSS bucket.
On the Buckets page, click the name of the destination bucket.
On the bucket details page, click the Overview tab. In the Port section, copy the endpoint.
If the OSS bucket and the ACS cluster are in the same region, copy the VPC endpoint.
If the bucket is region-agnostic or is in a different region from the ACS cluster, copy the public endpoint.
Obtain an AccessKey ID and an AccessKey secret to authorize access to OSS. For more information, see Obtain an AccessKey pair.
Note: To mount an OSS bucket that belongs to another Alibaba Cloud account, you must obtain the AccessKey pair from that account.
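If you prefer not to write the AccessKey pair into a YAML file, you can also create the Secret that the following steps reference directly from the command line. This is a sketch; the Secret name and keys match the PV examples below.

```bash
kubectl create secret generic oss-secret \
  --namespace default \
  --from-literal=akId=<your AccessKey ID> \
  --from-literal=akSecret=<your AccessKey Secret>
```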
Mount an OSS volume
ossfs 1.0 volumes
kubectl
Step 1: Create a PV
Save the following YAML content as oss-pv.yaml.
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: oss-secret
  namespace: default
stringData:
  akId: <your AccessKey ID>
  akSecret: <your AccessKey Secret>
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: oss-pv
  labels:
    alicloud-pvname: oss-pv
spec:
  storageClassName: test
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: ossplugin.csi.alibabacloud.com
    volumeHandle: oss-pv
    nodePublishSecretRef:
      name: oss-secret
      namespace: default
    volumeAttributes:
      bucket: "<your OSS Bucket Name>"
      url: "<your OSS Bucket Endpoint>"
      otherOpts: "-o umask=022 -o allow_other"
```

Note: The preceding YAML creates a Secret and a PV. The Secret stores the AccessKey pair so that the PV can use it securely. Replace the value of `akId` with your AccessKey ID and the value of `akSecret` with your AccessKey secret.

The following table describes the PV parameters.

| Parameter | Description |
| --- | --- |
| `alicloud-pvname` | The label of the PV. It is used to bind a PVC. |
| `storageClassName` | This configuration is used only to bind a PVC. You do not need to associate an actual StorageClass. |
| `storage` | The storage capacity of the OSS volume. Note: the capacity of a statically provisioned OSS volume is for declaration purposes only and does not limit the actual capacity. The available capacity is subject to the amount displayed in the OSS console. |
| `accessModes` | The access mode. |
| `persistentVolumeReclaimPolicy` | The reclaim policy. |
| `driver` | The driver type. It is set to `ossplugin.csi.alibabacloud.com`, which indicates that the Alibaba Cloud OSS CSI plug-in is used. |
| `volumeHandle` | The unique identifier of the PV. It must be consistent with `metadata.name`. |
| `nodePublishSecretRef` | Obtains the AccessKey pair from the specified Secret for authorization. |
| `bucket` | The name of the OSS bucket. Replace the value with the actual name of your OSS bucket. |
| `url` | The endpoint of the OSS bucket. Replace the value with the actual endpoint of your OSS bucket. If the OSS bucket and the ACS cluster are in the same region, use the VPC endpoint, for example, `oss-cn-shanghai-internal.aliyuncs.com`. If the bucket is region-agnostic or is in a different region from the ACS cluster, use the public endpoint, for example, `oss-cn-shanghai.aliyuncs.com`. |
| `otherOpts` | Custom mount parameters for the OSS volume, in the format `-o *** -o ***`, such as `-o umask=022 -o max_stat_cache_size=100000 -o allow_other`. |

Run the following command to create the Secret and PV:

```bash
kubectl create -f oss-pv.yaml
```

Check the status of the PV:

```bash
kubectl get pv
```

Expected output:

```
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
oss-pv   20Gi       RWX            Retain           Available           test           <unset>                          9s
```
Step 2: Create a PVC
Save the following YAML content as oss-pvc.yaml.
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: oss-pvc
spec:
  storageClassName: test
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 20Gi
  selector:
    matchLabels:
      alicloud-pvname: oss-pv
```

The following table describes the parameters.

| Parameter | Description |
| --- | --- |
| `storageClassName` | This configuration is used only to bind a PV. You do not need to associate an actual StorageClass. It must be consistent with the `spec.storageClassName` of the PV. |
| `accessModes` | The access mode. |
| `storage` | The storage capacity allocated to the pod. It cannot exceed the capacity of the OSS volume. |
| `alicloud-pvname` | The label of the PV to bind. It must be consistent with the `metadata.labels.alicloud-pvname` of the PV. |

Run the following command to create the PVC:

```bash
kubectl create -f oss-pvc.yaml
```

Check the status of the PVC:

```bash
kubectl get pvc
```

The following output confirms that the PVC is bound to the PV that you created in Step 1:

```
NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
oss-pvc   Bound    oss-pv   20Gi       RWX            test           <unset>                 6s
```
Step 3: Create an application and mount the OSS volume
Save the following YAML content as oss-test.yaml.
The following YAML example creates a deployment with two pods. Both pods request storage resources through the PVC named `oss-pvc` and mount it at `/data`.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: oss-test
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: registry.cn-hangzhou.aliyuncs.com/acs-sample/nginx:latest
          ports:
            - containerPort: 80
          volumeMounts:
            - name: pvc-oss
              mountPath: /data
      volumes:
        - name: pvc-oss
          persistentVolumeClaim:
            claimName: oss-pvc
```

Run the following command to create the deployment and mount the OSS volume:

```bash
kubectl create -f oss-test.yaml
```

Check the status of the pods in the deployment:

```bash
kubectl get pod | grep oss-test
```

The following example output shows that two pods are created:

```
oss-test-****-***a   1/1   Running   0     28s
oss-test-****-***b   1/1   Running   0     28s
```

View the mount path. The following command is an example. By default, the directory is empty and no output is returned:

```bash
kubectl exec oss-test-****-***a -- ls /data
```
Console
Step 1: Create a PV
Log on to the ACS console.
On the Clusters page, click the name of the cluster to go to the cluster management page.
In the left-side navigation pane of the cluster management page, choose .
On the Persistent Volumes page, click Create.
In the Create Persistent Volume dialog box, configure the parameters and click Create.
| Parameter | Description | Example |
| --- | --- | --- |
| PV Type | Select OSS. | OSS |
| Name | Enter a custom name for the PV. For more information about the format requirements, see the on-screen instructions. | oss-pv |
| Capacity | The storage capacity of the OSS volume. Note: the capacity of a statically provisioned OSS volume is for declaration purposes only and does not limit the actual capacity. The available capacity is subject to the amount displayed in the OSS console. | 20 Gi |
| Access Mode | Select one of the following options as needed: ReadOnlyMany (the volume can be mounted by multiple pods in read-only mode) or ReadWriteMany (the volume can be mounted by multiple pods in read/write mode). | ReadWriteMany |
| Access Certificate | To ensure security, save the AccessKey information in a secret. This topic uses Create Secret as an example. | Create Secret (Namespace: default; Name: oss-secret; AccessKey ID: ********; AccessKey Secret: ********) |
| Bucket ID | Select an OSS bucket. | oss-acs-*** |
| OSS Path | The directory to mount. The root directory (`/`) is mounted by default. You can mount a subdirectory (such as `/dir`) as needed. Make sure that the subdirectory already exists. | / |
| Endpoint | The endpoint of the OSS bucket. If the OSS bucket and the ACS cluster are in the same region, select Internal Endpoint. If the bucket is region-agnostic or is in a different region from the ACS cluster, select Public Endpoint. | Internal Endpoint |
After the PV is created, you can view its information on the Persistent Volumes page. The PV is not yet bound to a PVC.
Step 2: Create a PVC
In the left-side navigation pane of the cluster management page, choose .
On the Persistent Volume Claims page, click Create.
In the Create Persistent Volume Claim dialog box, configure the parameters and click Create.
| Parameter | Description | Example |
| --- | --- | --- |
| PVC Type | Select OSS. | OSS |
| Name | Enter a custom name for the PVC. For more information about the format requirements, see the on-screen instructions. | oss-pvc |
| Allocation Mode | Select Existing Volume. | Existing Volume |
| Existing Volume | Select the PV that you created earlier. | oss-pv |
| Total | The storage capacity allocated to the pod. It cannot exceed the capacity of the OSS volume. | 20 Gi |
After the PVC is created, you can view its details on the Persistent Volume Claims page. The PVC is bound to the corresponding Persistent Volume (PV), which is the OSS volume.
Step 3: Create an application and mount the OSS volume
In the left-side navigation pane of the cluster management page, choose .
On the Stateless page, click Create From Image.
Configure the parameters for the deployment and click Create.
The following table describes the key parameters. Keep the default values for other parameters. For more information, see Create a stateless application from a Deployment.
| Configuration page | Parameter | Description | Example |
| --- | --- | --- | --- |
| Basic Information | Application Name | Enter a custom name for the deployment. For more information about the format requirements, see the on-screen instructions. | oss-test |
| Basic Information | Number Of Replicas | Configure the number of replicas for the deployment. | 2 |
| Container Configuration | Image Name | Enter the address of the image used to deploy the application. | registry.cn-hangzhou.aliyuncs.com/acs-sample/nginx:latest |
| Container Configuration | Required Resources | Set the required vCPU and memory resources. | 0.25 vCPU, 0.5 GiB |
| Container Configuration | Volume | Click Add Cloud Storage Claim and configure the parameters. Mount Source: select the PVC that you created earlier. Container Path: enter the container path to which you want to mount the OSS bucket. | Mount Source: oss-pvc; Container Path: /data |
Check the status of the application deployment.
On the Stateless page, click the application name.
On the Pods tab, confirm that the pods are in the Running state.
ossfs 2.0 volumes
You can mount statically provisioned ossfs 2.0 volumes only using kubectl. This operation is not supported in the ACS console.
Step 1: Create a PV
Save the following YAML content as oss-pv.yaml.
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: oss-secret
  namespace: default
stringData:
  akId: <your AccessKey ID>
  akSecret: <your AccessKey Secret>
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: oss-pv
  labels:
    alicloud-pvname: oss-pv
spec:
  storageClassName: test
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: ossplugin.csi.alibabacloud.com
    volumeHandle: oss-pv
    nodePublishSecretRef:
      name: oss-secret
      namespace: default
    volumeAttributes:
      fuseType: ossfs2  # Explicitly declares the use of the ossfs 2.0 client
      bucket: "<your OSS Bucket Name>"
      url: "<your OSS Bucket Endpoint>"
      otherOpts: "-o close_to_open=false"  # Note: the supported mount parameters are not compatible with the ossfs 1.0 client.
```

Note: The preceding YAML creates a Secret and a PV. The Secret stores the AccessKey pair so that the PV can use it securely. Replace the value of `akId` with your AccessKey ID and the value of `akSecret` with your AccessKey secret.

The following table describes the PV parameters.

| Parameter | Description |
| --- | --- |
| `alicloud-pvname` | The label of the PV. It is used to bind a PVC. |
| `storageClassName` | This configuration is used only to bind a PVC. You do not need to associate an actual StorageClass. |
| `storage` | The storage capacity of the OSS volume. Note: the capacity of a statically provisioned OSS volume is for declaration purposes only and does not limit the actual capacity. The available capacity is subject to the amount displayed in the OSS console. |
| `accessModes` | The access mode. |
| `persistentVolumeReclaimPolicy` | The reclaim policy. |
| `driver` | The driver type. It is set to `ossplugin.csi.alibabacloud.com`, which indicates that the Alibaba Cloud OSS CSI plug-in is used. |
| `volumeHandle` | The unique identifier of the PV. It must be consistent with `metadata.name`. |
| `nodePublishSecretRef` | Obtains the AccessKey pair from the specified Secret for authorization. |
| `fuseType` | Specifies the client used for the mount. Must be set to `ossfs2` to use the ossfs 2.0 client. |
| `bucket` | The name of the OSS bucket. Replace the value with the actual name of your OSS bucket. |
| `url` | The endpoint of the OSS bucket. Replace the value with the actual endpoint of your OSS bucket. If the OSS bucket and the ACS cluster are in the same region, use the VPC endpoint, for example, `oss-cn-shanghai-internal.aliyuncs.com`. If the bucket is region-agnostic or is in a different region from the ACS cluster, use the public endpoint, for example, `oss-cn-shanghai.aliyuncs.com`. |
| `otherOpts` | Custom mount parameters for the OSS volume, in the format `-o *** -o ***`. For example, `-o close_to_open=false`. close_to_open is disabled by default. If it is enabled, the system sends a GetObjectMeta request to OSS each time a file is opened to obtain the latest metadata of the file in OSS, which keeps metadata up to date; however, in scenarios that read many small files, frequent metadata queries significantly increase access latency. For more information about optional parameters, see ossfs 2.0 mount options. |

Run the following command to create the Secret and PV:

```bash
kubectl create -f oss-pv.yaml
```

Check the status of the PV:

```bash
kubectl get pv
```

Expected output:

```
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
oss-pv   20Gi       RWX            Retain           Available           test           <unset>                          9s
```
Step 2: Create a PVC
Save the following YAML content as oss-pvc.yaml.
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: oss-pvc
spec:
  storageClassName: test
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 20Gi
  selector:
    matchLabels:
      alicloud-pvname: oss-pv
```

The parameters are described in the following table.

| Parameter | Description |
| --- | --- |
| `storageClassName` | This configuration is used only to bind a PV. You do not need to associate an actual StorageClass. It must be consistent with the `spec.storageClassName` of the PV. |
| `accessModes` | The access mode. |
| `storage` | The storage capacity allocated to the pod. It cannot exceed the capacity of the OSS volume. |
| `alicloud-pvname` | The label of the PV to bind. It must be consistent with the `metadata.labels.alicloud-pvname` of the PV. |

Run the following command to create the PVC:

```bash
kubectl create -f oss-pvc.yaml
```

Check the status of the PVC:

```bash
kubectl get pvc
```

The following output indicates that the PVC is bound to the PV that you created in Step 1:

```
NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
oss-pvc   Bound    oss-pv   20Gi       RWX            test           <unset>                 6s
```
Step 3: Create an application and mount the OSS volume
Save the following YAML content as oss-test.yaml.
The following YAML example creates a deployment with two pods. Both pods request storage resources through the PVC named `oss-pvc` and mount it at `/data`.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: oss-test
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: registry.cn-hangzhou.aliyuncs.com/acs-sample/nginx:latest
          ports:
            - containerPort: 80
          volumeMounts:
            - name: pvc-oss
              mountPath: /data
      volumes:
        - name: pvc-oss
          persistentVolumeClaim:
            claimName: oss-pvc
```

Run the following command to create the deployment and mount the OSS volume:

```bash
kubectl create -f oss-test.yaml
```

Check the status of the pods in the deployment:

```bash
kubectl get pod | grep oss-test
```

The following example output shows that two pods are created:

```
oss-test-****-***a   1/1   Running   0     28s
oss-test-****-***b   1/1   Running   0     28s
```

View the mount path. The following command is an example. By default, the directory is empty and no output is returned:

```bash
kubectl exec oss-test-****-***a -- ls /data
```
Verify that the OSS volume can share and persist data
The deployment that you created provisions two pods. The same OSS bucket is mounted to both pods. You can use the following methods to verify that the OSS volume can be used to share and persist data:
Create a file in one pod and check whether the file can be accessed from the other pod. If it can, the volume supports data sharing.
Restart the pods of the deployment. Then access the OSS volume from a new pod to check whether the original data still exists in the OSS bucket. If it does, the volume persists data.
View the pod information:

```bash
kubectl get pod | grep oss-test
```

Sample output:

```
oss-test-****-***a   1/1   Running   0     40s
oss-test-****-***b   1/1   Running   0     40s
```

Verify the shared storage.

Create a file in one of the pods. In this example, the pod named `oss-test-****-***a` is used:

```bash
kubectl exec oss-test-****-***a -- touch /data/test.txt
```

View the file from the other pod. In this example, the pod named `oss-test-****-***b` is used:

```bash
kubectl exec oss-test-****-***b -- ls /data
```

The following output shows that the new file `test.txt` is shared:

```
test.txt
```

Verify that data persists after the pods are recreated.

Restart the deployment so that its pods are deleted and recreated:

```bash
kubectl rollout restart deploy oss-test
```

View the pods and wait for the new pods to be created:

```bash
kubectl get pod | grep oss-test
```

Sample output:

```
oss-test-****-***c   1/1   Running   0     67s
oss-test-****-***d   1/1   Running   0     49s
```

From a new pod, check whether the data still exists. In this example, the pod named `oss-test-****-***c` is used:

```bash
kubectl exec oss-test-****-***c -- ls /data
```

The following output shows that the data in the OSS bucket still exists and can be retrieved from the mount directory in the new pod:

```
test.txt
```