
Container Compute Service: Mount a statically provisioned OSS volume

Last Updated: Mar 26, 2026

If your applications need to store unstructured data such as images, audio, and video, mount an Object Storage Service (OSS) bucket as a persistent volume (PV) in your ACS cluster. This topic shows you how to mount a statically provisioned OSS volume using kubectl or the ACS console, and how to verify that the volume supports data sharing and persistence across pods.

Background

OSS is a secure, cost-effective, high-capacity, and highly reliable cloud storage service. It is designed for unstructured data that is not frequently modified. For more information, see Storage overview.

OSS volumes are mounted as a local file system using a Filesystem in Userspace (FUSE) client. Because FUSE-based clients sit between the application and OSS object APIs, they have inherent POSIX compatibility limitations. ACS supports two clients.

Choose a client

| Scenario | Client | Description |
| --- | --- | --- |
| General read/write, or scenarios requiring user permission configuration | ossfs 1.0 | Supports most POSIX operations, including random writes, append writes, and user permission settings. |
| Read-intensive workloads such as AI training, inference, big data, and autonomous driving | ossfs 2.0 | Optimized for sequential reads and append writes; significantly higher throughput than ossfs 1.0. Currently supports GPU computing power only; to use CPU computing power, submit a ticket. |

When you're unsure, use ossfs 1.0. It provides broader POSIX compatibility and more stable operation across general workloads.

For workloads with separated read and write operations (for example, breakpoint saving or persistent log writing where reads and writes don't overlap), use two volumes: an ossfs 2.0 volume for the read-only path and an ossfs 1.0 volume for the write path.
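As a minimal sketch of this two-volume pattern, the pod spec below mounts a read-only dataset volume and a writable checkpoint volume side by side. The PVC names oss-pvc-read (assumed bound to an ossfs 2.0 PV) and oss-pvc-write (assumed bound to an ossfs 1.0 PV) are illustrative, not names this topic creates:

```yaml
# Illustrative pod: dataset reads go through an ossfs 2.0 volume,
# checkpoint writes through an ossfs 1.0 volume. PVC names are examples.
apiVersion: v1
kind: Pod
metadata:
  name: train-job
spec:
  containers:
  - name: train
    image: registry.cn-hangzhou.aliyuncs.com/acs-sample/nginx:latest
    volumeMounts:
    - name: dataset
      mountPath: /mnt/dataset
      readOnly: true           # read-only path served by ossfs 2.0
    - name: checkpoints
      mountPath: /mnt/ckpt     # write path served by ossfs 1.0
  volumes:
  - name: dataset
    persistentVolumeClaim:
      claimName: oss-pvc-read   # hypothetical PVC bound to an ossfs 2.0 PV
      readOnly: true
  - name: checkpoints
    persistentVolumeClaim:
      claimName: oss-pvc-write  # hypothetical PVC bound to an ossfs 1.0 PV
```

Keeping the read and write paths on separate volumes ensures the ossfs 2.0 client never has to service the write pattern it does not support well.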

POSIX compatibility

OSS volumes cannot implement all POSIX semantics efficiently against the underlying object storage API. Operations that would require multiple API round trips (such as atomic rename), or that have no object storage equivalent (such as mutable file permissions), are either limited or unsupported.

The following table summarizes POSIX API support for both clients.

POSIX API support

| Category | Operation | ossfs 1.0 | ossfs 2.0 |
| --- | --- | --- | --- |
| Basic file operations | open | Supported | Supported |
| | flush | Supported | Supported |
| | close | Supported | Supported |
| File reads and writes | read | Supported | Supported |
| | write | Supports random writes (requires disk cache) | Sequential writes only (no disk cache required) |
| | truncate | Supported (adjustable file size) | Empties the file only |
| File metadata operations | create | Supported | Supported |
| | unlink | Supported | Supported |
| | rename | Supported | Supported |
| Directory operations | mkdir | Supported | Supported |
| | readdir | Supported | Supported |
| | rmdir | Supported | Supported |
| Permissions and properties | getattr | Supported | Supported |
| | chmod | Supported | Accepted without error; setting does not take effect |
| | chown | Supported | Accepted without error; setting does not take effect |
| | utimes | Supported | Supported |
| Extended features | setxattr | Supported | Not supported |
| | symlink | Supported | Not supported |
| | lock | Not supported | Not supported |

Performance benchmarks

ossfs 2.0 significantly outperforms ossfs 1.0 for sequential read/write and highly concurrent read workloads:

  • Sequential write (single-threaded, large files): ~18x higher bandwidth

  • Sequential read (single-threaded, large files): ~8.5x higher bandwidth

  • Sequential read (4 threads, large files): >5x higher bandwidth

  • Concurrent small file reads (128 threads): >280x higher bandwidth

If read/write performance doesn't meet your requirements, see Best practices for optimizing the performance of OSS volumes.

Prerequisites

Before you begin, ensure that you have:

  • The managed-csiprovisioner component installed in your ACS cluster. To verify, go to the cluster management page in the ACS console, then choose Operations > Add-ons and check the Storage tab.

Usage notes

The following notes apply to ossfs 1.0 (general read/write scenarios). Most do not apply to ossfs 2.0, which supports a limited subset of POSIX operations.

  • ACS supports only statically provisioned OSS volumes. Dynamically provisioned OSS volumes are not supported.

  • The rename operation for files and directories is not atomic.

  • Avoid concurrent writes, and avoid running compression or decompression operations directly in the mount path.

    Important

    In multi-writer scenarios, you must coordinate writes across clients. ACS does not guarantee data consistency for conflicts caused by concurrent writes.

  • Hard links are not supported.

  • Buckets with a StorageClass of Archive Storage, Cold Archive, or Deep Cold Archive cannot be mounted.

  • For ossfs 1.0, readdir sends a headObject request for every object in the path to retrieve extended metadata. When the path contains many files, this can degrade performance. If file permissions are not required in your scenario, enable the -o readdir_optimize parameter. For more information, see New readdir optimization feature.
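As a sketch, the readdir optimization is enabled through the PV's otherOpts. This fragment shows only the volumeAttributes section of the ossfs 1.0 PV described in Step 2, with the same placeholders used elsewhere in this topic:

```yaml
# PV volumeAttributes fragment: enable the ossfs 1.0 readdir optimization.
# With -o readdir_optimize, ossfs skips the per-object headObject calls
# during directory listing, so extended metadata such as file permissions
# is not returned.
volumeAttributes:
  bucket: "<your OSS Bucket Name>"
  url: "<your OSS Bucket Endpoint>"
  otherOpts: "-o readdir_optimize -o allow_other"
```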

Step 1: Create an OSS bucket

  1. Log on to the OSS console. In the left navigation pane, click Buckets, then click Create Bucket.

  2. Configure the bucket parameters and click Create. For all parameters, see Create buckets.

    | Parameter | Description |
    | --- | --- |
    | Bucket Name | Enter a globally unique name. The name cannot be changed after creation. |
    | Region | Select Region-specific and choose the region where your ACS cluster resides. This lets pods access the bucket over the internal network. |
  3. (Optional) To mount a subdirectory of the bucket, create it now. On the Buckets page, click the bucket name, then choose Files > Objects and click Create Directory.

  4. Get the bucket endpoint. On the Buckets page, click the bucket name, go to the Overview tab, and copy the endpoint from the Port section.

    • If the bucket and your ACS cluster are in the same region, copy the VPC endpoint.

    • If the bucket is region-agnostic or in a different region, copy the public endpoint.

  5. Get an AccessKey pair to authorize access to OSS. For details, see Obtain an AccessKey pair.

    To mount a bucket that belongs to a different Alibaba Cloud account, use the AccessKey pair from that account.
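If you plan to mount the subdirectory created in step 3 with kubectl rather than the console, the OSS CSI plugin accepts a subdirectory setting on the PV. The sketch below uses a path volume attribute; treat the attribute name as an assumption and confirm it against the CSI plugin documentation for your version:

```yaml
# PV csi-section fragment: mount the /dir subdirectory instead of the
# bucket root. The "path" attribute name is an assumption; verify it
# for your OSS CSI plugin version.
csi:
  driver: ossplugin.csi.alibabacloud.com
  volumeHandle: oss-pv
  volumeAttributes:
    bucket: "<your OSS Bucket Name>"
    url: "<your OSS Bucket Endpoint>"
    path: "/dir"
```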

Step 2: Mount an OSS volume

ossfs 1.0

You can mount an ossfs 1.0 volume using kubectl or the ACS console.

kubectl

Create a PV

  1. Save the following YAML as oss-pv.yaml. Replace <your AccessKey ID>, <your AccessKey Secret>, <your OSS Bucket Name>, and <your OSS Bucket Endpoint> with your actual values. The example uses these common otherOpts mount options:

    • umask=022: Sets file permissions to 755. Resolves permission issues for objects uploaded via the SDK or OSS console (default permission: 640). Recommended for read/write-split or multi-user access scenarios.

    • max_stat_cache_size=100000: Caches up to 100,000 object metadata entries in memory, improving ls and stat performance. Cached metadata may become stale if objects are modified via the OSS console, SDK, or ossutil. Set to 0 to disable caching, or use stat_cache_expire to reduce the expiration time.

    • allow_other: Allows users other than the mounting user to access the mount target. Useful in multi-user shared environments.

    The following table describes the PV parameters.

    | Parameter | Description | Required | Default |
    | --- | --- | --- | --- |
    | alicloud-pvname | Label used to bind a PVC to this PV | Yes | |
    | storageClassName | Used only to bind a PVC; no actual StorageClass association is required. Must match spec.storageClassName in the PVC. | Yes | |
    | storage | Declared storage capacity. For statically provisioned OSS volumes, this is for declaration only; the actual available capacity is determined by the OSS console, not this value. | Yes | |
    | accessModes | Access mode for the volume | Yes | |
    | persistentVolumeReclaimPolicy | Reclaim policy after the PVC is released | No | Retain |
    | driver | CSI driver name. Set to ossplugin.csi.alibabacloud.com for the Alibaba Cloud OSS CSI plugin. | Yes | |
    | volumeHandle | Unique identifier for the PV. Must match metadata.name. | Yes | |
    | nodePublishSecretRef | References the Secret that stores the AccessKey pair | Yes | |
    | bucket | OSS bucket name | Yes | |
    | url | OSS bucket endpoint. Use the VPC endpoint (for example, oss-cn-shanghai-internal.aliyuncs.com) if the bucket and cluster are in the same region; use the public endpoint (for example, oss-cn-shanghai.aliyuncs.com) otherwise. | Yes | |
    | otherOpts | Additional mount options in -o * -o * format. For example: -o umask=022 -o max_stat_cache_size=100000 -o allow_other. See Options supported by ossfs and ossfs 1.0 configuration best practices. | No | |
    apiVersion: v1
    kind: Secret
    metadata:
      name: oss-secret
      namespace: default
    stringData:
      akId: <your AccessKey ID>
      akSecret: <your AccessKey Secret>
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: oss-pv
      labels:
        alicloud-pvname: oss-pv
    spec:
      storageClassName: test
      capacity:
        storage: 20Gi
      accessModes:
        - ReadWriteMany
      persistentVolumeReclaimPolicy: Retain
      csi:
        driver: ossplugin.csi.alibabacloud.com
        volumeHandle: oss-pv
        nodePublishSecretRef:
          name: oss-secret
          namespace: default
        volumeAttributes:
          bucket: "<your OSS Bucket Name>"
          url: "<your OSS Bucket Endpoint>"
          otherOpts: "-o umask=022 -o allow_other"


  2. Create the Secret and PV:

    kubectl create -f oss-pv.yaml
  3. Verify the PV is available:

    kubectl get pv

    Expected output:

    NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
    oss-pv   20Gi       RWX            Retain           Available           test           <unset>                          9s

Create a PVC

  1. Save the following YAML as oss-pvc.yaml.

    | Parameter | Description | Required |
    | --- | --- | --- |
    | storageClassName | Must match spec.storageClassName in the PV | Yes |
    | accessModes | Access mode. Must match the PV. | Yes |
    | storage | Storage capacity to allocate to the pod. Cannot exceed the PV capacity. | Yes |
    | alicloud-pvname | Label selector used to bind to the PV. Must match metadata.labels.alicloud-pvname in the PV. | Yes |
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: oss-pvc
    spec:
      storageClassName: test
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 20Gi
      selector:
        matchLabels:
          alicloud-pvname: oss-pv
  2. Create the PVC:

    kubectl create -f oss-pvc.yaml
  3. Verify the PVC is bound to the PV:

    kubectl get pvc

    Expected output:

    NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
    oss-pvc   Bound    oss-pv   20Gi       RWX            test           <unset>                 6s

Create an application and mount the OSS volume

  1. Save the following YAML as oss-test.yaml. This creates a Deployment with two pods, both mounting the OSS bucket at /data.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: oss-test
      labels:
        app: nginx
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: registry.cn-hangzhou.aliyuncs.com/acs-sample/nginx:latest
            ports:
            - containerPort: 80
            volumeMounts:
              - name: pvc-oss
                mountPath: /data
          volumes:
            - name: pvc-oss
              persistentVolumeClaim:
                claimName: oss-pvc
  2. Create the Deployment:

    kubectl create -f oss-test.yaml
  3. Verify both pods are running:

    kubectl get pod | grep oss-test

    Expected output:

    oss-test-****-***a   1/1     Running   0          28s
    oss-test-****-***b   1/1     Running   0          28s
  4. View the mount path. By default, the directory is empty and no output is returned.

    kubectl exec oss-test-****-***a -- ls /data

ACS console

Create a PV

  1. Log on to the ACS console. On the Clusters page, click the cluster name to go to the cluster management page.

  2. In the left navigation pane, choose Volumes > Persistent Volumes, then click Create.

  3. Configure the parameters and click Create. After creation, the PV appears on the Persistent Volumes page with no PVC bound yet.

    | Parameter | Description | Example |
    | --- | --- | --- |
    | PV Type | Select OSS. | OSS |
    | Name | Custom name for the PV. | oss-pv |
    | Capacity | Declared storage capacity. For statically provisioned OSS volumes, the actual available capacity is determined by the OSS console. | 20 Gi |
    | Access Mode | ReadOnlyMany: mounted by multiple pods in read-only mode. ReadWriteMany: mounted by multiple pods in read/write mode. | ReadWriteMany |
    | Access Certificate | Store the AccessKey pair in a Secret. Select Create Secret and fill in the namespace, name, AccessKey ID, and AccessKey secret. | Namespace: default; Name: oss-secret |
    | Bucket ID | Select the OSS bucket to mount. | oss-acs-*** |
    | OSS Path | Directory to mount. Defaults to the root directory (/). To mount a subdirectory (for example, /dir), make sure it exists first. | / |
    | Endpoint | Select Internal Endpoint if the bucket and cluster are in the same region; select Public Endpoint otherwise. | Internal Endpoint |

Create a PVC

  1. In the left navigation pane, choose Volumes > Persistent Volume Claims, then click Create.

  2. Configure the parameters and click Create. After creation, the PVC appears on the Persistent Volume Claims page with status Bound.

    | Parameter | Description | Example |
    | --- | --- | --- |
    | PVC Type | Select OSS. | OSS |
    | Name | Custom name for the PVC. | oss-pvc |
    | Allocation Mode | Select Existing Volume. | Existing Volume |
    | Existing Volume | Select the PV created earlier. | oss-pv |
    | Total | Storage capacity to allocate to the pod. Cannot exceed the PV capacity. | 20 Gi |

Create an application and mount the OSS volume

  1. In the left navigation pane, choose Workloads > Deployments, then click Create From Image.

  2. Configure the deployment parameters and click Create. For all parameters, see Create a stateless application from a Deployment.

    | Configuration page | Parameter | Description | Example |
    | --- | --- | --- | --- |
    | Basic Information | Application Name | Custom name for the Deployment | oss-test |
    | | Number Of Replicas | Number of pod replicas | 2 |
    | Container Configuration | Image Name | Container image address | registry.cn-hangzhou.aliyuncs.com/acs-sample/nginx:latest |
    | | Required Resources | vCPU and memory resources | 0.25 vCPU, 0.5 GiB |
    | | Volume | Click Add Cloud Storage Claim to add a PVC. Set Mount Source to the PVC created earlier and Container Path to the mount path. | Mount Source: oss-pvc; Container Path: /data |
  3. On the Stateless page, click the application name. On the Pods tab, verify that all pods are in the Running state.

ossfs 2.0

Statically provisioned ossfs 2.0 volumes can only be mounted using kubectl. The ACS console does not support this operation.

Create a PV

  1. Save the following YAML as oss-pv.yaml. Replace <your AccessKey ID>, <your AccessKey Secret>, <your OSS Bucket Name>, and <your OSS Bucket Endpoint> with your actual values.

    | Parameter | Description | Required | Default |
    | --- | --- | --- | --- |
    | fuseType | Specifies the FUSE client. Must be set to ossfs2 to use ossfs 2.0. | Yes | |
    | otherOpts | Mount options in -o * -o * format. The supported options differ from ossfs 1.0 and are not compatible. For example: -o close_to_open=false. See ossfs 2.0 mount options. | No | |
    apiVersion: v1
    kind: Secret
    metadata:
      name: oss-secret
      namespace: default
    stringData:
      akId: <your AccessKey ID>
      akSecret: <your AccessKey Secret>
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: oss-pv
      labels:
        alicloud-pvname: oss-pv
    spec:
      storageClassName: test
      capacity:
        storage: 20Gi
      accessModes:
        - ReadWriteMany
      persistentVolumeReclaimPolicy: Retain
      csi:
        driver: ossplugin.csi.alibabacloud.com
        volumeHandle: oss-pv
        nodePublishSecretRef:
          name: oss-secret
          namespace: default
        volumeAttributes:
          fuseType: ossfs2  # Declares the use of the ossfs 2.0 client
          bucket: "<your OSS Bucket Name>"
          url: "<your OSS Bucket Endpoint>"
          otherOpts: "-o close_to_open=false"  # Mount parameters differ from ossfs 1.0

    The ossfs 2.0 PV shares most parameters with ossfs 1.0; see the ossfs 1.0 PV parameter table above. The key difference is close_to_open. When this option is enabled, ossfs 2.0 sends a GetObjectMeta request each time a file is opened to fetch the latest metadata from OSS. This keeps metadata up to date but increases latency when reading many small files, which is why the example sets -o close_to_open=false.

  2. Create the Secret and PV:

    kubectl create -f oss-pv.yaml
  3. Verify the PV is available:

    kubectl get pv

    Expected output:

    NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
    oss-pv   20Gi       RWX            Retain           Available           test           <unset>                          9s

Create a PVC

The PVC YAML for ossfs 2.0 is identical to ossfs 1.0. Save the following as oss-pvc.yaml and apply it.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: oss-pvc
spec:
  storageClassName: test
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 20Gi
  selector:
    matchLabels:
      alicloud-pvname: oss-pv

Create the PVC:

kubectl create -f oss-pvc.yaml

Verify the PVC is bound:

kubectl get pvc

Expected output:

NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
oss-pvc   Bound    oss-pv   20Gi       RWX            test           <unset>                 6s

Create an application and mount the OSS volume

  1. Save the following YAML as oss-test.yaml and create the Deployment:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: oss-test
      labels:
        app: nginx
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: registry.cn-hangzhou.aliyuncs.com/acs-sample/nginx:latest
            ports:
            - containerPort: 80
            volumeMounts:
              - name: pvc-oss
                mountPath: /data
          volumes:
            - name: pvc-oss
              persistentVolumeClaim:
                claimName: oss-pvc

    Then create the Deployment:

    kubectl create -f oss-test.yaml
  2. Verify both pods are running:

    kubectl get pod | grep oss-test

    Expected output:

    oss-test-****-***a   1/1     Running   0          28s
    oss-test-****-***b   1/1     Running   0          28s
  3. View the mount path. By default, the directory is empty and no output is returned.

    kubectl exec oss-test-****-***a -- ls /data

Verify data sharing and persistence

The Deployment provisions two pods that share the same OSS bucket. Use the following steps to confirm the volume supports both data sharing across pods and persistence across pod restarts.

Verify shared storage

  1. Get the pod names:

    kubectl get pod | grep oss-test

    Sample output:

    oss-test-****-***a   1/1     Running   0          40s
    oss-test-****-***b   1/1     Running   0          40s
  2. Write a file from one pod:

    kubectl exec oss-test-****-***a -- touch /data/test.txt
  3. Read the file from the other pod:

    kubectl exec oss-test-****-***b -- ls /data

    Expected output:

    test.txt

    The file written by oss-test-****-***a is visible from oss-test-****-***b, confirming shared storage is working.

Verify data persistence after pod restart

  1. Restart the Deployment:

    kubectl rollout restart deploy oss-test
  2. Wait for the new pods to reach Running status:

    kubectl get pod | grep oss-test

    Sample output:

    oss-test-****-***c   1/1     Running   0          67s
    oss-test-****-***d   1/1     Running   0          49s
  3. Verify the data written earlier still exists in the new pod:

    kubectl exec oss-test-****-***c -- ls /data

    Expected output:

    test.txt

    The file persists after the Deployment restarts, confirming data persistence.

What's next