Container Service for Kubernetes: Mount a static OSS persistent volume by using ossfs 2.0 in an ACK cluster

Last Updated: Mar 26, 2026

Mount an Object Storage Service (OSS) bucket as a statically provisioned Persistent Volume (PV) to give your Pods persistent, shared storage with a POSIX file system interface. ossfs 2.0 is optimized for sequential reads and high-bandwidth workloads, making it a good fit for AI training, big data analytics, and static content serving.

For performance benchmarks, see ossfs 2.0 client performance benchmarks.

How it works

Mounting an OSS bucket as a statically provisioned volume in a Container Service for Kubernetes (ACK) cluster involves four steps:

  1. Choose an authentication method. Use RAM Roles for Service Accounts (RRSA) for production—it provides temporary, auto-rotating credentials and Pod-level permission isolation. Use an AccessKey for testing only, as it relies on a long-term static key.

  2. Create a PV. Define a PV that registers your existing OSS bucket with the cluster, specifying the bucket name, endpoint, subdirectory, and authentication details.

  3. Create a PVC. Create a Persistent Volume Claim (PVC) that binds to the PV you defined.

  4. Deploy an application. Reference the PVC in your workload manifest to mount the OSS bucket into the container.

Usage notes

  • Supported workloads: ossfs 2.0 supports read-only and sequential-append write workloads. For random or concurrent writes, data consistency cannot be guaranteed—use ossfs 1.0 instead.

  • Data safety: Changes made to files at the mount point—whether from inside the Pod or on the host node—are immediately synced to the OSS bucket. Enable versioning on the bucket to protect against accidental deletion.

  • Health checks: Configure a liveness probe on Pods that use OSS volumes to verify that the mount point is accessible. If the probe fails, the kubelet restarts the container, which triggers a remount.

  • Multipart uploads: ossfs automatically uses multipart upload for files larger than 10 MB. If an upload is interrupted, incomplete parts remain in the bucket. Delete these parts manually or configure a lifecycle rule to clean them up automatically.
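The health-check recommendation above can be sketched as an exec liveness probe that lists the mount path. This is an illustrative fragment, not a mandated configuration: the mount path /data and the timing values are assumptions to adjust for your workload.

```yaml
# Illustrative liveness probe for a container that mounts an OSS volume at /data.
# If listing the mount point fails repeatedly, the kubelet restarts the
# container, which remounts the volume.
livenessProbe:
  exec:
    command:
    - ls
    - /data
  initialDelaySeconds: 10    # allow time for the mount to complete before the first check
  periodSeconds: 30          # probe interval
  failureThreshold: 3        # consecutive failures before a restart
```

Add this fragment under the container spec of any workload that mounts the OSS volume, such as the StatefulSet example later in this topic.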

Method 1: Authenticate using RRSA (recommended)

RRSA authenticates Pods using temporary, auto-rotating credentials through OpenID Connect (OIDC) and Security Token Service (STS), and supports PV-level permission isolation. For background, see Use RRSA to authorize different pods to access different cloud services.

Prerequisites

Before you begin, make sure you have:

Step 1: Create a RAM role

Skip this step if you have already mounted an OSS volume in the cluster using RRSA.

  1. Enable the RRSA feature in the ACK console.

  2. Create a RAM role for an OIDC identity provider. The following table lists the key parameters for the sample role demo-role-for-rrsa.

    • Identity provider type: Select OIDC.
    • Identity provider: Select the provider associated with your cluster, for example, ack-rrsa-<cluster_id>.
    • oidc:iss: Keep the default value.
    • oidc:aud: Keep the default value.
    • oidc:sub: Add a condition: Key = oidc:sub, Operator = StringEquals, Value = system:serviceaccount:ack-csi-fuse:csi-fuse-ossfs. ack-csi-fuse is the namespace where the ossfs client runs and cannot be changed. csi-fuse-ossfs is the ServiceAccount name and can be customized.
    • Role name: demo-role-for-rrsa

    To change the ServiceAccount name, see FAQ about ossfs 2.0 volumes.
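For reference, a trust policy that matches the oidc:sub condition above typically looks like the following sketch. <account_id> and <cluster_id> are placeholders for your own values, and the ServiceAccount subject assumes the default csi-fuse-ossfs name.

```json
{
    "Statement": [
        {
            "Action": "sts:AssumeRole",
            "Condition": {
                "StringEquals": {
                    "oidc:aud": "sts.aliyuncs.com",
                    "oidc:sub": "system:serviceaccount:ack-csi-fuse:csi-fuse-ossfs"
                }
            },
            "Effect": "Allow",
            "Principal": {
                "Federated": [
                    "acs:ram::<account_id>:oidc-provider/ack-rrsa-<cluster_id>"
                ]
            }
        }
    ],
    "Version": "1"
}
```

The RAM console generates an equivalent policy when you create the role through the table above; edit it directly only if you need a custom ServiceAccount name.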

Step 2: Grant permissions to the RAM role

  1. Create a custom policy to grant OSS access. For details, see Create custom policies. Replace mybucket with your actual bucket name.

    • Read-only policy

      {
          "Statement": [
              {
                  "Action": [
                      "oss:Get*",
                      "oss:List*"
                  ],
                  "Effect": "Allow",
                  "Resource": [
                      "acs:oss:*:*:mybucket",
                      "acs:oss:*:*:mybucket/*"
                  ]
              }
          ],
          "Version": "1"
      }
    • Read-write policy

      {
          "Statement": [
              {
                  "Action": "oss:*",
                  "Effect": "Allow",
                  "Resource": [
                      "acs:oss:*:*:mybucket",
                      "acs:oss:*:*:mybucket/*"
                  ]
              }
          ],
          "Version": "1"
      }
  2. (Optional) If objects in the bucket are encrypted with a customer master key (CMK) in Key Management Service (KMS), grant KMS access permissions. See Encryption for details.

  3. Attach the policy to the demo-role-for-rrsa role. See Grant permissions to a RAM role.

    To use an existing RAM role that already has OSS access, modify its trust policy instead. See Use an existing RAM role.

Step 3: Create a PV

  1. Create ossfs2-pv.yaml with the following content.

    The following PV mounts the OSS bucket cnfs-oss-test as a 20 GiB read-only file system.
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv-ossfs2                          # PV name
    spec:
      capacity:
        storage: 20Gi                          # Used for PVC matching only; does not cap OSS capacity
      accessModes:
        - ReadOnlyMany
      persistentVolumeReclaimPolicy: Retain
      csi:
        driver: ossplugin.csi.alibabacloud.com
        volumeHandle: pv-ossfs2                # Must match metadata.name exactly
        volumeAttributes:
          fuseType: ossfs2                     # Required: specifies the ossfs 2.0 client
          bucket: cnfs-oss-test                # OSS bucket name
          path: /subpath                       # Subdirectory to mount; leave blank to mount the root
          url: oss-cn-hangzhou-internal.aliyuncs.com  # Internal endpoint (same region); use public endpoint for cross-region
          otherOpts: "-o close_to_open=false"  # false (default): cache metadata for better small-file read performance
                                               # true: fetch fresh metadata on every file open (higher latency, useful when another system frequently updates objects)
          authType: "rrsa"                     # Authentication method
          roleName: "demo-role-for-rrsa"       # RAM role created in Step 1

    Key parameters in volumeAttributes:

    • fuseType (required): Must be ossfs2 to use the ossfs 2.0 client.
    • bucket (required): Name of the OSS bucket to mount.
    • path (optional): Subdirectory within the bucket. Defaults to the root if left blank.
    • url (required): OSS endpoint. Use an internal endpoint when the cluster and bucket are in the same region (or connected via Virtual Private Cloud (VPC)); use a public endpoint for cross-region access. Internal format: http(s)://oss-{region}-internal.aliyuncs.com. Public format: http(s)://oss-{region}.aliyuncs.com. The vpc100-oss-{region}.aliyuncs.com internal endpoint format is deprecated—switch to the new format.
    • otherOpts (optional): Additional mount options in the format -o <option> -o <option>. For all available options, see ossfs 2.0 mount options.
    • authType (required): Set to rrsa.
    • roleName (required): Name of the RAM role. To assign different permissions to different PVs, create a separate RAM role for each and reference it here.
    To use specified ARNs or a custom ServiceAccount with RRSA, see How do I use specified ARNs or a ServiceAccount with the RRSA authentication method?
  2. Apply the manifest.

    kubectl create -f ossfs2-pv.yaml
  3. Verify the PV is available.

    kubectl get pv pv-ossfs2

    Expected output:

    NAME        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
    pv-ossfs2   20Gi       ROX            Retain           Available                          <unset>                          15s

Step 4: Create a PVC

  1. Create ossfs2-pvc-static.yaml with the following content.

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: pvc-ossfs2       # PVC name
      namespace: default
    spec:
      accessModes:
        - ReadOnlyMany        # Must match the PV
      resources:
        requests:
          storage: 20Gi       # Must match the PV
      volumeName: pv-ossfs2   # Bind to this specific PV
  2. Create the PVC.

    kubectl create -f ossfs2-pvc-static.yaml
  3. Verify the PVC is bound.

    kubectl get pvc pvc-ossfs2

    Expected output:

    NAME         STATUS   VOLUME      CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
    pvc-ossfs2   Bound    pv-ossfs2   20Gi       ROX                           <unset>                 6s

Step 5: Deploy an application

  1. Create ossfs2-test.yaml to define a StatefulSet that mounts the PVC at /data.

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: ossfs2-test
      namespace: default
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: ossfs2-test
      template:
        metadata:
          labels:
            app: ossfs2-test
        spec:
          containers:
          - name: nginx
            image: anolis-registry.cn-zhangjiakou.cr.aliyuncs.com/openanolis/nginx:1.14.1-8.6
            ports:
            - containerPort: 80
            volumeMounts:
            - name: pvc-ossfs2
              mountPath: /data
          volumes:
            - name: pvc-ossfs2
              persistentVolumeClaim:
                claimName: pvc-ossfs2
  2. Deploy the application.

    kubectl create -f ossfs2-test.yaml
  3. Wait for the Pod to be running.

    kubectl get pod -l app=ossfs2-test

    Expected output:

    NAME            READY   STATUS    RESTARTS   AGE
    ossfs2-test-0   1/1     Running   0          12m
  4. Verify the OSS bucket is mounted.

    kubectl exec -it ossfs2-test-0 -- ls /data

    The output should list the objects in the mounted OSS subdirectory.

Method 2: Authenticate using an AccessKey

Store an AccessKey pair in a Kubernetes Secret and reference it from the PV. This approach is simple to configure but uses a long-term static key.

Important

If the AccessKey is revoked or its permissions change, all Pods using the volume immediately lose access. To restore access, update the Secret with new credentials and restart the affected Pods—this causes a brief service interruption. For production environments, use Method 1: Authenticate using RRSA to avoid this operational overhead.
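The rotation procedure described above can be sketched with kubectl; the Secret name, namespace, and StatefulSet name follow the examples in this topic and may differ in your cluster, and the commands require cluster access.

```shell
# Regenerate the Secret in place with the new AccessKey pair.
kubectl create secret generic oss-secret -n default \
  --from-literal='akId=<new-access-key-id>' \
  --from-literal='akSecret=<new-access-key-secret>' \
  --dry-run=client -o yaml | kubectl apply -f -

# Restart the Pods that mount the volume so they pick up the new credentials.
# This causes the brief service interruption noted above.
kubectl rollout restart statefulset ossfs2-test -n default
```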

Prerequisites

Before you begin, make sure you have:

Step 1: Create a RAM user and store the AccessKey

  1. Create a RAM user. If you already have one, skip this step. See Create a RAM user.

  2. Create a custom policy to grant OSS access. See Create custom policies. Replace mybucket with your actual bucket name.

    • Read-only policy

      {
          "Statement": [
              {
                  "Action": [
                      "oss:Get*",
                      "oss:List*"
                  ],
                  "Effect": "Allow",
                  "Resource": [
                      "acs:oss:*:*:mybucket",
                      "acs:oss:*:*:mybucket/*"
                  ]
              }
          ],
          "Version": "1"
      }
    • Read-write policy

      {
          "Statement": [
              {
                  "Action": "oss:*",
                  "Effect": "Allow",
                  "Resource": [
                      "acs:oss:*:*:mybucket",
                      "acs:oss:*:*:mybucket/*"
                  ]
              }
          ],
          "Version": "1"
      }
  3. (Optional) If objects in the bucket are encrypted with a CMK in KMS, grant KMS access. See Encryption.

  4. Attach the policy to the RAM user. See Grant permissions to a RAM user.

  5. Create an AccessKey pair for the RAM user. See Create an AccessKey pair.

  6. Store the AccessKey pair as a Kubernetes Secret. Replace xxxxxx with the actual values.

    kubectl create -n default secret generic oss-secret \
      --from-literal='akId=xxxxxx' \
      --from-literal='akSecret=xxxxxx'

Step 2: Create a PV

  1. Create ossfs2-pv-ak.yaml with the following content.

    The following PV mounts the OSS bucket cnfs-oss-test as a 20 GiB read-only file system.
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv-ossfs2                          # PV name
    spec:
      capacity:
        storage: 20Gi                          # Used for PVC matching only
      accessModes:
        - ReadOnlyMany
      persistentVolumeReclaimPolicy: Retain
      csi:
        driver: ossplugin.csi.alibabacloud.com
        volumeHandle: pv-ossfs2                # Must match metadata.name exactly
        nodePublishSecretRef:
          name: oss-secret                     # Secret created in Step 1
          namespace: default
        volumeAttributes:
          fuseType: ossfs2                     # Required: specifies the ossfs 2.0 client
          bucket: cnfs-oss-test                # OSS bucket name
          path: /subpath                       # Subdirectory to mount; leave blank to mount the root
          url: oss-cn-hangzhou-internal.aliyuncs.com  # Internal endpoint (same region); use public endpoint for cross-region
          otherOpts: "-o close_to_open=false"  # false (default): cache metadata for better small-file read performance
                                               # true: fetch fresh metadata on every file open (higher latency)

    Parameters in nodePublishSecretRef:

    • name (required): Name of the Secret that stores the AccessKey pair.
    • namespace (required): Namespace where the Secret is located.

    Parameters in volumeAttributes: same as Method 1, Step 3, except authType and roleName are not used.

  2. Apply the manifest.

    kubectl create -f ossfs2-pv-ak.yaml
  3. Verify the PV is available.

    kubectl get pv pv-ossfs2

    Expected output:

    NAME        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
    pv-ossfs2   20Gi       ROX            Retain           Available                          <unset>                          15s

Step 3: Create a PVC

  1. Create ossfs2-pvc-static.yaml.

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: pvc-ossfs2
      namespace: default
    spec:
      accessModes:
        - ReadOnlyMany
      resources:
        requests:
          storage: 20Gi
      volumeName: pv-ossfs2
  2. Create the PVC.

    kubectl create -f ossfs2-pvc-static.yaml
  3. Verify the PVC is bound.

    kubectl get pvc pvc-ossfs2

    Expected output:

    NAME         STATUS   VOLUME      CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
    pvc-ossfs2   Bound    pv-ossfs2   20Gi       ROX                           <unset>                 6s

Step 4: Deploy an application

Follow the same steps as Method 1, Step 5.

Apply in production

  • Security: Use RRSA for all production workloads. It provides temporary, auto-rotating credentials via OIDC and STS, and enables fine-grained, Pod-level permission isolation.
  • Least privilege: Grant only the permissions the application needs—read-only or read-write—scoped to the specific bucket.
  • Endpoint: Use an internal endpoint when the cluster and bucket are in the same region to avoid public data transfer costs and reduce latency.
  • Mount options: Use -o close_to_open=false (default) to cache metadata and reduce latency for small-file reads. Switch to -o close_to_open=true only when Pods need to see updates from another writer immediately.
  • Workload fit: ossfs 2.0 is well-suited for AI training, inference, big data processing, and autonomous driving workloads. It is not suitable for workloads that require random writes, such as databases or collaborative editing tools.
  • Incomplete uploads: Configure a lifecycle rule on the bucket to automatically delete incomplete multipart upload parts to avoid unnecessary storage costs.
  • Health checks: Configure a liveness probe on each Pod to check mount point availability. If the probe fails, the kubelet restarts the container and triggers a remount.
  • Monitoring: Use container storage monitoring to track volume performance and set up alerts to catch issues early.
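The incomplete-upload cleanup recommended above maps to an OSS bucket lifecycle configuration. A minimal sketch, assuming a 7-day retention for abandoned parts (adjust the rule ID, prefix, and days to your needs):

```xml
<!-- Lifecycle rule that aborts multipart uploads left incomplete for 7 days. -->
<LifecycleConfiguration>
  <Rule>
    <ID>cleanup-incomplete-multipart</ID>
    <Prefix></Prefix>            <!-- empty prefix: applies to the whole bucket -->
    <Status>Enabled</Status>
    <AbortMultipartUpload>
      <Days>7</Days>             <!-- delete parts 7 days after the upload was initiated -->
    </AbortMultipartUpload>
  </Rule>
</LifecycleConfiguration>
```

You can apply an equivalent rule from the OSS console without writing XML; the fragment above shows what the resulting configuration expresses.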

FAQ

References