
Container Service for Kubernetes:Mount an OSS Bucket by using an ossfs 1.0 dynamically provisioned volume

Last Updated:Mar 26, 2026

ossfs 1.0 supports dynamic provisioning for OSS storage. By defining a StorageClass, you can have a persistent volume (PV) created automatically whenever a PersistentVolumeClaim (PVC) is created, with no manual PV configuration required. This works well for multi-tenant clusters and workloads that need on-demand shared storage.

Before you begin

Ensure that you have:

  • A running ACK cluster with the Container Storage Interface (CSI) components (csi-plugin and csi-provisioner) installed

  • An OSS bucket in the same Alibaba Cloud account as your cluster

Version requirements:

| Authentication method | Cluster version | CSI version |
| --- | --- | --- |
| RRSA (RAM Roles for Service Accounts) | 1.26 or later | v1.30.4 or later |
| AccessKey | Any | v1.18.8.45 or later (recommended for stable mounts) |

To upgrade your cluster, see Manually upgrade a cluster. To upgrade CSI components, see Upgrade CSI components.

Limitations

Review these constraints before proceeding:

  • Concurrent write consistency: OSS uses an "overwrite upload" model. Even with CSI v1.28 or later (which improves write stability), concurrent single-file writes can still cause data to be overwritten. Enforce consistency at the application layer.

  • Data sync on deletion: Deleting or modifying files in the mounted path immediately syncs to the OSS bucket. Enable versioning on the bucket to protect against accidental loss.

  • Out of Memory (OOM) risk: Running readdir (or ls in a shell script) on more than ~100,000 files loads all object metadata at once, which can exhaust node memory and kill the ossfs process. Mount a subdirectory rather than the full bucket root when working with large object counts.

  • Slow Pod startup with `fsGroup`: Setting securityContext.fsGroup causes kubelet to recursively chmod/chown all files at mount time. For buckets with many objects, this significantly delays Pod startup. See Increased mount time for OSS volumes for mitigation options.

  • AccessKey invalidation: If an AccessKey expires or its permissions change, the application immediately loses access. Restoring access requires updating the Secret and restarting all affected Pods, causing a service interruption.

  • Multipart upload costs: ossfs splits files larger than 10 MB into parts. If an upload is interrupted (for example, due to a Pod restart), incomplete parts remain in the bucket and incur storage charges. Clean them up manually or configure lifecycle rules to delete incomplete parts automatically.

  • Reclaim policy: OSS persistent volumes support only the Retain reclaim policy. Deleting a PVC does not delete the PV or the underlying OSS bucket data.
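The lifecycle-rule cleanup mentioned for multipart uploads can be expressed as an OSS bucket lifecycle configuration. The following XML is a sketch for reference (the rule ID and the 7-day window are example values; apply the rule through the OSS console or ossutil):

```xml
<LifecycleConfiguration>
  <Rule>
    <!-- Example rule: abort multipart uploads left incomplete for more than 7 days -->
    <ID>abort-incomplete-multipart</ID>
    <Prefix></Prefix>
    <Status>Enabled</Status>
    <AbortMultipartUpload>
      <Days>7</Days>
    </AbortMultipartUpload>
  </Rule>
</LifecycleConfiguration>
```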

Choose an authentication method

| | RRSA | AccessKey |
| --- | --- | --- |
| How it works | Pods assume a RAM role via OIDC federation; credentials rotate automatically | AccessKey ID and secret stored in a Kubernetes Secret |
| Security | Higher: no long-lived secrets; fine-grained per-workload isolation | Lower: static credentials; rotation requires a Pod restart |
| Cluster requirement | v1.26+, CSI v1.30.4+ | Any version |
| Best for | Production workloads, clusters v1.26+, cross-account mounts | Quick setup, development environments |

Use RRSA for production clusters on version 1.26 or later. Use AccessKey for simpler setups or when RRSA version requirements are not met.

Step 1: Configure authentication credentials

Option A: RRSA (recommended)

1. Enable RRSA in your cluster

  1. On the ACK Clusters page, find the cluster and click its name. In the left-side pane, click Cluster Information.

  2. On the Basic Information tab, find the Security and Auditing section. Click Enable next to RRSA OIDC. Follow the on-screen prompts to complete enablement during off-peak hours. Wait for the cluster status to change from Updating to Running.

    Important

    After enabling RRSA, the maximum validity period for new ServiceAccount tokens in the cluster is 12 hours.

If you previously used RRSA with a CSI version earlier than v1.30.4, add the RAM role authorization configurations described in [Product Change] CSI ossfs version upgrade and mount process optimization.

2. Create a RAM role

Create a RAM role that Pods can assume to access the OSS bucket.

  1. Go to the Create Role page. Select Identity Provider as the Principal Type, then click Switch to Policy Editor.

  2. Select Identity Provider as the Principal and click Edit. Configure the settings below. Leave other parameters at their default values. For details, see Create a RAM role for an OIDC IdP. To use a different ServiceAccount or ARN, see How do I use specified ARNs or ServiceAccounts with RRSA authentication?

    | Parameter | Value |
    | --- | --- |
    | Identity Provider Type | OIDC |
    | Identity Provider | ack-rrsa-<cluster_id> (replace <cluster_id> with your cluster ID) |
    | Condition: Key | oidc:sub |
    | Condition: Operator | StringEquals |
    | Condition: Value | system:serviceaccount:ack-csi-fuse:csi-fuse-ossfs (ack-csi-fuse is the fixed namespace for the ossfs client; csi-fuse-ossfs is the ServiceAccount name) |
    | Role Name | demo-role-for-rrsa (example name) |
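With these settings, the console generates a trust policy for the role along the following lines (shown as a sketch for reference; <account-id> and <cluster_id> are placeholders for your Alibaba Cloud account ID and cluster ID):

```json
{
    "Statement": [
        {
            "Action": "sts:AssumeRole",
            "Condition": {
                "StringEquals": {
                    "oidc:sub": "system:serviceaccount:ack-csi-fuse:csi-fuse-ossfs"
                }
            },
            "Effect": "Allow",
            "Principal": {
                "Federated": [
                    "acs:ram::<account-id>:oidc-provider/ack-rrsa-<cluster_id>"
                ]
            }
        }
    ],
    "Version": "1"
}
```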

3. Create an access policy

Following the least privilege principle, create a custom policy that grants the minimum required access to the target OSS bucket.

  1. Go to the Create Policy page. Switch to the JSON tab and enter one of the policy scripts below. Replace <myBucketName> with your actual bucket name.

    OSS read-only policy

    {
        "Statement": [
            {
                "Action": [
                    "oss:Get*",
                    "oss:List*"
                ],
                "Effect": "Allow",
                "Resource": [
                    "acs:oss:*:*:<myBucketName>",
                    "acs:oss:*:*:<myBucketName>/*"
                ]
            }
        ],
        "Version": "1"
    }

    OSS read/write policy

    {
        "Statement": [
            {
                "Action": "oss:*",
                "Effect": "Allow",
                "Resource": [
                    "acs:oss:*:*:<myBucketName>",
                    "acs:oss:*:*:<myBucketName>/*"
                ]
            }
        ],
        "Version": "1"
    }
  2. (Optional) If OSS objects are encrypted with a customer master key (CMK) managed by Key Management Service (KMS), also grant KMS permissions to the role. See Use a specified CMK ID managed by KMS for encryption.

To reuse an existing RAM role that already has OSS permissions, modify its trust policy instead. See Pod permission isolation based on RRSA.

4. Attach the policy to the RAM role

  1. Go to the Roles page. In the Actions column for the target role, click Grant Permissions.

  2. In the Policy section, search for and select the policy you created, then confirm.

Option B: AccessKey

1. Create a RAM user and AccessKey

  1. Go to the Create User page. Follow the prompts to create a RAM user. Skip this step if you already have one.

  2. Go to the Create Policy page. Switch to the JSON tab and enter one of the policy scripts below. Replace <myBucketName> with your actual bucket name.

    OSS read-only policy

    {
        "Statement": [
            {
                "Action": [
                    "oss:Get*",
                    "oss:List*"
                ],
                "Effect": "Allow",
                "Resource": [
                    "acs:oss:*:*:<myBucketName>",
                    "acs:oss:*:*:<myBucketName>/*"
                ]
            }
        ],
        "Version": "1"
    }

    OSS read/write policy

    {
        "Statement": [
            {
                "Action": "oss:*",
                "Effect": "Allow",
                "Resource": [
                    "acs:oss:*:*:<myBucketName>",
                    "acs:oss:*:*:<myBucketName>/*"
                ]
            }
        ],
        "Version": "1"
    }

    If you plan to create a PV from the ACK console, also add oss:ListBuckets on all resources:

    {
      "Effect": "Allow",
      "Action": "oss:ListBuckets",
      "Resource": "*"
    }
  3. (Optional) If OSS objects are encrypted with a CMK managed by KMS, also grant KMS permissions. See Use a specified CMK ID managed by KMS for encryption.

  4. Go to the Users page. In the Actions column for the target user, click Add Permissions. Search for and select the policy you created.

  5. On the Users page, click the target user. In the AccessKey section, click Create AccessKey. Follow the prompts and save the AccessKey ID and AccessKey secret securely.
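As a reference for sub-step 2, a read-only policy that also allows creating PVs from the ACK console simply merges the oss:ListBuckets statement into the same Statement array, for example:

```json
{
    "Statement": [
        {
            "Action": [
                "oss:Get*",
                "oss:List*"
            ],
            "Effect": "Allow",
            "Resource": [
                "acs:oss:*:*:<myBucketName>",
                "acs:oss:*:*:<myBucketName>/*"
            ]
        },
        {
            "Action": "oss:ListBuckets",
            "Effect": "Allow",
            "Resource": "*"
        }
    ],
    "Version": "1"
}
```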

Step 2: Create a StorageClass

A StorageClass defines the template used to automatically provision PVs.

RRSA method

  1. Create a file named sc-oss.yaml:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: sc-oss
    parameters:
      bucket: <your-bucket-name>        # Replace with your actual bucket name
      path: /                           # Mount path relative to the bucket root; defaults to /
      url: "http://oss-cn-hangzhou-internal.aliyuncs.com"  # OSS endpoint
      authType: rrsa                    # Use RRSA authentication
      roleName: demo-role-for-rrsa      # RAM role created in Step 1
      otherOpts: >-
        -o umask=022
        -o max_stat_cache_size=100000
        -o allow_other
      volumeAs: sharepath               # Volume access mode
    provisioner: ossplugin.csi.alibabacloud.com
    reclaimPolicy: Retain
    volumeBindingMode: Immediate
  2. Apply the StorageClass:

    kubectl apply -f sc-oss.yaml
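If each PVC should receive its own subdirectory instead of sharing the bucket path, the same StorageClass can be written with volumeAs: subpath. The following is a sketch (the name sc-oss-subpath is an example; per the StorageClass parameter reference, subpath requires CSI v1.31.3 or later):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: sc-oss-subpath
parameters:
  bucket: <your-bucket-name>        # Replace with your actual bucket name
  path: /
  url: "http://oss-cn-hangzhou-internal.aliyuncs.com"
  authType: rrsa
  roleName: demo-role-for-rrsa
  otherOpts: >-
    -o umask=022
    -o allow_other
  volumeAs: subpath                 # Each PV gets its own subdirectory: <bucket>:<path>/<pv-name>/
provisioner: ossplugin.csi.alibabacloud.com
reclaimPolicy: Retain
volumeBindingMode: Immediate
```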

AccessKey method

kubectl

  1. Create a Secret in the same namespace as your application. Replace the placeholder values with the AccessKey ID and secret you obtained:

    kubectl create secret generic oss-secret \
      --from-literal='akId=<your-AccessKey-ID>' \
      --from-literal='akSecret=<your-AccessKey-secret>'
  2. Create a file named sc-oss.yaml:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: sc-oss
    parameters:
      bucket: <your-bucket-name>        # Replace with your actual bucket name
      path: /                           # Mount path relative to the bucket root; defaults to /
      url: "http://oss-cn-hangzhou-internal.aliyuncs.com"  # OSS endpoint
      csi.storage.k8s.io/node-publish-secret-name: oss-secret        # Secret name
      csi.storage.k8s.io/node-publish-secret-namespace: default      # Secret namespace
      otherOpts: >-
        -o umask=022
        -o max_stat_cache_size=100000
        -o allow_other
    provisioner: ossplugin.csi.alibabacloud.com
    reclaimPolicy: Retain
    volumeBindingMode: Immediate
  3. Apply the StorageClass:

    kubectl apply -f sc-oss.yaml

Console

  1. Store the AccessKey as a Secret. On the Clusters page, click the cluster name. In the left navigation pane, choose Configurations > Secrets. Click Create from YAML and enter:

    apiVersion: v1
    kind: Secret
    metadata:
      name: oss-secret
      namespace: default   # Must match the application namespace
    stringData:
      akId: <your-AccessKey-ID>
      akSecret: <your-AccessKey-secret>
  2. In the left navigation pane, choose Volumes > StorageClasses. Click Create, set PV Type to OSS, and configure the parameters:

    | Parameter | Description |
    | --- | --- |
    | Access Certificate | The Secret that contains your AccessKey ID and secret |
    | Bucket ID | The OSS bucket to mount. Only buckets accessible with the configured AccessKey are listed. |
    | OSS Path | Mount path relative to the bucket root. Defaults to / (entire bucket). Requires CSI v1.14.8.32-c77e277b-aliyun or later. |
    | Volume Mode | Shared Directory (sharepath) or Subdirectory (subpath). See the StorageClass parameter reference below. |
    | Endpoint | The OSS endpoint. Use an internal endpoint for same-region or VPC access; use a public endpoint for cross-region access. Internal access defaults to HTTP. To use HTTPS, configure via kubectl. |
    | Reclaim Policy | Fixed at Retain. Deleting the PVC does not delete the PV or bucket data. |
    | Optional Parameters | Custom ossfs mount options in -o * -o * format. |
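The internal and public endpoint addresses follow a fixed pattern derived from the region ID. The sketch below illustrates the pattern (Python, for illustration only; the helper name is ours, not part of any SDK):

```python
def oss_endpoint(region: str, internal: bool = True) -> str:
    """Build an OSS endpoint URL from a region ID.

    Internal endpoints (oss-<region>-internal) serve same-region/VPC traffic;
    public endpoints (oss-<region>) serve cross-region access.
    """
    suffix = "-internal" if internal else ""
    return f"http://oss-{region}{suffix}.aliyuncs.com"

print(oss_endpoint("cn-hangzhou"))                   # http://oss-cn-hangzhou-internal.aliyuncs.com
print(oss_endpoint("cn-hangzhou", internal=False))   # http://oss-cn-hangzhou.aliyuncs.com
```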

StorageClass parameter reference

| Parameter | Required | Default | Description |
| --- | --- | --- | --- |
| bucket | Yes | | The OSS bucket to mount. |
| path | No | / | Mount path relative to the bucket root. If the ossfs version is earlier than 1.91, the path must already exist in the bucket. Requires CSI v1.14.8.32-c77e277b-aliyun or later. |
| url | Yes | | The OSS endpoint. Use an internal endpoint (http://oss-<region>-internal.aliyuncs.com) when the cluster and bucket are in the same region or connected via VPC. Use a public endpoint (http://oss-<region>.aliyuncs.com) for cross-region access. The deprecated vpc100-oss-<region>.aliyuncs.com format is no longer supported; switch to the new format. |
| authType | Yes (RRSA) | | Set to rrsa to use RRSA authentication. Omit for AccessKey authentication. |
| roleName | Yes (RRSA) | | The RAM role to assume. To apply different permissions per StorageClass, create separate RAM roles and set different roleName values. |
| csi.storage.k8s.io/node-publish-secret-name | Yes (AccessKey) | | The name of the Secret containing the AccessKey. |
| csi.storage.k8s.io/node-publish-secret-namespace | Yes (AccessKey) | | The namespace of the Secret. Must match the application namespace. |
| otherOpts | No | | Custom ossfs mount options in -o * -o * format. See the common options below the table. |
| volumeAs | No | sharepath | Volume access mode. sharepath: all PVs share the mount path; data is stored at <bucket>:<path>/. subpath: a subdirectory is created per PV; data is stored at <bucket>:<path>/<pv-name>/. subpath requires CSI v1.31.3 or later. |
| sigVersion | No | "v1" | OSS request signature version. "v4" (Signature Version 4) is recommended over the default "v1" (Signature Version 1). |
| provisioner | Yes | | Fixed at ossplugin.csi.alibabacloud.com for the Alibaba Cloud OSS CSI plug-in. |
| reclaimPolicy | No | Retain | Fixed at Retain for OSS persistent volumes. |
| volumeBindingMode | No | Immediate | OSS persistent volumes do not require zone-based node affinity. Use the default Immediate. |

Common otherOpts options:

  • umask=022: sets ossfs file permissions to 755. This resolves permission mismatches for objects uploaded via the SDK or OSS console, which default to 640, and is recommended for read/write splitting or multi-user environments.

  • max_stat_cache_size=100000: limits the number of metadata cache entries held in memory, improving ls and stat performance. The cache does not detect changes made outside ossfs; set the value to 0 to disable caching, or use stat_cache_expire to reduce the cache TTL.

  • allow_other: allows users other than the mounting user to access files.

For all options, see Options supported by ossfs and ossfs 1.0 configuration best practices.
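How umask=022 maps to the permissions mentioned for otherOpts can be checked with standard umask arithmetic. The snippet below is a Python sketch of that bitwise rule for illustration; exactly how ossfs derives the final mode from object metadata may vary by version:

```python
def apply_umask(base_mode: int, umask: int = 0o022) -> int:
    """Clear the umask bits from a base permission mode."""
    return base_mode & ~umask

# Under the usual convention, directories start from mode 777 and regular files from 666.
print(oct(apply_umask(0o777)))   # 0o755 -> rwxr-xr-x
print(oct(apply_umask(0o666)))   # 0o644 -> rw-r--r--
```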

Step 3: Create a PVC

kubectl

  1. Create a file named pvc-oss.yaml:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc-oss
    spec:
      accessModes:
      - ReadOnlyMany       # ReadOnlyMany mounts the bucket read-only; use ReadWriteMany for read/write
      volumeMode: Filesystem
      resources:
        requests:
          storage: 20Gi    # Declared capacity — does not limit the actual OSS bucket size
      storageClassName: sc-oss

    Supported access modes: ReadOnlyMany and ReadWriteMany.

  2. Apply the PVC:

    kubectl apply -f pvc-oss.yaml
  3. Verify the PVC is bound:

    kubectl get pvc pvc-oss

    Expected output:

    NAME      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
    pvc-oss   Bound    oss-251d111d-3b0b-4879-81a0-eb5a19xxxxxx   20Gi       ROX            sc-oss         <unset>                 4d20h

    The Bound status confirms the PV was automatically created and attached.

Console

  1. In the left navigation pane, choose Volumes > Persistent Volume Claims. Click Create and set PVC Type to OSS.

  2. Configure the PVC parameters:

    | Parameter | Description |
    | --- | --- |
    | Allocation Mode | Select Use StorageClass |
    | Existing StorageClass | Click Select and choose the StorageClass created in Step 2 |
    | Capacity | Declared storage capacity. Does not limit the actual OSS bucket size. |
    | Access Mode | ReadOnlyMany or ReadWriteMany. ReadOnlyMany mounts the bucket in read-only mode. |

Step 4: Deploy an application and mount the volume

kubectl

  1. Create a file named oss-static.yaml:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: oss-static
      labels:
        app: nginx
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: anolis-registry.cn-zhangjiakou.cr.aliyuncs.com/openanolis/nginx:1.14.1-8.6
            ports:
            - containerPort: 80
            volumeMounts:
            - name: pvc-oss
              mountPath: "/data"    # Mount path inside the container
            livenessProbe:
              exec:
                command:
                - ls
                - /data
              initialDelaySeconds: 30
              periodSeconds: 30
          volumes:
          - name: pvc-oss
            persistentVolumeClaim:
              claimName: pvc-oss   # References the PVC created in Step 3
  2. Create the Deployment:

    kubectl create -f oss-static.yaml
  3. Verify the Pods are running:

    kubectl get pod -l app=nginx
  4. Confirm the mount point inside a Pod:

    kubectl exec -it <pod-name> -- ls /data

    The output lists objects from the OSS mount path.

Console

  1. In the left-side pane, choose Workloads > Deployments. Click Create from Image.

  2. Configure the key parameters below. For other parameters, see Create a stateless workload (Deployment).

    | Step | Parameter | Description |
    | --- | --- | --- |
    | Basic Information | Replicas | Number of replicas for the Deployment |
    | Container | Image Name | Container image address, such as anolis-registry.cn-zhangjiakou.cr.aliyuncs.com/openanolis/nginx:1.14.1-8.6 |
    | Container | Required Resources | Required vCPU and memory |
    | Volume | Add PVC | Mount Source: select the PVC created in Step 3. Container Path: the path inside the container, such as /data. |
    | Advanced | Pod Labels | For example, key app, value nginx |
  3. On the Deployments page, click the application name. On the Pods tab, confirm that all Pods show Status: Running.

Step 5: Verify shared and persistent storage

Verify shared storage

All Pods referencing the same PVC share the same OSS bucket path. Verify this by writing a file in one Pod and reading it from another.

  1. Get the Pod names:

    kubectl get pod -l app=nginx
  2. Create a test file in one Pod:

    • For ReadWriteMany, create the file from inside a Pod:

      kubectl exec oss-static-66fbb85b67-d**** -- touch /data/tmpfile

    • For ReadOnlyMany: Upload tmpfile to the corresponding path in the OSS bucket using the OSS console or ossutil cp.

  3. Read the file from another Pod:

    kubectl exec oss-static-66fbb85b67-l**** -- ls /data | grep tmpfile

    Expected output:

    tmpfile

    If tmpfile does not appear, verify that your CSI component version is v1.20.7 or later.

Verify persistent storage

Delete a Pod and confirm its data survives in the replacement Pod.

  1. Delete one Pod to trigger a restart:

    kubectl delete pod oss-static-66fbb85b67-d****
  2. Wait for the replacement Pod to reach Running:

    kubectl get pod -l app=nginx
  3. Check for the file in the new Pod:

    kubectl exec oss-static-66fbb85b67-z**** -- ls /data | grep tmpfile

    Expected output:

    tmpfile

    The file persists across Pod restarts because data is stored in OSS, not on the node.

What's next