Container Service for Kubernetes: Mount a static OSS volume by using ossfs 2.0 in ACK

Last Updated: Oct 29, 2025

For applications that require persistent storage or data sharing between pods, you can mount an Object Storage Service (OSS) bucket as an ossfs 2.0 volume using a statically provisioned Persistent Volume (PV) and a Persistent Volume Claim (PVC). This approach allows application containers to read and write data in OSS using standard POSIX interfaces, just like a local filesystem. It is ideal for scenarios such as big data analytics, AI training, and serving static content.

Compared to ossfs 1.0, ossfs 2.0 excels in sequential read and write performance, making it ideal for leveraging the high bandwidth of OSS.

For details on performance, see ossfs 2.0 client performance benchmarks.

Workflow overview

The process for mounting a statically provisioned ossfs 2.0 volume in a Container Service for Kubernetes (ACK) cluster is as follows:

  1. Choose an authentication method: Decide whether to use RAM Roles for Service Accounts (RRSA) or a static AccessKey and prepare the necessary credentials.

    Authentication method comparison

    • RRSA: Provides higher security by using auto-rotating, temporary credentials and supports pod-level permission isolation. This method is suitable for production and multi-tenant environments with high security requirements.

      If you use RRSA, first create and authorize a Resource Access Management (RAM) role specifically for OSS access.

    • AccessKey: This method is simple to configure but uses a long-term static key, which poses a security risk if exposed. This method is recommended for testing or development environments only.

      If you use AccessKey, first create a RAM user, obtain its AccessKey pair, and store the key pair as a Kubernetes Secret.

  2. Create a PV: Manually define a PV to register your existing OSS bucket with the cluster. This PV specifies the bucket's location (root directory or a specific subdirectory), storage capacity, and access modes.

  3. Create a PVC: Create a PVC that requests storage resources matching the PV you defined. Kubernetes will bind the PVC to the available PV.

  4. Mount the volume in your application: Mount the PVC as a volume in your container's specified directory.

Considerations

  • Workload suitability: ossfs 2.0 is designed for read-only and sequential-append write scenarios. For random or concurrent write scenarios, data consistency cannot be guaranteed. We recommend using ossfs 1.0 for these cases.

  • Data safety: Any modification or deletion of files in an ossfs mount point (either from within the pod or on the host node) is immediately synchronized with the source OSS bucket. To prevent accidental data loss, we recommend enabling versioning for the bucket.

  • Application health checks: Configure a health check (liveness probe) for pods that use OSS volumes, for example, one that verifies the mount point is still accessible. If the mount becomes unhealthy, the pod is automatically restarted to restore connectivity (a sample probe follows this list).

  • Multipart upload management: When uploading large files (> 10 MB), ossfs automatically uses multipart upload. If an upload is interrupted, incomplete parts remain in the bucket. Manually delete these parts, or configure a lifecycle rule to clean them up automatically and save storage costs (a sample rule is shown in Apply in production).
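
For the health check mentioned above, the following is a minimal sketch of a liveness probe, assuming the OSS volume is mounted at /data as in the application examples later in this topic. Add it to the container definition at the same level as volumeMounts; the command and thresholds are illustrative and should be tuned for your workload:

    livenessProbe:
      exec:
        # The probe fails if the ossfs mount point is no longer accessible
        command: ["ls", "/data"]
      initialDelaySeconds: 10
      periodSeconds: 30
      timeoutSeconds: 5
      failureThreshold: 3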

Method 1: Authenticate using RRSA

By leveraging RRSA, you can achieve fine-grained, PV-level permission isolation for accessing cloud resources, which significantly reduces security risks. For details, see Use RRSA to authorize different pods to access different cloud services.

Step 1: Create a RAM role

If you have already mounted an OSS volume in the cluster by using RRSA, skip this step. Otherwise, follow these steps:

  1. Enable the RRSA feature in the ACK console.

  2. Create a RAM role for an OIDC IdP. This role will be assumed using RRSA.

    The following list describes the key parameters for the sample role demo-role-for-rrsa:

    • Identity Provider Type: Select OIDC.

    • Identity Provider: Select the provider associated with your cluster, such as ack-rrsa-<cluster_id>, where <cluster_id> is your actual cluster ID.

    • Condition:

      • oidc:iss: Keep the default value.

      • oidc:aud: Keep the default value.

      • oidc:sub: Manually add the following condition:

        • Key: Select oidc:sub.

        • Operator: Select StringEquals.

        • Value: Enter system:serviceaccount:ack-csi-fuse:csi-fuse-ossfs.

          In this value, ack-csi-fuse is the namespace where the ossfs client is located and cannot be customized. csi-fuse-ossfs is the service account name and can be changed as needed.

          For more information about how to modify the service account name, see FAQ about ossfs 2.0 volumes.

    • Role Name: Enter demo-role-for-rrsa.
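
    For reference, after you complete the configuration, the role's trust policy should resemble the following sketch. This is an illustration, not an exact template: the <account_id> and <cluster_id> placeholders, and the omitted default oidc:iss condition, must match what the RAM console generates for your identity provider.

    {
        "Statement": [
            {
                "Action": "sts:AssumeRole",
                "Condition": {
                    "StringEquals": {
                        "oidc:aud": "sts.aliyuncs.com",
                        "oidc:sub": "system:serviceaccount:ack-csi-fuse:csi-fuse-ossfs"
                    }
                },
                "Effect": "Allow",
                "Principal": {
                    "Federated": [
                        "acs:ram::<account_id>:oidc-provider/ack-rrsa-<cluster_id>"
                    ]
                }
            }
        ],
        "Version": "1"
    }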

Step 2: Grant permissions to the demo-role-for-rrsa role

  1. Create a custom policy that defines the OSS access permissions to grant to the RAM role. For more information, see Create custom policies.

    Select the read-only policy or read-write policy based on your business requirements. Replace mybucket with the name of the bucket you created.

    • Policy that provides read-only permissions on OSS

      {
          "Statement": [
              {
                  "Action": [
                      "oss:Get*",
                      "oss:List*"
                  ],
                  "Effect": "Allow",
                  "Resource": [
                      "acs:oss:*:*:mybucket",
                      "acs:oss:*:*:mybucket/*"
                  ]
              }
          ],
          "Version": "1"
      }
    • Policy that provides read-write permissions on OSS

      {
          "Statement": [
              {
                  "Action": "oss:*",
                  "Effect": "Allow",
                  "Resource": [
                      "acs:oss:*:*:mybucket",
                      "acs:oss:*:*:mybucket/*"
                  ]
              }
          ],
          "Version": "1"
      }
  2. Optional. If the objects in the OSS bucket are encrypted by using a specified customer master key (CMK) in Key Management Service (KMS), you must also grant KMS access permissions to the RAM role. For more information, see Encryption.

  3. Grant the required permissions to the demo-role-for-rrsa role. For more information, see Grant permissions to a RAM role.

    Note

    If you want to use an existing RAM role that has OSS access permissions, you can modify the trust policy of the RAM role. For more information, see Use an existing RAM role and grant the required permissions to the RAM role.

Step 3: Create a PV

Create a PV to register an existing OSS bucket in the cluster.

  1. Create a file named ossfs2-pv.yaml with the following content. This PV definition tells Kubernetes how to access your OSS bucket using the RRSA role.

    The following PV mounts an OSS bucket named cnfs-oss-test as a 20 GB read-only file system for pods in the cluster to use.
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv-ossfs2  # PV name
    spec:
      capacity:
        storage: 20Gi  # Define the volume capacity (this value is only for matching PVCs)
      accessModes:  # Access mode
        - ReadOnlyMany
      persistentVolumeReclaimPolicy: Retain
      csi:
        driver: ossplugin.csi.alibabacloud.com
        volumeHandle: pv-ossfs2  # Must be the same as the PV name (metadata.name)
        volumeAttributes:
          fuseType: ossfs2
          bucket: cnfs-oss-test # The name of the OSS Bucket
          path: /subpath  # The subdirectory to mount. Leave it empty to mount the root directory.
          url: oss-cn-hangzhou-internal.aliyuncs.com  # The endpoint of the region where the OSS Bucket is located
          otherOpts: "-o close_to_open=false"
          authType: "rrsa"
          roleName: "demo-role-for-rrsa"  # The RAM role created earlier
    • Parameters in volumeAttributes:

      • fuseType (required): Specifies the client to use for the mount. Must be set to ossfs2 to use the ossfs 2.0 client.

      • bucket (required): The name of the OSS bucket you want to mount.

      • path (optional): The subdirectory within the OSS bucket to mount. If not specified, the root of the bucket is mounted by default.

      • url (required): The access endpoint for the OSS bucket.

        • Use an internal endpoint if your cluster nodes and the bucket are in the same region, or if a Virtual Private Cloud (VPC) connection is established.

        • Use a public endpoint if the mount node and the bucket are in different regions.

        The following are common formats for different access endpoints:

        • Internal: http://oss-{{regionName}}-internal.aliyuncs.com or https://oss-{{regionName}}-internal.aliyuncs.com.

          The internal access endpoint format vpc100-oss-{{regionName}}.aliyuncs.com is deprecated. Switch to the new format as soon as possible.

        • Public: http://oss-{{regionName}}.aliyuncs.com or https://oss-{{regionName}}.aliyuncs.com.

      • otherOpts (optional): Additional mount options passed to the OSS volume, specified as a string of space-separated flags in the format -o *** -o ***. For example, -o close_to_open=false.

        The close_to_open option controls metadata consistency. When set to false (the default), metadata is cached to improve performance for workloads that read many small files. When set to true, the client fetches fresh metadata from OSS by sending a GetObjectMeta request every time a file is opened, ensuring real-time consistency at the cost of increased latency and API calls.

        For more optional parameters, see ossfs 2.0 mount options.

      • authType (required): Set to rrsa to declare that the RRSA authentication method is used.

      • roleName (required): Set to the name of the RAM role you created or modified.

        To assign different permissions to different PVs, create a separate RAM role for each permission set. Then, in each PV's definition, use the roleName parameter to associate it with the corresponding role.

        To use specified ARNs or a ServiceAccount with the RRSA authentication method, see How do I use specified ARNs or a ServiceAccount with the RRSA authentication method?
  2. Apply the PV definition.

    kubectl create -f ossfs2-pv.yaml
  3. Verify the PV status.

    kubectl get pv pv-ossfs2

    The output shows that the PV status is Available.

    NAME        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
    pv-ossfs2   20Gi       ROX            Retain           Available                          <unset>                          15s

Step 4: Create a PVC

Create a PVC to declare the persistent storage capacity required by the application.

  1. Create a file named ossfs2-pvc-static.yaml with the following YAML content.

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: pvc-ossfs2  # PVC name
      namespace: default
    spec:
      # The following configuration must be consistent with the PV
      accessModes:
        - ReadOnlyMany
      resources:
        requests:
          storage: 20Gi
      volumeName: pv-ossfs2  # The PV to bind
  2. Create the PVC.

    kubectl create -f ossfs2-pvc-static.yaml
  3. Check the PVC status.

    kubectl get pvc pvc-ossfs2

    The output shows that the PVC is bound to the PV.

    NAME         STATUS   VOLUME      CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
    pvc-ossfs2   Bound    pv-ossfs2   20Gi       ROX                           <unset>                 6s

Step 5: Create an application and mount the volume

You can now mount the PVC to an application.

  1. Create a file named ossfs2-test.yaml to define your application.

    The following YAML template creates a StatefulSet that contains one pod. The pod requests storage resources through a PVC named pvc-ossfs2 and mounts the volume to the /data path.

    YAML template

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: ossfs2-test
      namespace: default
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: ossfs2-test
      template:
        metadata:
          labels:
            app: ossfs2-test
        spec:
          containers:
          - name: nginx
            image: anolis-registry.cn-zhangjiakou.cr.aliyuncs.com/openanolis/nginx:1.14.1-8.6
            ports:
            - containerPort: 80
            volumeMounts:
            - name: pvc-ossfs2
              mountPath: /data
          volumes:
            - name: pvc-ossfs2
              persistentVolumeClaim:
                claimName: pvc-ossfs2  # The PVC to reference
  2. Deploy the application.

    kubectl create -f ossfs2-test.yaml
  3. Check the pod deployment status.

    kubectl get pod -l app=ossfs2-test

    Expected output:

    NAME            READY   STATUS    RESTARTS   AGE
    ossfs2-test-0   1/1     Running   0          12m
  4. Verify that the application can access the data in OSS.

    kubectl exec -it ossfs2-test-0 -- ls /data

    The output should show the data in the OSS mount path.

Method 2: Authenticate using an AccessKey

ACK supports authenticating OSS volume mounts by storing a static AccessKey in a Kubernetes Secret. This approach is suitable for scenarios where a specific application requires long-term, fixed access permissions.

  • If the AccessKey referenced by the PV is revoked or its permissions are changed, any application using the volume immediately loses access and encounters permission errors. To restore access, update the credentials in the Secret, then restart the application's pods to force a remount (example commands follow this list). This process causes a brief service interruption and should only be performed during a scheduled maintenance window.

  • To avoid the downtime associated with manual key rotation, we strongly recommend using the RRSA authentication method instead.
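
If you must rotate an AccessKey, the following commands are one way to do it. They assume the Secret (oss-secret, in the default namespace) and the StatefulSet (ossfs2-test) that are created in the steps below:

# Overwrite the existing Secret with the new AccessKey pair
kubectl create secret generic oss-secret -n default \
  --from-literal='akId=<new-AccessKey-ID>' \
  --from-literal='akSecret=<new-AccessKey-secret>' \
  --dry-run=client -o yaml | kubectl apply -f -

# Restart the pods so that the volume is remounted with the new credentials
kubectl rollout restart statefulset ossfs2-test -n default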

Step 1: Create a RAM user with OSS access permissions and obtain an AccessKey pair

Create a RAM user and grant permissions

  1. Create a RAM user. You can skip this step if you have an existing RAM user. For more information about how to create a RAM user, see Create a RAM user.

  2. Create a custom policy to grant OSS access permissions to the RAM user. For more information, see Create custom policies.

    Select the read-only policy or read-write policy based on your business requirements. Replace mybucket with the name of the bucket you created.

    • Policy that provides read-only permissions on OSS

      {
          "Statement": [
              {
                  "Action": [
                      "oss:Get*",
                      "oss:List*"
                  ],
                  "Effect": "Allow",
                  "Resource": [
                      "acs:oss:*:*:mybucket",
                      "acs:oss:*:*:mybucket/*"
                  ]
              }
          ],
          "Version": "1"
      }
    • Policy that provides read-write permissions on OSS

      {
          "Statement": [
              {
                  "Action": "oss:*",
                  "Effect": "Allow",
                  "Resource": [
                      "acs:oss:*:*:mybucket",
                      "acs:oss:*:*:mybucket/*"
                  ]
              }
          ],
          "Version": "1"
      }
  3. Optional. If the objects in the OSS bucket are encrypted by using a specified customer master key (CMK) in Key Management Service (KMS), you need to grant KMS access permissions to the RAM user. For more information, see Encryption.

  4. Grant OSS access permissions to the RAM user. For more information, see Grant permissions to a RAM user.

  5. Create an AccessKey pair for the RAM user. For more information, see Create an AccessKey pair.

Create a Secret to store the AccessKey credentials

Run the following command to create a Secret for OSS authentication. Replace the xxxxxx placeholders with the AccessKey ID and AccessKey secret of the RAM user.

kubectl create -n default secret generic oss-secret --from-literal='akId=xxxxxx' --from-literal='akSecret=xxxxxx'
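
Optionally, verify that the Secret exists (the values are stored Base64-encoded):

kubectl get secret oss-secret -n default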

Step 2: Create a PV

Register an existing OSS bucket in the cluster by creating a PV.

  1. Create a file named ossfs2-pv-ak.yaml with the following content. This PV definition includes a nodePublishSecretRef that points to the Secret you created.

    The following PV mounts an OSS bucket named cnfs-oss-test as a 20 GB read-only file system for pods in the cluster to use.
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv-ossfs2  # PV name
    spec:
      capacity:
        storage: 20Gi  # Define the volume capacity (this value is only for matching PVCs)
      accessModes:  # Access mode
        - ReadOnlyMany
      persistentVolumeReclaimPolicy: Retain
      csi:
        driver: ossplugin.csi.alibabacloud.com
        volumeHandle: pv-ossfs2   # Must be the same as the PV name (metadata.name)
        # Use the Secret created earlier
        nodePublishSecretRef:
          name: oss-secret  # The name of the Secret that stores the AccessKey information
          namespace: default  # The namespace where the Secret is located
        volumeAttributes:
          fuseType: ossfs2
          bucket: cnfs-oss-test  # The name of the OSS Bucket
          path: /subpath  # The subdirectory to mount. Leave it empty to mount the root directory.
          url: oss-cn-hangzhou-internal.aliyuncs.com  # The endpoint of the region where the OSS Bucket is located
          otherOpts: "-o close_to_open=false"
    • Parameters in nodePublishSecretRef:

      • name (required): The name of the Secret that stores the AccessKey information.

      • namespace (required): The namespace where the Secret storing the AccessKey information is located.

    • Parameters in volumeAttributes:

      • fuseType (required): Specifies the client to use for the mount. Must be set to ossfs2 to use the ossfs 2.0 client.

      • bucket (required): The name of the OSS bucket you want to mount.

      • path (optional): The subdirectory within the OSS bucket to mount, relative to the bucket root. If not specified, the root of the bucket is mounted by default.

      • url (required): The endpoint of the OSS bucket you want to mount. You can retrieve the endpoint from the Overview page of the bucket in the OSS console.

        • If the bucket is mounted to a node in the same region as the bucket, or the bucket can be connected to the node through a virtual private cloud (VPC), use the internal endpoint of the bucket.

        • If the bucket is mounted to a node in a different region, use the public endpoint of the bucket.

        Public endpoints and internal endpoints have different formats:

        • Internal: http://oss-{{regionName}}-internal.aliyuncs.com or https://oss-{{regionName}}-internal.aliyuncs.com.

          Important: The vpc100-oss-{{regionName}}.aliyuncs.com format for internal endpoints is deprecated.

        • Public: http://oss-{{regionName}}.aliyuncs.com or https://oss-{{regionName}}.aliyuncs.com.

      • otherOpts (optional): Custom mount options for the OSS volume in the format -o *** -o ***, for example, -o close_to_open=false.

        The close_to_open option is disabled (false) by default, which caches metadata to improve performance for workloads that read many small files. If enabled (true), the client sends a GetObjectMeta request to OSS each time a file is opened to get the latest metadata, ensuring real-time consistency at the cost of increased access latency and API calls.

        For more optional parameters, see ossfs 2.0 mount options.

  2. Apply the PV definition.

    kubectl create -f ossfs2-pv-ak.yaml
  3. Check the PV status.

    kubectl get pv pv-ossfs2

    The following output shows that the PV is Available:

    NAME        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
    pv-ossfs2   20Gi       ROX            Retain           Available                          <unset>                          15s

Step 3: Create a PVC

Create a PVC to declare the persistent storage capacity required by the application.

  1. Create a file named ossfs2-pvc-static.yaml to claim the PV.

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: pvc-ossfs2  # PVC name
      namespace: default
    spec:
      # The following configuration must be consistent with the PV
      accessModes:
        - ReadOnlyMany
      resources:
        requests:
          storage: 20Gi
      volumeName: pv-ossfs2  # The PV to bind
  2. Create the PVC.

    kubectl create -f ossfs2-pvc-static.yaml
  3. Verify that the PVC is Bound to the PV.

    kubectl get pvc pvc-ossfs2

    Expected output:

    NAME         STATUS   VOLUME      CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
    pvc-ossfs2   Bound    pv-ossfs2   20Gi       ROX                           <unset>                 6s

Step 4: Create an application and mount the volume

You can now mount the PVC to an application.

  1. Create a file named ossfs2-test.yaml to define your application.

    The following YAML template creates a StatefulSet that contains one pod. The pod requests storage resources through a PVC named pvc-ossfs2 and mounts the volume to the /data path.

    YAML template

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: ossfs2-test
      namespace: default
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: ossfs2-test
      template:
        metadata:
          labels:
            app: ossfs2-test
        spec:
          containers:
          - name: nginx
            image: anolis-registry.cn-zhangjiakou.cr.aliyuncs.com/openanolis/nginx:1.14.1-8.6
            ports:
            - containerPort: 80
            volumeMounts:
            - name: pvc-ossfs2
              mountPath: /data
          volumes:
            - name: pvc-ossfs2
              persistentVolumeClaim:
                claimName: pvc-ossfs2  # The PVC to reference
  2. Deploy the application.

    kubectl create -f ossfs2-test.yaml
  3. Check the pod deployment status.

    kubectl get pod -l app=ossfs2-test

    Expected output:

    NAME            READY   STATUS    RESTARTS   AGE
    ossfs2-test-0   1/1     Running   0          12m
  4. Verify that the application can access the data in OSS.

    kubectl exec -it ossfs2-test-0 -- ls /data

    The output should show the data in the OSS mount path.

Apply in production

Security and permissions

  • Prefer RRSA for authentication: It provides temporary, auto-rotating credentials through OpenID Connect (OIDC) and Security Token Service (STS) and enables fine-grained, pod-level permission isolation, significantly reducing the risk of credential leakage.

  • Follow the principle of least privilege: When creating RAM roles or users, grant only the minimum permissions required for the application to function.

Performance and cost

  • Optimize mount options (otherOpts):

    • Metadata caching (-o close_to_open=false): This is the default behavior. It caches file metadata, which reduces latency and API call costs, making it ideal for workloads that read many small files.

    • Real-time metadata (-o close_to_open=true): Use this only if another system frequently updates files in OSS and your Pod needs to see those changes immediately. This option increases latency and API costs.

    For more fine-grained performance optimization based on your business scenario, see ossfs 2.0 Mount Options.

  • Understand your workload:

    • ossfs 2.0 is best for read-heavy and append-only write patterns that do not rely on full POSIX semantics, such as AI training, inference, big data processing, and autonomous driving.

    • ossfs 2.0 is not suitable for random-write workloads that require frequent modification of file content, such as databases or collaborative editing applications.

  • Clean up incomplete uploads: If your application frequently aborts large file uploads, configure a lifecycle rule on your OSS bucket to automatically delete incomplete multipart uploads on a regular basis (a sample rule follows this list). This prevents unmerged parts from accumulating and incurring unnecessary storage costs.

  • Use an internal endpoint: If your cluster and OSS bucket are in the same region, always use the internal endpoint to avoid public data transfer costs and reduce network latency.
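
As a sketch, a lifecycle rule that aborts incomplete multipart uploads after 7 days can be expressed in the OSS lifecycle XML format as follows. You typically configure this through the OSS console or the PutBucketLifecycle API; the rule ID and the 7-day threshold are illustrative:

<LifecycleConfiguration>
  <Rule>
    <ID>cleanup-incomplete-multipart-uploads</ID>
    <Prefix></Prefix>
    <Status>Enabled</Status>
    <AbortMultipartUpload>
      <Days>7</Days>
    </AbortMultipartUpload>
  </Rule>
</LifecycleConfiguration>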

O&M management

  • Configure health checks: Add a liveness probe to your application pods to check the availability of the mount point. If the mount fails, Kubernetes automatically restarts the pod, triggering a remount (a sample probe is shown in Considerations).

  • Set up monitoring and alerting: Use container storage monitoring to track volume performance and capacity, and configure alerts to proactively identify issues.

FAQ