Container Compute Service: Mount a statically provisioned OSS volume

Last Updated: Nov 19, 2025

If your applications need to store unstructured data, such as images, audio files, and video files, you can mount Object Storage Service (OSS) volumes to your applications as persistent volumes (PVs). This topic describes how to mount a statically provisioned OSS volume to an application and how to verify that the OSS volume can be used to share and persist data.

Background information

OSS is a secure, cost-effective, high-capacity, and highly reliable cloud storage service provided by Alibaba Cloud. OSS is suitable for storing unstructured data that is not frequently modified, such as images, audio files, and video files. For more information, see Storage overview.

OSS volume clients

OSS volumes can be mounted locally as a file system using a client based on Filesystem in Userspace (FUSE). Compared to traditional local storage and block storage, FUSE-based clients have some limitations in terms of POSIX compatibility. ACS supports the following OSS volume clients.

| Scenarios | Client | Type | Description |
| --- | --- | --- | --- |
| Most scenarios, such as read/write operations or scenarios that require user permission configuration. | ossfs 1.0 | FUSE | Supports most POSIX operations, including append writes, random writes, and user permission settings. |
| Read-only or sequential append-only write scenarios, such as AI training, inference, big data processing, and autonomous driving. | ossfs 2.0 | FUSE | Supports full reads and sequential append writes. Suitable for read-intensive scenarios, such as AI training, inference, big data processing, and autonomous driving, and can significantly improve data read performance. |

Note: ossfs 2.0 currently supports only GPU computing power. To use CPU computing power, submit a ticket.
  • If you are unsure about the read and write model of your application, use ossfs 1.0. ossfs 1.0 offers better POSIX compatibility and ensures stable application operations.

  • For scenarios where read and write operations can be separated, such as when read and write operations are not performed at the same time or are performed on different files (for example, for breakpoint saving or persistent log saving), you can use different volumes. For example, you can use an ossfs 2.0 volume to mount a read-only path and an ossfs 1.0 volume to mount a write path.
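For example, the following minimal sketch separates the two paths in a single pod. It assumes you have already created two PVCs by following the procedures later in this topic; the PVC names oss2-readonly-pvc (backed by an ossfs 2.0 PV) and oss1-write-pvc (backed by an ossfs 1.0 PV) and the pod name are hypothetical placeholders.

    apiVersion: v1
    kind: Pod
    metadata:
      name: train-demo            # hypothetical example name
    spec:
      containers:
      - name: app
        image: registry.cn-hangzhou.aliyuncs.com/acs-sample/nginx:latest
        volumeMounts:
        - name: dataset           # read-only data path via ossfs 2.0
          mountPath: /data/dataset
          readOnly: true
        - name: checkpoints       # write path (checkpoints, logs) via ossfs 1.0
          mountPath: /data/output
      volumes:
      - name: dataset
        persistentVolumeClaim:
          claimName: oss2-readonly-pvc   # hypothetical PVC backed by an ossfs 2.0 PV
          readOnly: true
      - name: checkpoints
        persistentVolumeClaim:
          claimName: oss1-write-pvc      # hypothetical PVC backed by an ossfs 1.0 PV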

POSIX API support

The following table describes the support for common POSIX APIs that are provided by ossfs 1.0 and ossfs 2.0.

| Category | Operation/Feature | ossfs 1.0 | ossfs 2.0 |
| --- | --- | --- | --- |
| Basic file operations | open | Supported | Supported |
| Basic file operations | flush | Supported | Supported |
| Basic file operations | close | Supported | Supported |
| File reads and writes | read | Supported | Supported |
| File reads and writes | write | Supports random writes (requires disk cache) | Supports only sequential writes (no disk cache required) |
| File reads and writes | truncate | Supported (file size can be adjusted) | Supports only emptying the file |
| File metadata operations | create | Supported | Supported |
| File metadata operations | unlink | Supported | Supported |
| File metadata operations | rename | Supported | Supported |
| Directory operations | mkdir | Supported | Supported |
| Directory operations | readdir | Supported | Supported |
| Directory operations | rmdir | Supported | Supported |
| Permissions and properties | getattr | Supported | Supported |
| Permissions and properties | chmod | Supported | Supported (the operation completes without an error, but the setting does not take effect) |
| Permissions and properties | chown | Supported | Supported (the operation completes without an error, but the setting does not take effect) |
| Permissions and properties | utimes | Supported | Supported |
| Extended features | setxattr | Supported | Not supported |
| Extended features | symlink | Supported | Not supported |
| Extended features | lock | Not supported | Not supported |
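If you are unsure whether your application's write pattern fits ossfs 2.0, you can probe it from a pod that already has the volume mounted at /data. This is only a hedged sketch; the pod name and file name are placeholders. Based on the table above, a sequential append should generally succeed on both clients, while an overwrite at a non-sequential offset is expected to fail on ossfs 2.0.

    # Sequential append write: expected to work on both ossfs 1.0 and ossfs 2.0
    kubectl exec <pod-name> -- sh -c 'echo "append test" >> /data/probe.log'

    # Overwrite at a random offset: expected to fail on ossfs 2.0 (sequential writes only)
    kubectl exec <pod-name> -- sh -c 'dd if=/dev/zero of=/data/probe.log bs=1M count=1 seek=10 conv=notrunc'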

Performance benchmarks

ossfs 2.0 provides significant performance improvements over ossfs 1.0 in sequential reads and writes and in concurrent reads of small files.

  • Sequential write performance: In single-threaded sequential write scenarios for large files, ossfs 2.0 increases bandwidth by nearly 18 times compared to ossfs 1.0.

  • Sequential read performance

    • In single-threaded sequential read scenarios for large files, ossfs 2.0 increases bandwidth by about 8.5 times compared to ossfs 1.0.

    • In multi-threaded (4 threads) sequential read scenarios for large files, ossfs 2.0 increases bandwidth by more than 5 times compared to ossfs 1.0.

  • Concurrent small file read performance: In high-concurrency (128 threads) scenarios for reading small files, ossfs 2.0 increases bandwidth by more than 280 times compared to ossfs 1.0.

If the read and write performance, such as latency and throughput, does not meet your requirements, see Best practices for optimizing the performance of OSS volumes.
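As a rough way to check the bandwidth your own workload sees, you can run a simple sequential-read test against the mount path inside a pod. The following is only a sketch: it assumes fio is installed in the container image, that the OSS volume is mounted at /data, and the file name is a placeholder.

    # Single-threaded 1 MiB sequential reads against a file on the OSS volume
    fio --name=seqread --rw=read --bs=1M --size=1G --numjobs=1 \
        --filename=/data/fio-testfile --group_reporting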

Prerequisites

The managed-csiprovisioner component is installed in the ACS cluster.

Note

Go to the ACS cluster management page in the ACS console. In the left-side navigation pane of the cluster management page, choose Operations > Add-ons. On the Storage tab, you can check whether managed-csiprovisioner is installed.
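Alternatively, you can run a quick check with kubectl. The driver name below matches the one used in the PV examples in this topic; the exact objects exposed by the managed component may vary, so treat this as a hint rather than a definitive check.

    # Check whether the OSS CSI driver is registered in the cluster
    kubectl get csidriver ossplugin.csi.alibabacloud.com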

Usage notes

Note

The following notes apply mainly to general read and write scenarios (ossfs 1.0). They are generally not applicable to the ossfs 2.0 client because it supports only some POSIX operations (mainly read operations).

  • ACS supports only statically provisioned OSS volumes. Dynamically provisioned OSS volumes are not supported.

  • Random or append writes involve creating a new file locally and re-uploading it to the OSS server. Because of the storage characteristics of OSS, note the following:

    • The rename operation for files and folders is not atomic.

    • Avoid concurrent writes or performing operations such as compression and decompression directly in the mount path.

      Important

      In multi-write scenarios, you must coordinate the behavior of each client. ACS does not guarantee data consistency for metadata or data issues caused by write operations.

In addition, note the following limitations:

  • Hard links are not supported.

  • You cannot mount buckets with a StorageClass of Archive Storage, Cold Archive, or Deep Cold Archive.

  • For ossfs 1.0 volumes, the readdir operation sends many headObject requests by default to obtain extended information about all objects in the path. When the destination path contains many files, the overall performance of ossfs may be affected. If file permissions and other properties are not critical in your read and write scenarios, you can enable the -o readdir_optimize parameter to optimize performance. For more information, see New readdir optimization feature.
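    For example, a hedged sketch of the corresponding volumeAttributes in the PV described later in this topic (the bucket name and endpoint are placeholders):

      volumeAttributes:
        bucket: "<your OSS Bucket Name>"
        url: "<your OSS Bucket Endpoint>"
        # readdir_optimize skips the extra headObject requests during readdir;
        # file permissions and some properties are then not returned accurately
        otherOpts: "-o umask=022 -o allow_other -o readdir_optimize"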

Create an OSS bucket and obtain the bucket information

  1. Create an OSS bucket.

    1. Log on to the OSS console. In the navigation pane on the left, click Buckets.

    2. Click Create Bucket.

    3. In the panel that appears, configure the parameters for the OSS bucket and click Create.

      The following table describes the key parameters. For more information, see Create buckets.

      • Bucket Name: Enter a custom name. The name must be globally unique within OSS and cannot be changed after the bucket is created. For more information about the format requirements, see the on-screen instructions.

      • Region: Select Region-specific and select the region where the ACS cluster resides. This allows pods in the ACS cluster to access the OSS bucket over the internal network.

  2. (Optional) To mount a subdirectory of the OSS bucket, create the subdirectory in advance.

    1. On the Buckets page, click the name of the destination bucket.

    2. In the navigation pane on the left of the bucket details page, choose Files > Objects.

    3. Click Create Directory to create the required directories in the OSS bucket.

  3. Obtain the endpoint of the OSS bucket.

    1. On the Buckets page, click the name of the destination bucket.

    2. On the bucket details page, click the Overview tab. In the Port section, copy the endpoint.

      • If the OSS bucket and the ACS cluster are in the same region, copy the VPC endpoint.

      • If the bucket is region-agnostic or is in a different region from the ACS cluster, copy the public endpoint.

  4. Obtain an AccessKey ID and an AccessKey secret to authorize access to OSS. For more information, see Obtain an AccessKey pair.

    Note

    To mount an OSS bucket that belongs to another Alibaba Cloud account, you must obtain the AccessKey pair from that account.
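    If you plan to follow the kubectl-based procedure below, you can also store the AccessKey pair in a secret directly from the command line instead of writing it into the YAML file. The key names akId and akSecret must match the ones referenced by the PV.

      # Create the secret referenced by nodePublishSecretRef in the PV examples below
      kubectl create secret generic oss-secret \
        --namespace default \
        --from-literal=akId='<your AccessKey ID>' \
        --from-literal=akSecret='<your AccessKey Secret>'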

Mount an OSS volume

ossfs 1.0 volumes

kubectl

Step 1: Create a PV

  1. Save the following YAML content as oss-pv.yaml.

    apiVersion: v1
    kind: Secret
    metadata:
      name: oss-secret
      namespace: default
    stringData:
      akId: <your AccessKey ID>
      akSecret: <your AccessKey Secret>
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: oss-pv
      labels:
        alicloud-pvname: oss-pv
    spec:
      storageClassName: test 
      capacity:
        storage: 20Gi
      accessModes:
        - ReadWriteMany
      persistentVolumeReclaimPolicy: Retain
      csi:
        driver: ossplugin.csi.alibabacloud.com
        volumeHandle: oss-pv
        nodePublishSecretRef:
          name: oss-secret
          namespace: default
        volumeAttributes:
          bucket: "<your OSS Bucket Name>"
          url: "<your OSS Bucket Endpoint>"
          otherOpts: "-o umask=022 -o allow_other"
    Note

    The preceding YAML creates a secret and a PV. The secret stores the AccessKey pair to be securely used by the PV. Replace the value of akId with your AccessKey ID and the value of akSecret with your AccessKey secret.

    The PV parameters are described below.

    • alicloud-pvname: The label of the PV. It is used to bind the PVC.

    • storageClassName: Used only to bind the PVC. You do not need to associate an actual StorageClass.

    • storage: The storage capacity of the OSS volume. The capacity of a statically provisioned OSS volume is for declaration purposes only. The actual capacity is not limited and is subject to the amount displayed in the OSS console.

    • accessModes: The access mode.

    • persistentVolumeReclaimPolicy: The reclaim policy.

    • driver: The driver type. It is set to ossplugin.csi.alibabacloud.com, which indicates that the Alibaba Cloud OSS CSI plug-in is used.

    • volumeHandle: The unique identifier of the PV. It must be consistent with metadata.name.

    • nodePublishSecretRef: Obtains the AccessKey pair from the specified secret for authorization.

    • bucket: The name of the OSS bucket. Replace the value with the actual name of your OSS bucket.

    • url: The endpoint of the OSS bucket. Replace the value with the actual endpoint of your OSS bucket.

      • If the OSS bucket and the ACS cluster are in the same region, use the VPC endpoint, for example, oss-cn-shanghai-internal.aliyuncs.com.

      • If the bucket is region-agnostic or is in a different region from the ACS cluster, use the public endpoint, for example, oss-cn-shanghai.aliyuncs.com.

    • otherOpts: Custom mount options for the OSS volume in the format -o *** -o ***, such as -o umask=022 -o max_stat_cache_size=100000 -o allow_other.

      • umask: Changes the read permissions for ossfs files. For example, umask=022 changes the permissions of ossfs files to 755. This resolves permission issues for files uploaded through other methods, such as the SDK or the OSS console, which have a default permission of 640. We recommend configuring this parameter for read/write splitting or multi-user access.

      • max_stat_cache_size: Sets the upper limit for metadata cache entries (for example, 100000). The cache stores object metadata in memory to improve the performance of operations such as ls and stat. However, the cache cannot promptly detect file modifications made through the OSS console, SDK, or ossutil, which may cause the application to read inconsistent data. If you have strict data consistency requirements, set this parameter to 0 (to disable the cache) or lower the cache expiration time with the stat_cache_expire parameter. Both options reduce read performance.

      • allow_other: Allows users other than the mounting user to access files and directories in the mount target. This is suitable for multi-user shared environments where non-mounting users also need to access data.

    For more optional parameters, see Options supported by ossfs and ossfs 1.0 configuration best practices.
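    For example, if multiple clients or tools modify the same objects and you need stricter metadata consistency, a hedged variant of the otherOpts value could look like the following (it trades some read performance, as noted above):

      # Disable the metadata cache for stricter consistency (reduces read performance)
      otherOpts: "-o umask=022 -o allow_other -o max_stat_cache_size=0"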

  2. Run the following command to create the secret and PV:

    kubectl create -f oss-pv.yaml
  3. Check the status of the PV.

    kubectl get pv

    Expected output:

    NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
    oss-pv   20Gi       RWX            Retain           Available           test           <unset>                          9s
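    If the PV does not reach the Available status, you can inspect its events and the configured volumeAttributes:

    kubectl describe pv oss-pv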

Step 2: Create a PVC

  1. Save the following YAML content as oss-pvc.yaml.

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: oss-pvc
    spec:
      storageClassName: test
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 20Gi
      selector:
        matchLabels:
          alicloud-pvname: oss-pv

    The parameters are described below.

    • storageClassName: Used only to bind the PV. You do not need to associate an actual StorageClass. It must be consistent with the spec.storageClassName of the PV.

    • accessModes: The access mode.

    • storage: The storage capacity allocated to the pod. It cannot exceed the capacity of the OSS volume.

    • alicloud-pvname: The label of the PV to bind. It must be consistent with the metadata.labels.alicloud-pvname of the PV.

  2. Run the following command to create the PVC:

    kubectl create -f oss-pvc.yaml
  3. Check the status of the PVC.

    kubectl get pvc

    The following output confirms that the PVC is bound to the PV that you created in Step 1.

    NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
    oss-pvc   Bound    oss-pv   20Gi       RWX            test           <unset>                 6s

Step 3: Create an application and mount the OSS volume

  1. Save the following YAML content as oss-test.yaml.

    The following YAML example creates a deployment with two pods. Both pods request storage resources using the PVC named oss-pvc. The mount path for both is /data.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: oss-test
      labels:
        app: nginx
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: registry.cn-hangzhou.aliyuncs.com/acs-sample/nginx:latest
            ports:
            - containerPort: 80
            volumeMounts:
              - name: pvc-oss
                mountPath: /data
          volumes:
            - name: pvc-oss
              persistentVolumeClaim:
                claimName: oss-pvc
  2. Run the following command to create the deployment and mount the OSS volume:

    kubectl create -f oss-test.yaml
  3. Check the status of the pods in the deployment.

    kubectl get pod | grep oss-test

    The following example output shows that two pods are created.

    oss-test-****-***a   1/1     Running   0          28s
    oss-test-****-***b   1/1     Running   0          28s
  4. View the mount path.

    The following command is an example. By default, the directory is empty and no output is returned.

    kubectl exec oss-test-****-***a -- ls /data

Console

Step 1: Create a PV

  1. Log on to the ACS console.

  2. On the Clusters page, click the name of the cluster to go to the cluster management page.

  3. In the left-side navigation pane of the cluster management page, choose Volumes > Persistent Volumes.

  4. On the Persistent Volumes page, click Create.

  5. In the Create Persistent Volume dialog box, configure the parameters and click Create.

    • PV Type: Select OSS. Example: OSS.

    • Name: Enter a custom name for the PV. For more information about the format requirements, see the on-screen instructions. Example: oss-pv.

    • Capacity: The storage capacity of the OSS volume. The capacity of a statically provisioned OSS volume is for declaration purposes only. The actual capacity is not limited and is subject to the amount displayed in the OSS console. Example: 20 Gi.

    • Access Mode: Select one of the following options as needed:

      • ReadOnlyMany: The volume can be mounted by multiple pods in read-only mode.

      • ReadWriteMany: The volume can be mounted by multiple pods in read/write mode.

      Example: ReadWriteMany.

    • Access Certificate: To ensure security, save the AccessKey information in a secret. This topic uses Create Secret as an example, with the following values:

      • Namespace: default

      • Name: oss-secret

      • AccessKey ID: ********

      • AccessKey Secret: ********

    • Bucket ID: Select an OSS bucket. Example: oss-acs-***.

    • OSS Path: The directory to mount. The root directory (/) is mounted by default. You can mount a subdirectory (such as /dir) as needed. Make sure the subdirectory already exists. Example: /.

    • Endpoint: The endpoint of the OSS bucket.

      • If the OSS bucket and the ACS cluster are in the same region, select Internal Endpoint.

      • If the bucket is region-agnostic or is in a different region from the ACS cluster, select Public Endpoint.

      Example: Private Domain Name.

    After the PV is created, you can view its information on the Persistent Volumes page. The PV is not yet bound to a PVC.

Step 2: Create a PVC

  1. In the left-side navigation pane of the cluster management page, choose Volumes > Persistent Volume Claims.

  2. On the Persistent Volume Claims page, click Create.

  3. In the Create Persistent Volume Claim dialog box, configure the parameters and click Create.

    • PVC Type: Select OSS. Example: OSS.

    • Name: Enter a custom name for the PVC. For more information about the format requirements, see the on-screen instructions. Example: oss-pvc.

    • Allocation Mode: Select Existing Volume. Example: Existing Volume.

    • Existing Volume: Select the PV that you created earlier. Example: oss-pv.

    • Total: The storage capacity allocated to the pod. It cannot exceed the capacity of the OSS volume. Example: 20 Gi.

    After the PVC is created, you can view its details on the Persistent Volume Claims page. The PVC is bound to the corresponding Persistent Volume (PV), which is the OSS volume.

Step 3: Create an application and mount the OSS volume

  1. In the left-side navigation pane of the cluster management page, choose Workloads > Deployments.

  2. On the Stateless page, click Create From Image.

  3. Configure the parameters for the deployment and click Create.

    The following table describes the key parameters. Keep the default values for other parameters. For more information, see Create a stateless application from a Deployment.

    Basic Information

    • Application Name: Enter a custom name for the deployment. For more information about the format requirements, see the on-screen instructions. Example: oss-test.

    • Number Of Replicas: Configure the number of replicas for the deployment. Example: 2.

    Container Configuration

    • Image Name: Enter the address of the image used to deploy the application. Example: registry.cn-hangzhou.aliyuncs.com/acs-sample/nginx:latest.

    • Required Resources: Set the required vCPU and memory resources. Example: 0.25 vCPU, 0.5 GiB.

    • Volume: Click Add Cloud Storage Claim and configure the parameters.

      • Mount Source: Select the PVC that you created earlier. Example: oss-pvc.

      • Container Path: Enter the container path to which you want to mount the OSS bucket. Example: /data.

  4. Check the status of the application deployment.

    1. On the Stateless page, click the application name.

    2. On the Pods tab, confirm that the pods are in the Running state.

ossfs 2.0 volumes

You can mount statically provisioned ossfs 2.0 volumes only using kubectl. This operation is not supported in the ACS console.

Step 1: Create a PV

  1. Save the following YAML content as oss-pv.yaml.

    apiVersion: v1
    kind: Secret
    metadata:
      name: oss-secret
      namespace: default
    stringData:
      akId: <your AccessKey ID>
      akSecret: <your AccessKey Secret>
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: oss-pv
      labels:
        alicloud-pvname: oss-pv
    spec:
      storageClassName: test 
      capacity:
        storage: 20Gi
      accessModes:
        - ReadWriteMany
      persistentVolumeReclaimPolicy: Retain
      csi:
        driver: ossplugin.csi.alibabacloud.com
        volumeHandle: oss-pv
        nodePublishSecretRef:
          name: oss-secret
          namespace: default
        volumeAttributes:
          fuseType: ossfs2 # Explicitly declares the use of the ossfs 2.0 client
          bucket: "<your OSS Bucket Name>"
          url: "<your OSS Bucket Endpoint>"
          otherOpts: "-o close_to_open=false" # Note: The supported mount parameters are not compatible with the ossfs 1.0 client.
    Note

    The preceding YAML creates a secret and a PV. The secret stores the AccessKey pair to be securely used by the PV. Replace the value of akId with your AccessKey ID and the value of akSecret with your AccessKey secret.

    The PV parameters are described below.

    • alicloud-pvname: The label of the PV. It is used to bind the PVC.

    • storageClassName: Used only to bind the PVC. You do not need to associate an actual StorageClass.

    • storage: The storage capacity of the OSS volume. The capacity of a statically provisioned OSS volume is for declaration purposes only. The actual capacity is not limited and is subject to the amount displayed in the OSS console.

    • accessModes: The access mode.

    • persistentVolumeReclaimPolicy: The reclaim policy.

    • driver: The driver type. It is set to ossplugin.csi.alibabacloud.com, which indicates that the Alibaba Cloud OSS CSI plug-in is used.

    • volumeHandle: The unique identifier of the PV. It must be consistent with metadata.name.

    • nodePublishSecretRef: Obtains the AccessKey pair from the specified secret for authorization.

    • fuseType: Specifies the client to use for the mount. It must be set to ossfs2 to use the ossfs 2.0 client.

    • bucket: The name of the OSS bucket. Replace the value with the actual name of your OSS bucket.

    • url: The endpoint of the OSS bucket. Replace the value with the actual endpoint of your OSS bucket.

      • If the OSS bucket and the ACS cluster are in the same region, use the VPC endpoint, for example, oss-cn-shanghai-internal.aliyuncs.com.

      • If the bucket is region-agnostic or is in a different region from the ACS cluster, use the public endpoint, for example, oss-cn-shanghai.aliyuncs.com.

    • otherOpts: Custom mount options for the OSS volume in the -o *** -o *** format, for example, -o close_to_open=false. The supported options are not compatible with those of the ossfs 1.0 client.

      • close_to_open: Disabled by default. If enabled, the system sends a GetObjectMeta request to OSS each time a file is opened to obtain the latest metadata of the file in OSS. This ensures real-time metadata. However, in scenarios that require reading many small files, frequent metadata queries significantly increase access latency.

    For more information about optional parameters, see ossfs 2.0 mount options.

  2. Run the following command to create the secret and PV:

    kubectl create -f oss-pv.yaml
  3. Check the status of the PV.

    kubectl get pv

    Expected output:

    NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
    oss-pv   20Gi       RWX            Retain           Available           test           <unset>                          9s

Step 2: Create a PVC

  1. Save the following YAML content as oss-pvc.yaml.

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: oss-pvc
    spec:
      storageClassName: test
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 20Gi
      selector:
        matchLabels:
          alicloud-pvname: oss-pv

    The parameters are described below.

    • storageClassName: Used only to bind the PV. You do not need to associate an actual StorageClass. It must be consistent with the spec.storageClassName of the PV.

    • accessModes: The access mode.

    • storage: The storage capacity allocated to the pod. It cannot exceed the capacity of the OSS volume.

    • alicloud-pvname: The label of the PV to bind. It must be consistent with the metadata.labels.alicloud-pvname of the PV.

  2. Run the following command to create the PVC:

    kubectl create -f oss-pvc.yaml
  3. Check the status of the PVC.

    kubectl get pvc

    The following output indicates that the PVC is bound to the PV that you created in Step 1.

    NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
    oss-pvc   Bound    oss-pv   20Gi       RWX            test           <unset>                 6s

Step 3: Create an application and mount the OSS volume

  1. Save the following YAML content as oss-test.yaml.

    The following YAML example creates a deployment with two pods. Both pods request storage resources using the PVC named oss-pvc. The mount path for both is /data.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: oss-test
      labels:
        app: nginx
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: registry.cn-hangzhou.aliyuncs.com/acs-sample/nginx:latest
            ports:
            - containerPort: 80
            volumeMounts:
              - name: pvc-oss
                mountPath: /data
          volumes:
            - name: pvc-oss
              persistentVolumeClaim:
                claimName: oss-pvc
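    Because ossfs 2.0 targets read-only and sequential append-only workloads, you can optionally mount the volume read-only in the pod to guard against unsupported write patterns. The following is a hedged variation of the volumes section in the manifest above; skip it if your pods need to append data to the volume.

    volumes:
      - name: pvc-oss
        persistentVolumeClaim:
          claimName: oss-pvc
          readOnly: true   # mount the ossfs 2.0 volume read-only in this pod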
  2. Run the following command to create the deployment and mount the OSS volume:

    kubectl create -f oss-test.yaml
  3. Check the status of the pods in the deployment.

    kubectl get pod | grep oss-test

    The following example output shows that two pods are created.

    oss-test-****-***a   1/1     Running   0          28s
    oss-test-****-***b   1/1     Running   0          28s
  4. View the mount path.

    The following command is an example. By default, the directory is empty and no output is returned.

    kubectl exec oss-test-****-***a -- ls /data

Verify that the OSS volume can share and persist data

The deployment that you created provisions two pods. The same OSS bucket is mounted to both pods. You can use the following methods to verify that the OSS volume can be used to share and persist data:

  • Create a file in one pod and check whether the file can be accessed from the other pod. If the file can be accessed, data sharing is enabled.

  • Recreate the deployment. Access the OSS volume from a new pod to check whether the original data still exists in the OSS bucket. If the data still exists, data persistence is enabled.

  1. View the pod information.

    kubectl get pod | grep oss-test

    Sample output:

    oss-test-****-***a   1/1     Running   0          40s
    oss-test-****-***b   1/1     Running   0          40s
  2. Verify the shared storage.

    1. Create a file in one of the pods.

      In this example, the pod named oss-test-****-***a is used:

      kubectl exec oss-test-****-***a -- touch /data/test.txt
    2. View the file from the other pod.

      In this example, the pod named oss-test-****-***b is used:

      kubectl exec oss-test-****-***b -- ls /data

      The following output shows that the new file test.txt is shared.

      test.txt
  3. Verify that data persists after the pods are recreated.

    1. Delete and then recreate the deployment.

      kubectl rollout restart deploy oss-test
    2. View the pods and wait for the new pods to be created.

      kubectl get pod | grep oss-test

      Sample output:

      oss-test-****-***c   1/1     Running   0          67s
      oss-test-****-***d   1/1     Running   0          49s
    3. From a new pod, check whether the data in the file system still exists.

      In this example, the pod named oss-test-****-***c is used:

      kubectl exec oss-test-****-***c -- ls /data

      The following output shows that the data in the OSS bucket still exists and can be retrieved from the mount directory in the new pod.

      test.txt
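      You can also confirm from the OSS side that the object was written to the bucket, for example in the OSS console or with the ossutil command-line tool. The following is only a sketch; replace the bucket name with your own, and adjust the prefix if you mounted a subdirectory.

      ossutil ls oss://<your OSS Bucket Name>/test.txt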