
Container Service for Kubernetes:Use FlexVolume to persist data based on OSS

Last Updated:Feb 24, 2025

When a node in your cluster stops running, data in the containers of stateful applications on the node may be lost, which compromises data reliability. To eliminate the risk of data loss, you can use persistent storage to persist data. This topic describes how to use an Object Storage Service (OSS) volume to persist data.

Background information

OSS is a secure, cost-effective, high-capacity, and highly reliable cloud storage service provided by Alibaba Cloud. You can mount an OSS bucket to multiple pods in a Container Service for Kubernetes (ACK) cluster.

Scenarios:

  • Moderate disk I/O requirements.

  • Data sharing across pods, such as configuration files, images, and small video files.

OSS volume mounting procedure:

  1. Create an OSS bucket.

  2. Obtain the AccessKey ID and AccessKey secret of your Alibaba Cloud account.

  3. Create a persistent volume (PV) and persistent volume claim (PVC) with a Secret.

Prerequisites

Usage notes

  • When you upgrade the cluster, the kubelet and the ossfs driver may be restarted. As a result, the mounted OSS directory becomes unavailable, and you must recreate the pods to which the OSS volume is mounted. To automate this, add health check settings to the YAML files of the pods so that the pods are restarted and the OSS volume is remounted when the OSS directory becomes unavailable.

  • The preceding issue is fixed in the latest version of the FlexVolume component.

Create a PV

  1. Run the following command to create the Secret:

    Replace <your AccessKey ID> and <your AccessKey Secret> in the following command with the actual AccessKey ID and AccessKey secret of your Alibaba Cloud account. To obtain the AccessKey pair of your Alibaba Cloud account, go to the ACK console, move your pointer over the user icon and click AccessKey.

    kubectl create secret generic osssecret --from-literal=akId='<your AccessKey ID>' --from-literal=akSecret='<your AccessKey Secret>' --type=alicloud/oss -n default

    osssecret: the name of the Secret. You can specify a custom name.

    akId: the AccessKey ID.

    akSecret: the AccessKey secret.

    --type: the type of Secret. In this example, the value is set to alicloud/oss. The Secret and the pod that uses the Secret must belong to the same namespace.
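
    Kubernetes stores each Secret value base64-encoded in the object's data field. The following local sketch (no cluster required; 'LTAI4Example' is a made-up AccessKey ID, not a real credential) shows the encoding that kubectl create secret applies to each --from-literal value:

    ```shell
    # 'LTAI4Example' is a made-up AccessKey ID, not a real credential.
    printf %s 'LTAI4Example' | base64          # prints: TFRBSTRFeGFtcGxl
    # Decoding recovers the original literal; this is the form in which the
    # value is read back from the Secret's data field.
    printf %s 'TFRBSTRFeGFtcGxl' | base64 -d   # prints: LTAI4Example
    ```

    If you inspect the Secret later, the values you see in its data field are these base64-encoded forms, not the plaintext credentials.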

  2. Use the following pv-oss.yaml file to create a PV:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv-oss
      labels:
        alicloud-pvname: pv-oss
    spec:
      capacity:
        storage: 5Gi
      accessModes:
        - ReadWriteMany
      storageClassName: oss
      flexVolume:
        driver: "alicloud/oss"
        secretRef:
          name: "osssecret"  # Replace the value with the name of the Secret you created in the preceding step. 
        options:
          bucket: "docker"                        # Replace the value with the name of the OSS bucket. 
          path: /path                             # Replace the value with the relative path of the OSS subdirectory that you want to mount. 
          url: "oss-cn-hangzhou.aliyuncs.com"     # Replace the value with the endpoint of the OSS bucket. 
          otherOpts: "-o max_stat_cache_size=0 -o allow_other"   # Replace the value with custom parameter values. 

    Parameters

    • alicloud-pvname: the name of the PV. You can specify the PV name in the selector field of a PVC to bind the PV to the PVC.

    • bucket: the name of the OSS bucket.

    • path: the path relative to the root directory of the OSS bucket that you want to mount. Default value: /. This parameter is supported by csi-plugin 1.14.8.32-c77e277b-aliyun and later.

    • url: the endpoint of the OSS bucket. To obtain the endpoint, perform the following steps:

      1. Log on to the OSS console.

      2. In the left-side navigation pane, click Buckets. On the Buckets page, click the name of the bucket whose endpoint you want to obtain.

      3. In the left-side navigation tree, click Overview.

      4. In the Port section, you can view the endpoint of the bucket.

    • otherOpts: the custom parameters that are used to mount the OSS bucket. The parameters must be in the -o *** -o *** format.
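
    A malformed otherOpts string typically surfaces only as a mount failure when the pod starts, so a quick local format check can save a debugging round trip. The following sketch (the regular expression is an illustration for this check, not part of the FlexVolume driver) verifies that the string is a sequence of -o option tokens:

    ```shell
    opts="-o max_stat_cache_size=0 -o allow_other"
    # Each token must be "-o" followed by an ossfs option name, optionally "=value".
    if printf '%s\n' "$opts" | grep -Eq '^(-o [A-Za-z_]+(=[^ ]+)? ?)+$'; then
      echo "otherOpts format looks valid"
    else
      echo "otherOpts format is invalid"
    fi
    ```

    A string such as "max_stat_cache_size=0" (missing the leading -o) fails the check.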

  3. Run the following command to create the PV:

    kubectl create -f pv-oss.yaml

Verify that the PV is created:

  1. Log on to the ACK console. In the left-side navigation pane, click Clusters.

  2. On the Clusters page, find the cluster that you want to manage and click its name. In the left-side pane, choose Volumes > Persistent Volumes.

  3. On the Persistent Volumes page, you can find the newly created PV.

Create a PVC

Create a PVC for the OSS bucket. Configure the selector parameter of the PVC to select the PV you created. This way, the PV is automatically bound to the PVC after the PVC is created. Set the storageClassName parameter to specify that only a PV of the OSS type can be bound to the PVC.

  1. Create a file named pvc-oss.yaml.

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: pvc-oss
    spec:
      accessModes:
        - ReadWriteMany
      storageClassName: oss
      resources:
        requests:
          storage: 5Gi
      selector:
        matchLabels:
          alicloud-pvname: pv-oss

  2. Run the following command to create a PVC:

    kubectl create -f pvc-oss.yaml

Verify that the PVC is created:

  1. Log on to the ACK console. In the left-side navigation pane, click Clusters.

  2. On the Clusters page, find the cluster that you want to manage and click its name. In the left-side pane, choose Volumes > Persistent Volume Claims.

  3. On the Persistent Volume Claims page, you can find the newly created PVC.
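
The binding above is driven by label matching: the spec.selector.matchLabels of the PVC must be satisfied by the metadata.labels of the PV, together with a matching storageClassName and a compatible access mode. The following local sketch extracts the two values from the relevant manifest lines (the here-docs stand in for pv-oss.yaml and pvc-oss.yaml from this topic) and confirms that they match:

```shell
# Extract the label value from the PV manifest and the selector value from
# the PVC manifest; binding requires them to be equal.
pv_label=$(awk '/alicloud-pvname:/ {print $2; exit}' <<'EOF'
  labels:
    alicloud-pvname: pv-oss
EOF
)
pvc_selector=$(awk '/alicloud-pvname:/ {print $2; exit}' <<'EOF'
  selector:
    matchLabels:
      alicloud-pvname: pv-oss
EOF
)
[ "$pv_label" = "$pvc_selector" ] && echo "PVC selector matches PV label: $pv_label"
```

If the values differ, the PVC stays in the Pending state because no PV satisfies its selector.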

Create an application

  1. Create a file named oss-static.yaml.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: oss-static
      labels:
        app: nginx
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: anolis-registry.cn-zhangjiakou.cr.aliyuncs.com/openanolis/nginx:1.14.1-8.6
            ports:
            - containerPort: 80
            volumeMounts:
              - name: pvc-oss
                mountPath: "/data"
              - name: pvc-oss
                mountPath: "/data1"
            livenessProbe:
              exec:
                command:
                - sh
                - -c
                - cd /data
              initialDelaySeconds: 30
              periodSeconds: 30
          volumes:
            - name: pvc-oss
              persistentVolumeClaim:
                claimName: pvc-oss
    Note

    You can set the livenessProbe field to configure health check settings. For more information, see OSS volumes.
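
    An exec probe reports healthy when its command exits with code 0. cd /data exits non-zero once the ossfs mount is gone, so repeated probe failures cause the kubelet to restart the container, which remounts the volume. A local illustration of the exit-code behavior (the directory names are placeholders, not paths from this topic):

    ```shell
    # A reachable directory: `cd` exits 0, so the probe would report healthy.
    sh -c 'cd /tmp' && echo "probe ok: container keeps running"
    # An unreachable mount point: `cd` exits non-zero; after repeated failures
    # the kubelet would restart the container.
    sh -c 'cd /no-such-mount' 2>/dev/null || echo "probe failed: container would be restarted"
    ```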

  2. Run the following command to create a Deployment:

    kubectl create -f oss-static.yaml

Verify that the Deployment is created:

  1. Log on to the ACK console. In the left-side navigation pane, click Clusters.

  2. On the Clusters page, find the cluster that you want to manage and click its name. In the left-side pane, choose Workloads > Deployments.

  3. On the Deployments page, you can find the newly created Deployment.

Verify data persistence

  1. Run the following command to query the application pods:

    kubectl get pod

    Expected output:

    NAME                             READY   STATUS    RESTARTS   AGE
    oss-static-66fbb85b67-dqbl2      1/1     Running   0          1h
  2. Run the following command to query files in the /data path of the pod:

    kubectl exec oss-static-66fbb85b67-dqbl2 -- ls /data | grep tmpfile
    Note

    No output is returned, which indicates that no file named tmpfile exists in the /data path.

  3. Run the following command to create a file named tmpfile in the /data path:

    kubectl exec oss-static-66fbb85b67-dqbl2 -- touch /data/tmpfile
  4. Run the following command to query files in the /data path of the pod:

    kubectl exec oss-static-66fbb85b67-dqbl2 -- ls /data | grep tmpfile

    Expected output:

    tmpfile
  5. Run the following command to delete the pod named oss-static-66fbb85b67-dqbl2:

    kubectl delete pod oss-static-66fbb85b67-dqbl2

    Expected output:

    pod "oss-static-66fbb85b67-dqbl2" deleted
  6. Open another terminal window and run the following command to watch the pod being deleted and recreated:

    kubectl get pod -w -l app=nginx

    Expected output:

    NAME                             READY   STATUS    RESTARTS   AGE
    oss-static-66fbb85b67-dqbl2      1/1     Running   0          78m
    oss-static-66fbb85b67-dqbl2   1/1   Terminating   0     78m
    oss-static-66fbb85b67-zlvmw   0/1   Pending   0     <invalid>
    oss-static-66fbb85b67-zlvmw   0/1   Pending   0     <invalid>
    oss-static-66fbb85b67-zlvmw   0/1   ContainerCreating   0     <invalid>
    oss-static-66fbb85b67-dqbl2   0/1   Terminating   0     78m
    oss-static-66fbb85b67-dqbl2   0/1   Terminating   0     78m
    oss-static-66fbb85b67-dqbl2   0/1   Terminating   0     78m
    oss-static-66fbb85b67-zlvmw   1/1   Running   0     <invalid>
  7. Run the following command to query the name of the recreated pod:

    kubectl get pod

    Expected output:

    NAME                             READY   STATUS    RESTARTS   AGE
    oss-static-66fbb85b67-zlvmw      1/1     Running   0          40s
  8. Run the following command to check whether the tmpfile file still exists in the /data path. If tmpfile still exists, data is persisted in the OSS volume.

    kubectl exec oss-static-66fbb85b67-zlvmw -- ls /data | grep tmpfile

    Expected output:

    tmpfile
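
The file survives because the OSS bucket lives outside the pod lifecycle: deleting a pod removes only the pod's own filesystem, not the mounted bucket. A rough local analogy (a plain directory stands in for the bucket; no cluster involved):

```shell
# The directory below stands in for the OSS bucket: external storage that
# no single "pod" owns, so it survives pod deletion.
BUCKET=/tmp/fake-oss-bucket
mkdir -p "$BUCKET"
mkdir -p /tmp/pod1-rootfs    # the pod's own, ephemeral filesystem
touch "$BUCKET/tmpfile"      # pod 1 writes a file to the mounted volume
rm -rf /tmp/pod1-rootfs      # pod 1 is deleted along with its local filesystem
ls "$BUCKET" | grep tmpfile  # a replacement pod still sees: tmpfile
```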