Object Storage Service (OSS) is a secure, cost-effective, and high-reliability cloud storage service provided by Alibaba Cloud. OSS allows you to store a large amount of data in the cloud. This topic describes how to mount an OSS bucket as a statically provisioned volume in the console or by using kubectl.

Prerequisites

Background information

OSS is a secure, cost-effective, high-capacity, and high-reliability cloud storage service provided by Alibaba Cloud. You can mount an OSS bucket to multiple pods of an ACK cluster. OSS is applicable to the following scenarios:
  • Workloads with average disk I/O requirements.
  • Sharing of data, including configuration files, images, and small video files.

Precautions

  • OSS buckets do not support dynamically provisioned persistent volumes (PVs).
  • We recommend that you do not use OSS buckets across accounts.
  • kubelet and the OSSFS driver may be restarted when the ACK cluster is upgraded. As a result, the mounted OSS directory becomes unavailable. In this case, you must recreate the pods to which the OSS volume is mounted. You can add health check settings in the YAML file to restart the pods and remount the OSS volume when the OSS directory becomes unavailable.
    Note If the csi-plugin and csi-provisioner that you use are V1.18.8.45 or later, the preceding issue does not occur.
  • If the securityContext.fsGroup parameter is set in the application template, kubelet performs chmod or chown operations after the volume is mounted, which increases the time required to mount the volume.
    Note For more information about how to speed up the mounting process when the securityContext.fsGroup parameter is set, see Why does it require a long period of time to mount an OSS volume?.
  • We recommend that you store no more than 1,000 files in the mounted directory. If the number of files is too large, the OSSFS driver consumes an excessive amount of memory, which may cause out of memory (OOM) errors in pods.
  • OSSFS is suitable for concurrent read operations on persistent volume claims (PVCs) and PVs. However, OSSFS cannot be used for concurrent write operations on PVCs and PVs. If you use OSSFS for concurrent write operations, data consistency cannot be ensured.
  • If you want to upload a file larger than 10 MB to OSS, you can split the file into multiple parts and upload the parts separately. If a multipart upload task is interrupted, the parts that were already uploaded remain in the bucket and incur storage fees. Delete the parts that are no longer needed to avoid additional fees.
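As noted in the precautions, a liveness probe can detect an unavailable OSS mount and restart the pod so that the volume is remounted. A minimal sketch of such a probe (the mount path and timings are illustrative; a complete application template appears later in this topic):

```yaml
livenessProbe:
  exec:
    # cd fails if the OSS mount at /data is unavailable,
    # which causes kubelet to restart the container.
    command: ["sh", "-c", "cd /data"]
  initialDelaySeconds: 30
  periodSeconds: 30
```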

Mount an OSS bucket as a statically provisioned volume in the console

Step 1: Create a PV

  1. Log on to the ACK console and click Clusters in the left-side navigation pane.
  2. On the Clusters page, click the name of a cluster and choose Volumes > Persistent Volumes in the left-side navigation pane.
  3. In the upper-right corner of the Persistent Volumes page, click Create.
  4. In the Create PV dialog box, set the following parameters.
    • PV Type: You can select Cloud Disk, NAS, or OSS. In this example, OSS is selected.
    • Volume Name: The name of the PV that you want to create. The name must be unique in the cluster. In this example, pv-oss is entered.
    • Volume Plug-in: You can select FlexVolume or CSI. In this example, CSI is selected.
      Note FlexVolume is deprecated and is not supported by clusters that run Kubernetes 1.20 or later.
    • Capacity: The capacity of the PV.
    • Access Mode: Default value: ReadWriteMany.
    • Access Certificate: Select a Secret that is used to access the OSS bucket.
      • Select Existing Secret: Select a namespace and a Secret.
      • Create Secret: Set Namespace, Name, AccessKey ID, and AccessKey Secret.
    • Optional Parameters: You can enter custom mount options in the format of -o *** -o ***.
    • Bucket ID: The name of the OSS bucket that you want to mount. Click Select Bucket. In the dialog box that appears, find the OSS bucket that you want to mount and click Select.
    • Endpoint: Select the endpoint of the OSS bucket:
      • If the OSS bucket and the ECS instance belong to different regions, select Public Endpoint.
      • If the OSS bucket and the ECS instance belong to the same region, select Internal Endpoint.
    • Label: Add labels to the PV.
  5. Click Create.

Step 2: Create a PVC

  1. Log on to the ACK console and click Clusters in the left-side navigation pane.
  2. On the Clusters page, click the name of a cluster and choose Volumes > Persistent Volume Claims in the left-side navigation pane.
  3. In the upper-right corner of the Persistent Volume Claims page, click Create.
  4. In the Create PVC dialog box, set the following parameters.
    • PVC Type: You can select Cloud Disk, NAS, or OSS. In this example, OSS is selected.
    • Name: The name of the PVC. The name must be unique in the cluster.
    • Allocation Mode: In this example, Existing Volumes is selected.
      Note If no PV is created, you can set Allocation Mode to Create Volume and configure the required parameters to create a PV. For more information, see Step 1: Create a PV.
    • Existing Volumes: Click Select PV. Find the PV that you want to use and click Select in the Actions column.
    • Capacity: The capacity of the PV that you created.
      Note The capacity of the PV that you created cannot exceed the capacity of the OSS bucket that is associated with the PV.
  5. Click Create.
    After the PVC is created, you can find it in the PVC list. The PV is bound to the PVC.

Step 3: Deploy an application

  1. Log on to the ACK console and click Clusters in the left-side navigation pane.
  2. On the Clusters page, click the name of a cluster and choose Workloads > Deployments in the left-side navigation pane.
  3. In the upper-right corner of the Deployments page, click Create from Image.
  4. Configure the application parameters.
    This example shows how to configure the volume parameters. For more information about other parameters, see Create a stateless application by using a Deployment.
    You can add local volumes and cloud volumes.
    • Add Local Storage: You can select HostPath, ConfigMap, Secret, or EmptyDir from the PV Type drop-down list. Then, set the Mount Source and Container Path parameters to mount the volume to a container path. For more information, see Volumes.
    • Add PVC: You can add cloud volumes.
    In this example, an OSS volume is selected and mounted to the /tmp path in the container.
  5. Set other parameters and click Create.
    After the application is created, you can use the OSS volume to store application data.

Mount an OSS bucket as a statically provisioned volume by using kubectl

Step 1: Create a statically provisioned PV and a PVC

You can create a statically provisioned PV and a PVC by using the following methods:

  • Method 1: Create a statically provisioned PV and a PVC by using a Secret

    Use a Secret to provide your AccessKey pair to the CSI plug-in.

  • Method 2: Specify an AccessKey pair when you create a PV and a PVC

    Specify an AccessKey pair in the PV configurations.

  • Method 3: Configure token-based authentication when you create a PV and a PVC

    Configure token-based authentication in the PV configurations.

Method 1: Create a statically provisioned PV and a PVC by using a Secret

  1. Create a Secret.

    The following YAML template provides an example on how to specify your AccessKey pair in a Secret:

    apiVersion: v1
    kind: Secret
    metadata:
      name: oss-secret
      namespace: default
    stringData:
      akId: <yourAccessKey ID>
      akSecret: <yourAccessKey Secret>
    Note The Secret must be created in the namespace where the application that uses the PV is deployed.

    Replace the values of akId and akSecret with your AccessKey ID and AccessKey secret.
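For reference, stringData accepts plaintext values, whereas the alternative data field requires base64-encoded values. The following sketch renders an equivalent Secret manifest that uses the data field, with obviously fake placeholder credentials:

```shell
# Placeholder credentials -- replace with your own AccessKey pair.
AK_ID='LTAI****'
AK_SECRET='secret****'

# Render a Secret manifest that uses the base64-encoded "data" field
# instead of the plaintext "stringData" field.
cat <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: oss-secret
  namespace: default
data:
  akId: $(printf '%s' "$AK_ID" | base64)
  akSecret: $(printf '%s' "$AK_SECRET" | base64)
EOF
```

You can pipe the rendered manifest to kubectl apply -f - to create the Secret.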

  2. Run the following command to create a statically provisioned PV:
    kubectl create -f pv-oss.yaml

    The following pv-oss.yaml file is used to create the statically provisioned PV:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv-oss
      labels:
        alicloud-pvname: pv-oss
    spec:
      capacity:
        storage: 5Gi
      accessModes:
        - ReadWriteMany
      persistentVolumeReclaimPolicy: Retain
      csi:
        driver: ossplugin.csi.alibabacloud.com
        volumeHandle: pv-oss # The specified value must be the same as the name of the PV. 
        nodePublishSecretRef:
          name: oss-secret
          namespace: default
        volumeAttributes:
          bucket: "oss"
          url: "oss-cn-hangzhou.aliyuncs.com"
          otherOpts: "-o max_stat_cache_size=0 -o allow_other"
          path: "/"
    • name: The name of the PV.
    • labels: The labels that are added to the PV.
    • storage: The available storage of the OSS bucket.
    • accessModes: The access mode of the PV.
    • persistentVolumeReclaimPolicy: The reclaim policy of the PV.
    • driver: The type of driver. In this example, the parameter is set to ossplugin.csi.alibabacloud.com, which indicates that the OSS CSI plug-in is used.
    • nodePublishSecretRef: The Secret from which the AccessKey pair is retrieved when the OSS bucket is mounted as a PV.
    • volumeHandle: The name of the PV.
    • bucket: The OSS bucket that you want to mount.
    • url: The endpoint of the OSS bucket to be mounted.
      • If the node and the OSS bucket belong to the same region, use the internal endpoint of the OSS bucket.
      • If the node and the OSS bucket belong to different regions, use the public endpoint of the OSS bucket.
      • You cannot use a virtual private cloud (VPC) endpoint.
      Endpoint formats:
      • Internal endpoint: oss-{{regionName}}-internal.aliyuncs.com
      • Public endpoint: oss-{{regionName}}.aliyuncs.com
    • otherOpts: You can enter custom mount options in the format of -o *** -o ***.
    • path: The path relative to the root directory of the OSS bucket to be mounted. The default value is /. This parameter is supported by csi-plugin 1.14.8.32-c77e277b-aliyun and later.
    1. Log on to the ACK console.
    2. In the left-side navigation pane of the ACK console, click Clusters.
    3. On the Clusters page, find the cluster that you want to manage. Then, click the name of the cluster or click Details in the Actions column.
    4. In the left-side navigation pane of the details page, choose Volumes > Persistent Volumes. You can find the created PV on the Persistent Volumes page.
  3. Run the following command to create a PVC:
    kubectl create -f pvc-oss.yaml

    The following pvc-oss.yaml file is used to create the PVC:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc-oss
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 5Gi
      selector:
        matchLabels:
          alicloud-pvname: pv-oss
    • name: The name of the PVC.
    • accessModes: The access mode of the PVC.
    • storage: The capacity claimed by the PVC. The claimed capacity cannot exceed the capacity of the PV that is bound to the PVC.
    • alicloud-pvname: The label that is used to select and bind a PV to the PVC. The label must be the same as that of the PV to be bound to the PVC.
    In the left-side navigation pane, choose Volumes > Persistent Volume Claims. On the Persistent Volume Claims page, you can find the created PVC.
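The otherOpts value is passed through to ossfs as mount options. A commonly used combination looks like the following (option names follow ossfs; verify them against the ossfs documentation for your version):

```yaml
volumeAttributes:
  # max_stat_cache_size=0 disables the metadata cache so that changes made
  # by other clients become visible; allow_other lets non-root processes in
  # the container access the mount point; umask relaxes file permissions.
  otherOpts: "-o max_stat_cache_size=0 -o allow_other -o umask=0022"
```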

Method 2: Specify an AccessKey pair when you create a PV and a PVC

Method 1: Create a statically provisioned PV and a PVC by using a Secret describes how to use a Secret to provide your AccessKey pair to the CSI plug-in. You can also specify an AccessKey pair directly in the PV configuration.

Run the following command to create a statically provisioned PV:

kubectl create -f pv-accesskey.yaml

The following pv-accesskey.yaml file provides an example on how to specify an AccessKey pair in the PV configurations:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-oss
  labels:
    alicloud-pvname: pv-oss
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: ossplugin.csi.alibabacloud.com
    volumeHandle: pv-oss # The specified value must be the same as the name of the PV. 
    volumeAttributes:
      bucket: "oss"
      url: "oss-cn-hangzhou.aliyuncs.com"
      otherOpts: "-o max_stat_cache_size=0 -o allow_other"
      akId: "***"
      akSecret: "***"

Method 3: Configure token-based authentication when you create a PV and a PVC

In addition to Method 1: Create a statically provisioned PV and a PVC by using a Secret and Method 2: Specify an AccessKey pair when you create a PV and a PVC, you can run the following command to configure token-based authentication:

kubectl create -f pv-sts.yaml

The following pv-sts.yaml file provides an example on how to configure token-based authentication:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-oss
  labels:
    alicloud-pvname: pv-oss
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: ossplugin.csi.alibabacloud.com
    volumeHandle: pv-oss # The specified value must be the same as the name of the PV. 
    volumeAttributes:
      bucket: "oss"
      url: "oss-cn-hangzhou.aliyuncs.com"
      otherOpts: "-o max_stat_cache_size=0 -o allow_other"
      authType: "sts"

Step 2: Create an application

Create an application named oss-static and mount the PVC to the application.

Run the following command to deploy the application from the oss-static.yaml file:

kubectl create -f oss-static.yaml

The following oss-static.yaml file is used to create the application:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: oss-static
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
          - name: pvc-oss
            mountPath: "/data"
          - name: pvc-oss
            mountPath: "/data1"
        livenessProbe:
          exec:
            command:
            - sh
            - -c
            - cd /data
          initialDelaySeconds: 30
          periodSeconds: 30
      volumes:
        - name: pvc-oss
          persistentVolumeClaim:
            claimName: pvc-oss
  • livenessProbe: configures health checks. For more information, see OSS volume overview.
  • mountPath: the path to which the OSS bucket is mounted in the container.
  • claimName: the name of the PVC, which is used to associate the application with the PVC.
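Because OSSFS cannot ensure data consistency for concurrent writes (see Precautions), replicas that only need to read shared data can mount the volume read-only, for example:

```yaml
        volumeMounts:
          - name: pvc-oss
            mountPath: "/data"
            readOnly: true   # prevents concurrent writes from multiple replicas
```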

Verify that data can be persisted in the OSS bucket

  1. View the pod that runs the oss-static application and the files in the OSS bucket.
    1. Run the following command to query the pod that runs the oss-static application:
      kubectl get pod

      Expected output:

      NAME                             READY   STATUS    RESTARTS   AGE
      oss-static-66fbb85b67-d****      1/1     Running   0          1h
    2. Run the following command to query files in the /data path:
       kubectl exec oss-static-66fbb85b67-d**** -- ls /data | grep tmpfile
      No output is returned. This indicates that no file is stored in the /data path.
  2. Run the following command to create a file named tmpfile in the /data path:
    kubectl exec oss-static-66fbb85b67-d**** -- touch /data/tmpfile
  3. Run the following command to query files in the /data path:
    kubectl exec oss-static-66fbb85b67-d**** -- ls /data | grep tmpfile

    Expected output:

    tmpfile
  4. Run the following command to delete the oss-static-66fbb85b67-d**** pod:
    kubectl delete pod oss-static-66fbb85b67-d****

    Expected output:

    pod "oss-static-66fbb85b67-d****" deleted
  5. Open another terminal and run the following command to watch how the pod is deleted and recreated:
    kubectl get pod -w -l app=nginx

    Expected output:

    NAME                            READY   STATUS            RESTARTS   AGE
    nginx-static-78c7dcb9d7-g****   2/2     Running           0          50s
    nginx-static-78c7dcb9d7-g****   2/2     Terminating       0          72s
    nginx-static-78c7dcb9d7-h****   0/2     Pending           0          0s
    nginx-static-78c7dcb9d7-h****   0/2     Pending           0          0s
    nginx-static-78c7dcb9d7-h****   0/2     Init:0/1          0          0s
    nginx-static-78c7dcb9d7-g****   0/2     Terminating       0          73s
    nginx-static-78c7dcb9d7-h****   0/2     Init:0/1          0          5s
    nginx-static-78c7dcb9d7-g****   0/2     Terminating       0          78s
    nginx-static-78c7dcb9d7-g****   0/2     Terminating       0          78s
    nginx-static-78c7dcb9d7-h****   0/2     PodInitializing   0          6s
    nginx-static-78c7dcb9d7-h****   2/2     Running           0          8s
  6. Verify that the file still exists after the pod is deleted.
    1. Run the following command to query the pod that is recreated:
      kubectl get pod

      Expected output:

      NAME                             READY   STATUS    RESTARTS   AGE
      oss-static-66fbb85b67-z****      1/1     Running   0          40s
    2. Run the following command to query files in the /data path:
      kubectl exec oss-static-66fbb85b67-z**** -- ls /data | grep tmpfile

      Expected output:

      tmpfile
      The tmpfile file still exists in the /data path. This indicates that data is persisted in the OSS bucket.