
Container Service for Kubernetes:Mount a statically provisioned OSS volume

Last Updated: Oct 31, 2023

Object Storage Service (OSS) is a secure, cost-effective, and highly reliable cloud storage service provided by Alibaba Cloud. OSS allows you to store a large amount of data in the cloud. This topic describes how to mount an OSS bucket as a statically provisioned volume in the ACK console or by using kubectl.

Prerequisites

Background information

OSS is a secure, cost-effective, high-capacity, and highly reliable cloud storage service provided by Alibaba Cloud. You can mount an OSS bucket to multiple pods of an ACK cluster. OSS is applicable to the following scenarios:

  • Scenarios with moderate disk I/O requirements.

  • Sharing of data, including configuration files, images, and small video files.

Usage notes

  • OSS buckets do not support dynamically provisioned persistent volumes (PVs). We recommend that you do not use OSS buckets across accounts.

  • kubelet and the OSSFS driver may be restarted when the ACK cluster is updated. As a result, the mounted OSS directory becomes unavailable. In this case, you must recreate the pods to which the OSS volume is mounted. You can add health check settings in the YAML file to restart the pods and remount the OSS volume when the OSS directory becomes unavailable.

    Note

    If the csi-plugin and csi-provisioner that you use are V1.18.8.45 or later, the preceding issue does not occur.

  • If the securityContext.fsGroup parameter is set in the application template, kubelet performs the chmod or chown operation after the volume is mounted, which increases the time required to mount the volume.

    Note

    For more information about how to speed up the mounting process when the securityContext.fsGroup parameter is set, see Why does it require a long period of time to mount an OSS volume?

  • When you use OSSFS to perform List operations, HTTP requests are sent to OSS to retrieve the metadata of the requested files. If the listed directory contains large numbers of files, such as more than 1,000 files (the actual number depends on the memory of the node), OSSFS will occupy large amounts of system memory. As a result, Out of Memory (OOM) errors may occur in pods. To resolve this problem, divide the directory or mount a subdirectory in the OSS bucket.

  • OSSFS is suited to concurrent read scenarios. We recommend that you set the access mode of the OSS volume to ReadOnlyMany. To handle write operations, we recommend that you use the OSS SDK or ossutil to split reads and writes.

    Note

    When you use OSSFS to write data to an OSS volume whose access mode is ReadWriteMany, take note of the following items:

    • OSSFS cannot guarantee the consistency of data written by concurrent write operations.

    • When the OSS volume is mounted to a pod, if you log on to the pod or the host of the pod and delete or modify a file in the mounted path, the source file in the OSS bucket is also deleted or modified. To avoid accidentally deleting important data, you can enable version control for the OSS bucket. For more information, see Overview.

  • If you want to upload a file larger than 10 MB to OSS, you can split the file into multiple parts and separately upload the parts. To avoid incurring additional storage fees if a multipart upload task is interrupted, you can configure a lifecycle rule for the bucket that automatically deletes the parts of incomplete multipart uploads, or use ossutil to list and delete the parts that are no longer needed.

Procedure

Use the console or kubectl to mount a statically provisioned OSS volume.

Use the console to mount a statically provisioned OSS volume

Step 1: Create a RAM user that has OSS permissions and obtain the AccessKey pair of the RAM user

  1. Create a Resource Access Management (RAM) user. For more information, see Create a RAM user.

  2. Create the following custom policies to grant OSS permissions to the RAM user. For more information, see Create custom policies.

    Replace mybucket in the following policies with the name of your OSS bucket.

    • Policy that provides read-only permissions on OSS


      {
          "Statement": [
              {
                  "Action": [
                      "oss:Get*",
                      "oss:List*"
                  ],
                  "Effect": "Allow",
                  "Resource": [
                      "acs:oss:*:*:mybucket",
                      "acs:oss:*:*:mybucket/*"
                  ]
              }
          ],
          "Version": "1"
      }
    • Policy that provides read and write permissions on OSS


      {
          "Statement": [
              {
                  "Action": "oss:*",
                  "Effect": "Allow",
                  "Resource": [
                      "acs:oss:*:*:mybucket",
                      "acs:oss:*:*:mybucket/*"
                  ]
              }
          ],
          "Version": "1"
      }
  3. Optional. If the objects in the OSS bucket are encrypted by using a customer master key (CMK) ID in Key Management Service (KMS), you need to grant KMS permissions to the RAM user. For more information, see Encrypt an OSS volume.

  4. Grant OSS permissions to the RAM user. For more information, see Grant permissions to RAM users.

  5. Create an AccessKey pair for the RAM user. For more information, see Create an AccessKey pair.

Step 2: Create a PV

  1. Log on to the ACK console. In the left-side navigation pane, click Clusters.

  2. On the Clusters page, click the name of the cluster that you want to manage and choose Volumes > Persistent Volumes in the left-side navigation pane.

  3. In the upper-right corner of the Persistent Volumes page, click Create.

  4. In the Create PV dialog box, configure the following parameters.

    • PV Type: You can select Cloud Disk, NAS, or OSS. In this example, OSS is selected.

    • Volume Name: The name of the PV that you want to create. The name must be unique in the cluster. In this example, pv-oss is entered.

    • Volume Plug-in: You can select FlexVolume or CSI. In this example, CSI is selected.

      Note

      FlexVolume is deprecated and is not supported by clusters that run Kubernetes 1.20 or later.

    • Capacity: The capacity of the PV.

    • Access Mode: You can select ReadOnlyMany or ReadWriteMany. The default value is ReadOnlyMany. If you select ReadOnlyMany, OSSFS mounts the OSS bucket in read-only mode.

    • Access Certificate: Select a Secret that is used to access the OSS bucket. In this example, the AccessKey ID and AccessKey secret obtained in Step 1 are used.

      • Select Existing Secret: You must specify Namespace and Secret.

      • Create Secret: You must specify Namespace, Name, AccessKey ID, and AccessKey Secret.

    • Optional Parameters: You can configure custom parameters in the -o *** -o *** format for the OSS volume, such as -o umask=022 -o max_stat_cache_size=0 -o allow_other.

      • umask: modifies the permission mask of files in OSSFS. For example, if you set umask=022, the permissions on files in OSSFS change to 755. By default, files uploaded by using the OSS SDK or console have 640 permissions in OSSFS. Therefore, we recommend that you use the umask option when you want to split reads and writes.

      • max_stat_cache_size: specifies the maximum number of objects whose metadata can be cached. Metadata caching can accelerate List operations. However, if you modify files by using methods other than OSSFS, such as the OSS console, SDK, or ossutil, the metadata of the files may not be updated in real time.

      • allow_other: allows other users to access the mounted directory. However, these users cannot access the files in the directory.

      For more information about other parameters, see Common options.

    • Bucket ID: The name of the OSS bucket that you want to mount. Click Select Bucket. In the dialog box that appears, find the OSS bucket that you want to mount and click Select.

    • Endpoint: Select the endpoint of the OSS bucket:

      • If the OSS bucket and the ECS instance belong to different regions, select Public Endpoint.

      • If the OSS bucket and the ECS instance belong to the same region, select Internal Endpoint.

    • Label: Add labels to the PV.

  5. After you complete the configuration, click Create.
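
    Optionally, if you have kubectl access to the cluster, you can confirm the result from the CLI. This is a minimal check that assumes the PV name pv-oss used in this example:

    kubectl get pv pv-oss

    The PV is displayed with the Available status until it is bound by a PVC.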

Step 3: Create a PVC

  1. Log on to the ACK console. In the left-side navigation pane, click Clusters.

  2. On the Clusters page, click the name of the cluster that you want to manage and choose Volumes > Persistent Volume Claims in the left-side navigation pane.

  3. In the upper-right corner of the Persistent Volume Claims page, click Create.

  4. In the Create PVC dialog box, set the following parameters.

    • PVC Type: You can select Cloud Disk, NAS, or OSS. In this example, OSS is selected.

    • Name: The name of the PVC. The name must be unique in the cluster. In this example, csi-oss-pvc is entered.

    • Allocation Mode: In this example, Existing Volumes is selected.

      Note

      If no PV is created, you can set Allocation Mode to Create Volume and configure the required parameters to create a PV. For more information, see Step 2: Create a PV.

    • Existing Volumes: Click Select PV. Find the PV that you want to use and click Select in the Actions column.

    • Capacity: The capacity claimed by the PVC.

      Note

      The claimed capacity cannot exceed the capacity of the PV that is bound to the PVC.

  5. Click Create.

    After the PVC is created, you can find the PVC named csi-oss-pvc in the PVC list. The PVC is bound to the PV.

Step 4: Create an application

  1. Log on to the ACK console. In the left-side navigation pane, click Clusters.

  2. On the Clusters page, click the name of the cluster that you want to manage and choose Workloads > Deployments in the left-side navigation pane.

  3. In the upper-right corner of the Deployments page, click Create from Image.

  4. Configure application parameters.

    This example shows how to configure the volume parameters. For more information about other parameters, see Create a stateless application by using a Deployment.

    You can add local volumes and cloud volumes.

    • Add Local Storage: You can select HostPath, ConfigMap, Secret, or EmptyDir from the PV Type drop-down list. Then, set the Mount Source and Container Path parameters to mount the volume to a container path. For more information, see Volumes.

    • Add PVC: You can add cloud volumes.

    In this example, an OSS volume is selected and mounted to the /tmp path in the container.

  5. Set other parameters and click Create.

    After the application is created, you can use the volume to store application data.
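
    Optionally, you can verify that the OSS bucket is mounted by listing the mount path inside one of the application pods. This is a minimal sketch; the pod name is a placeholder, and /tmp is the container path used in this example:

    kubectl exec <pod-name> -- ls /tmp

    If the command lists the objects in the OSS bucket, or returns no error for an empty bucket, the volume is mounted.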

Use kubectl to mount a statically provisioned OSS volume

Step 1: Create a RAM user that has OSS permissions and obtain the AccessKey pair of the RAM user

  1. Create a Resource Access Management (RAM) user. For more information, see Create a RAM user.

  2. Create the following custom policies to grant OSS permissions to the RAM user. For more information, see Create custom policies.

    Replace mybucket in the following policies with the name of your OSS bucket.

    • Policy that provides read-only permissions on OSS


      {
          "Statement": [
              {
                  "Action": [
                      "oss:Get*",
                      "oss:List*"
                  ],
                  "Effect": "Allow",
                  "Resource": [
                      "acs:oss:*:*:mybucket",
                      "acs:oss:*:*:mybucket/*"
                  ]
              }
          ],
          "Version": "1"
      }
    • Policy that provides read and write permissions on OSS


      {
          "Statement": [
              {
                  "Action": "oss:*",
                  "Effect": "Allow",
                  "Resource": [
                      "acs:oss:*:*:mybucket",
                      "acs:oss:*:*:mybucket/*"
                  ]
              }
          ],
          "Version": "1"
      }
  3. Optional. If the objects in the OSS bucket are encrypted by using a customer master key (CMK) ID in Key Management Service (KMS), you need to grant KMS permissions to the RAM user. For more information, see Encrypt an OSS volume.

  4. Grant OSS permissions to the RAM user. For more information, see Grant permissions to RAM users.

  5. Create an AccessKey pair for the RAM user. For more information, see Create an AccessKey pair.

Step 2: Create a statically provisioned PV and a PVC

You can use the following methods to create a statically provisioned PV and a PVC.

  • Method 1: Use a Secret

    Use a Secret to provide your AccessKey pair to the CSI plug-in.

  • Method 2: Specify an AccessKey pair when you create a PV and a PVC

    Specify an AccessKey pair in the PV configuration.

Method 1: Use a Secret

  1. Create a Secret.

    The following YAML template provides an example on how to specify your AccessKey pair in a Secret:

    apiVersion: v1
    kind: Secret
    metadata:
      name: oss-secret
      namespace: default
    stringData:
      akId: <yourAccessKey ID>
      akSecret: <yourAccessKey Secret>
    Note

    The Secret must be created in the namespace where the application that uses the PV is deployed.

    Replace the values of akId and akSecret with the AccessKey ID and AccessKey secret that you obtained in Step 1.
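
    Alternatively, you can create the same Secret with a single command instead of a YAML file. This is an equivalent sketch that assumes the default namespace and the akId and akSecret key names shown in the preceding template:

    kubectl create secret generic oss-secret \
      --namespace default \
      --from-literal=akId=<yourAccessKeyID> \
      --from-literal=akSecret=<yourAccessKeySecret>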

  2. Run the following command to create a statically provisioned PV:

    kubectl create -f pv-oss.yaml

    The following pv-oss.yaml file is used to create the statically provisioned PV:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv-oss
      labels:
        alicloud-pvname: pv-oss
    spec:
      capacity:
        storage: 5Gi
      accessModes:
        - ReadOnlyMany
      persistentVolumeReclaimPolicy: Retain
      csi:
        driver: ossplugin.csi.alibabacloud.com
        volumeHandle: pv-oss # The specified value must be the same as the name of the PV. 
        nodePublishSecretRef:
          name: oss-secret
          namespace: default
        volumeAttributes:
          bucket: "oss"
          url: "oss-cn-hangzhou.aliyuncs.com"
          otherOpts: "-o umask=022 -o max_stat_cache_size=0 -o allow_other"
          path: "/"

    The following list describes the parameters in the template:

    • name: the name of the PV.

    • labels: the labels that are added to the PV.

    • storage: the available storage of the OSS bucket.

    • accessModes: the access mode. Valid values: ReadOnlyMany and ReadWriteMany. If you select ReadOnlyMany, OSSFS mounts the OSS bucket in read-only mode.

    • persistentVolumeReclaimPolicy: the reclaim policy of the PV.

    • driver: the type of driver. In this example, the parameter is set to ossplugin.csi.alibabacloud.com, which indicates that the OSS CSI plug-in is used.

    • nodePublishSecretRef: the Secret from which the AccessKey pair is retrieved when the OSS bucket is mounted as a PV.

    • volumeHandle: the name of the PV.

    • bucket: the OSS bucket that you want to mount.

    • url: the endpoint of the OSS bucket to be mounted.

      • If the node and the OSS bucket belong to the same region, use the internal endpoint of the OSS bucket.

      • If the node and the OSS bucket belong to different regions, use the public endpoint of the OSS bucket.

      • You cannot use a virtual private cloud (VPC) endpoint.

      Endpoint formats:

      • Internal endpoint: oss-{{regionName}}-internal.aliyuncs.com

      • Public endpoint: oss-{{regionName}}.aliyuncs.com

    • otherOpts: custom parameters in the -o *** -o *** format for the OSS volume, such as -o umask=022 -o max_stat_cache_size=0 -o allow_other.

      • umask: modifies the permission mask of files in OSSFS. For example, if you set umask=022, the permissions on files in OSSFS change to 755. By default, files uploaded by using the OSS SDK or console have 640 permissions in OSSFS. Therefore, we recommend that you use the umask option when you want to split reads and writes.

      • max_stat_cache_size: specifies the maximum number of objects whose metadata can be cached. Metadata caching can accelerate List operations. However, if you modify files by using methods other than OSSFS, such as the OSS console, SDK, or ossutil, the metadata of the files may not be updated in real time.

      • allow_other: allows other users to access the mounted directory. However, these users cannot access the files in the directory.

      For more information about other parameters, see Common options.

    • path: the path relative to the root directory of the OSS bucket to be mounted. The default value is /. This parameter is supported by csi-plugin 1.14.8.32-c77e277b-aliyun and later.
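
    For example, to mount only a subdirectory of the bucket, which also helps avoid the OOM issue described in the usage notes when a directory contains large numbers of files, set path to that subdirectory in the volumeAttributes section of the PV template. The following fragment is a sketch; the subdirectory name is hypothetical:

    volumeAttributes:
      bucket: "oss"
      url: "oss-cn-hangzhou.aliyuncs.com"
      otherOpts: "-o umask=022 -o max_stat_cache_size=0 -o allow_other"
      path: "/subdir"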

    1. Log on to the ACK console. In the left-side navigation pane, click Clusters.

    2. On the Clusters page, click the name of the cluster that you want to manage and choose Volumes > Persistent Volumes in the left-side navigation pane.

      On the Persistent Volumes page, you can find the PV that you created.

  3. Run the following command to create a PVC:

    kubectl create -f pvc-oss.yaml

    The following pvc-oss.yaml file is used to create the PVC:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc-oss
    spec:
      accessModes:
        - ReadOnlyMany
      resources:
        requests:
          storage: 5Gi
      selector:
        matchLabels:
          alicloud-pvname: pv-oss

    The following list describes the parameters in the template:

    • name: the name of the PVC.

    • accessModes: the access mode. Valid values: ReadOnlyMany and ReadWriteMany. If you select ReadOnlyMany, OSSFS mounts the OSS bucket in read-only mode.

    • storage: the capacity claimed by the PVC. The claimed capacity cannot exceed the capacity of the PV that is bound to the PVC.

    • alicloud-pvname: the label that is used to select and bind a PV to the PVC. The label must be the same as that of the PV to be bound to the PVC.

    In the left-side navigation pane, choose Volumes > Persistent Volume Claims. On the Persistent Volume Claims page, you can find the created PVC.
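
    You can also confirm the binding from the CLI. This minimal check assumes the PVC named pvc-oss in the default namespace:

    kubectl get pvc pvc-oss

    When the STATUS column shows Bound, the PVC is bound to the PV pv-oss.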

Method 2: Specify an AccessKey pair when you create a PV and a PVC

  1. Run the following command to create a PV that specifies an AccessKey pair in its configuration:

    kubectl create -f pv-accesskey.yaml

    The following pv-accesskey.yaml file provides an example on how to specify an AccessKey pair in the PV configuration:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv-oss
      labels:
        alicloud-pvname: pv-oss
    spec:
      capacity:
        storage: 5Gi
      accessModes:
        - ReadOnlyMany
      persistentVolumeReclaimPolicy: Retain
      csi:
        driver: ossplugin.csi.alibabacloud.com
        volumeHandle: pv-oss # The specified value must be the same as the name of the PV. 
        volumeAttributes:
          bucket: "oss"
          url: "oss-cn-hangzhou.aliyuncs.com"
          otherOpts: "-o umask=022 -o max_stat_cache_size=0 -o allow_other"
          akId: "***"
          akSecret: "***"

    The following list describes the parameters in the template:

    • name: the name of the PV.

    • labels: the labels that are added to the PV.

    • storage: the available storage of the OSS bucket.

    • accessModes: the access mode. Valid values: ReadOnlyMany and ReadWriteMany. If you select ReadOnlyMany, OSSFS mounts the OSS bucket in read-only mode.

    • persistentVolumeReclaimPolicy: the reclaim policy of the PV.

    • driver: the type of driver. In this example, the parameter is set to ossplugin.csi.alibabacloud.com, which indicates that the OSS CSI plug-in is used.

    • volumeHandle: the name of the PV.

    • bucket: the OSS bucket that you want to mount.

    • url: the endpoint of the OSS bucket to be mounted.

      • If the node and the OSS bucket belong to the same region, use the internal endpoint of the OSS bucket.

      • If the node and the OSS bucket belong to different regions, use the public endpoint of the OSS bucket.

      • You cannot use a virtual private cloud (VPC) endpoint.

      Endpoint formats:

      • Internal endpoint: oss-{{regionName}}-internal.aliyuncs.com

      • Public endpoint: oss-{{regionName}}.aliyuncs.com

    • otherOpts: custom parameters in the -o *** -o *** format for the OSS volume, such as -o max_stat_cache_size=0 -o allow_other.

      • max_stat_cache_size: specifies the maximum number of objects whose metadata can be cached. Metadata caching can accelerate List operations. However, if you modify files by using methods other than OSSFS, such as the OSS console, SDK, or ossutil, the metadata of the files may not be updated in real time.

      • allow_other: allows other users to access the mounted directory. However, these users cannot access the files in the directory.

      For more information about other parameters, see Common options.

    • path: the path relative to the root directory of the OSS bucket to be mounted. The default value is /. This parameter is supported by csi-plugin 1.14.8.32-c77e277b-aliyun and later.

    • akId: the AccessKey ID that you obtained in Step 1.

    • akSecret: the AccessKey secret that you obtained in Step 1.

  2. Run the following command to create a PVC:

    kubectl create -f pvc-oss.yaml

    The following pvc-oss.yaml file is used to create the PVC:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc-oss
    spec:
      accessModes:
        - ReadOnlyMany
      resources:
        requests:
          storage: 5Gi
      selector:
        matchLabels:
          alicloud-pvname: pv-oss

    The following list describes the parameters in the template:

    • name: the name of the PVC.

    • accessModes: the access mode. Valid values: ReadOnlyMany and ReadWriteMany. If you select ReadOnlyMany, OSSFS mounts the OSS bucket in read-only mode.

    • storage: the capacity claimed by the PVC. The claimed capacity cannot exceed the capacity of the PV that is bound to the PVC.

    • alicloud-pvname: the label that is used to select and bind a PV to the PVC. The label must be the same as that of the PV to be bound to the PVC.

    In the left-side navigation pane, choose Volumes > Persistent Volume Claims. On the Persistent Volume Claims page, you can find the created PVC.

Step 3: Deploy an application

Create an application named oss-static and mount the PV to the application.

Run the following command to deploy the application:

kubectl create -f oss-static.yaml

The following oss-static.yaml file is used to create the application:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: oss-static
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
          - name: pvc-oss
            mountPath: "/data"
          - name: pvc-oss
            mountPath: "/data1"
        livenessProbe:
          exec:
            command:
            - sh
            - -c
            - cd /data
          initialDelaySeconds: 30
          periodSeconds: 30
      volumes:
        - name: pvc-oss
          persistentVolumeClaim:
            claimName: pvc-oss
  • livenessProbe: Configure health checks. For more information, see OSS volume overview.

  • mountPath: the path where the OSS bucket is mounted in the container.

  • claimName: the name of the PVC that the application uses to mount the OSS volume.

Check whether the OSS volume can persist and share data

  1. View the pod that runs the oss-static application and the files in the OSS bucket.

    1. Run the following command to query the pod that runs the oss-static application:

      kubectl get pod

      Expected output:

      NAME                             READY   STATUS    RESTARTS   AGE
      oss-static-66fbb85b67-d****      1/1     Running   0          1h
      oss-static-66fbb85b67-l****      1/1     Running   0          1h
    2. Create a tmpfile file.

      • If the OSS volume is mounted in ReadWriteMany mode, run the following command to create a tmpfile in the /data path:

        kubectl exec oss-static-66fbb85b67-d**** -- touch /data/tmpfile
        kubectl exec oss-static-66fbb85b67-l**** -- touch /data/tmpfile
      • If the OSS volume is mounted in ReadOnlyMany mode, use the OSS console or the ossutil cp command (see Upload objects) to upload a tmpfile file to the corresponding path in the OSS bucket.

  2. Run the following command to query files in the /data path of the oss-static-66fbb85b67-d**** pod and query files in the /data1 path of the oss-static-66fbb85b67-l**** pod:

    kubectl exec oss-static-66fbb85b67-d**** -- ls /data | grep tmpfile
    kubectl exec oss-static-66fbb85b67-l**** -- ls /data1 | grep tmpfile

    Expected output:

    tmpfile

    The output indicates that the file exists in both pods. This means that the pods share the data stored in the OSS volume.

    Note

    If no output is returned, check whether the version of the CSI plug-in is 1.20.7 or later. For more information, see csi-plugin.
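
    To check the plug-in version, you can inspect the image tags of its DaemonSet. This sketch assumes the default ACK setup, in which the plug-in runs as the csi-plugin DaemonSet in the kube-system namespace:

    kubectl get ds csi-plugin -n kube-system -o jsonpath='{.spec.template.spec.containers[*].image}'

    The version is part of the image tags in the output.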

  3. Run the following command to delete the oss-static-66fbb85b67-d**** pod:

    kubectl delete pod oss-static-66fbb85b67-d****

    Expected output:

    pod "oss-static-66fbb85b67-d****" deleted
  4. Verify that the file still exists after the pod is deleted.

    1. Run the following command to query the pod that is recreated:

      kubectl get pod

      Expected output:

      NAME                             READY   STATUS    RESTARTS   AGE
      oss-static-66fbb85b67-l****      1/1     Running   0          1h
      oss-static-66fbb85b67-z****      1/1     Running   0          40s

    2. Run the following command to query files in the /data path of the pod:

      kubectl exec oss-static-66fbb85b67-z**** -- ls /data | grep tmpfile

      Expected output:

      tmpfile

      The output indicates that the tmpfile file still exists. This means that the OSS volume can persist data.