
Container Service for Kubernetes:Mount a statically provisioned OSS volume

Last Updated:Dec 02, 2024

Object Storage Service (OSS) is a secure, cost-effective, and highly durable cloud storage service provided by Alibaba Cloud that allows you to store large amounts of data in the cloud. You can mount statically provisioned OSS volumes by using RAM Roles for Service Accounts (RRSA) authentication or by using the AccessKey pair of a Resource Access Management (RAM) user.

Scenarios

OSS is a secure, cost-effective, high-capacity, and highly reliable cloud storage service provided by Alibaba Cloud. You can mount an OSS bucket to multiple pods of an ACK cluster. OSS is applicable to the following scenarios:

  • Low requirements for disk I/O.

  • Sharing of data such as configuration files, images, and short videos.

Usage notes

  • OSS buckets do not support dynamically provisioned persistent volumes (PVs). We recommend that you do not use OSS buckets across accounts.

  • kubelet is restarted when an ACK cluster is updated. As a result, ossfs is restarted and the mounted OSS directory becomes unavailable. In this case, you must recreate the pods to which the OSS volume is mounted. You can add health check settings in the YAML file to restart the pods and remount the OSS volume when the OSS directory becomes unavailable.

    Note

    If the csi-plugin and csi-provisioner components in your cluster are V1.18.8.45 or later, this issue does not occur.
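The health check mentioned above can be implemented as an exec liveness probe that accesses the mount path: if the ossfs mount becomes unavailable, the probe fails and kubelet restarts the container, which remounts the volume. A minimal sketch, assuming the OSS volume is mounted at /data in the container:

```yaml
# Sketch: restart the container when the ossfs mount becomes unavailable.
# The mount path /data is an assumption; use your container's mount path.
livenessProbe:
  exec:
    command:
    - sh
    - -c
    - cd /data        # fails if the FUSE mount is broken
  initialDelaySeconds: 30
  periodSeconds: 30
```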

  • If the securityContext.fsGroup parameter is specified in the application template, kubelet performs chmod or chown operations after the volume is mounted, which increases the mount time.

    Note

    For more information about how to speed up the mounting process when the securityContext.fsGroup parameter is specified, see the Why does it require a long period of time to mount an OSS volume? section of the "FAQ about OSS volumes" topic.

  • When you use ossfs to perform List operations, HTTP requests are sent to OSS to retrieve the metadata of the requested files. If a directory contains large numbers of files, such as more than 100,000 files, ossfs occupies large amounts of system memory. As a result, Out of Memory (OOM) errors may occur in pods. The actual number of files that may cause OOM errors is related to the node memory. To resolve this issue, you can divide the directory or mount a subdirectory in the OSS bucket.
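As an example of the subdirectory approach, the PV can reference only a subdirectory of the bucket through the path attribute described later in this topic. A minimal sketch, where /subdir and mybucket are placeholder names:

```yaml
# Sketch: mount a subdirectory instead of the whole bucket so that ossfs
# lists fewer objects. /subdir and mybucket are placeholder names.
csi:
  driver: ossplugin.csi.alibabacloud.com
  volumeHandle: pv-oss
  volumeAttributes:
    bucket: "mybucket"
    url: "oss-cn-hangzhou.aliyuncs.com"
    path: "/subdir"
```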

  • ossfs is applicable to concurrent read scenarios. We recommend that you set the access modes of persistent volume claims (PVCs) and PVs to ReadOnlyMany in concurrent read scenarios. To handle write operations, we recommend that you use OSS SDKs or ossutil to split reads and writes. For more information, see Best practice for OSS read/write splitting.

    Note

    When you use ossfs to write data to an OSS volume whose access mode is ReadWriteMany, take note of the following items:

    • ossfs cannot guarantee the consistency of data written by concurrent write operations.

    • After the OSS volume is mounted to a pod, if you log on to the pod or the host of the pod and delete or modify a file in the mount path, the source file in the OSS bucket is also deleted or modified. To prevent accidental deletion of important data, you can enable version control for the OSS bucket. For more information, see Versioning.

  • If you want to upload a file larger than 10 MB to OSS, you can split the file into multiple parts and upload the parts separately. If a multipart upload task is interrupted, the uploaded parts remain in the bucket and incur additional storage fees. Delete the parts that are no longer needed.

Use RRSA authentication to mount a statically provisioned OSS volume

You can use the RRSA feature to enforce access control on different PVs that are deployed in an ACK cluster. This implements fine-grained API permission control on PVs and reduces security risks. For more information, see Use RRSA to authorize different pods to access different cloud services.

Note

The RRSA feature is supported only by ACK clusters that run Kubernetes 1.26 or later, including ACK Basic clusters, ACK Pro clusters, ACK Serverless Basic clusters, and ACK Serverless Pro clusters. The version of the CSI component used by the cluster must be 1.30.4 or later. If you used the RRSA feature prior to version 1.30.4, you must attach policies to the RAM role. For more information, see [Product Changes] ossfs version upgrade and mounting process optimization in CSI.

(Optional) Step 1: Create a RAM role

If you use the RRSA feature for the first time in your cluster, perform the following steps. If you have used RRSA authentication to mount OSS volumes in the cluster, skip this step.

  1. Log on to the ACK console and enable RRSA. For more information, see Use RRSA to authorize different pods to access different cloud services.

  2. Create a RAM role for mounting OSS volumes by using RRSA authentication. The RAM role is assumed by your cluster when it uses RRSA to mount OSS volumes.

    When you create the RAM role, select IdP for the Select Trusted Entity parameter. In this example, the name of the RAM role is demo-role-for-rrsa.

    1. Log on to the RAM console with your Alibaba Cloud account.

    2. In the left-side navigation pane, choose Identities > Roles. On the Roles page, click Create Role.

    3. In the Create Role panel, select IdP for Select Trusted Entity and click Next.

    4. In the Configure Role step, configure the parameters and click OK.

      The following list describes the parameters that are configured in this example.

      • RAM Role Name: The name of the RAM role. In this example, demo-role-for-rrsa is used.

      • Note: Optional. The description of the RAM role.

      • IdP Type: The type of the identity provider (IdP). Select OIDC.

      • Select IdP: The IdP that you want to use. Select a value in the ack-rrsa-<cluster_id> format. <cluster_id> indicates the ID of your cluster.

      • Conditions:

        • oidc:iss: Use the default value.

        • oidc:aud: Select sts.aliyuncs.com.

        • oidc:sub: Set the condition operator to StringEquals. In this example, system:serviceaccount:ack-csi-fuse:csi-fuse-ossfs is used.
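With these conditions, the trust policy of the created role restricts which ServiceAccount can assume it. The following is an illustrative sketch of what the resulting trust policy may look like; <account_id> and <cluster_id> are placeholders:

```json
{
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "oidc:aud": "sts.aliyuncs.com",
          "oidc:sub": "system:serviceaccount:ack-csi-fuse:csi-fuse-ossfs"
        }
      },
      "Effect": "Allow",
      "Principal": {
        "Federated": [
          "acs:ram::<account_id>:oidc-provider/ack-rrsa-<cluster_id>"
        ]
      }
    }
  ],
  "Version": "1"
}
```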

Step 2: Grant permissions to the demo-role-for-rrsa role

  1. Create the following custom policies to grant OSS permissions to the RAM role. For more information, see Create custom policies.

    Replace mybucket in the following policies with the name of your OSS bucket.

    • Policy that grants read-only permissions on OSS

      Policy document

      {
          "Statement": [
              {
                  "Action": [
                      "oss:Get*",
                      "oss:List*"
                  ],
                  "Effect": "Allow",
                  "Resource": [
                      "acs:oss:*:*:mybucket",
                      "acs:oss:*:*:mybucket/*"
                ]
              }
          ],
          "Version": "1"
      }
    • Policy that grants read and write permissions on OSS

      Policy document

      {
          "Statement": [
              {
                  "Action": "oss:*",
                  "Effect": "Allow",
                  "Resource": [
                      "acs:oss:*:*:mybucket",
                      "acs:oss:*:*:mybucket/*"
                ]
              }
          ],
          "Version": "1"
      }
  2. Optional. If the files in the OSS bucket are encrypted by using a customer master key (CMK) in Key Management Service (KMS), you must grant KMS permissions to the RAM role. For more information, see the Encrypt an OSS volume section of the "Encrypt an OSS volume" topic.

  3. Grant the required permissions to the demo-role-for-rrsa role. For more information, see Grant permissions to a RAM role.

    Note

    If you want to use an existing RAM role, you can modify the trust policy of the RAM role to which the required OSS permissions are granted. For more information, see the Use an existing RAM role and grant the required permissions to the RAM role section of the "Use RRSA to authorize different pods to access different cloud services" topic.

Step 3: Create a PV and a PVC

  1. Create a PV that uses RRSA authentication.

    1. Use the following sample template to create a PV configuration file named pv-rrsa.yaml. In this file, RRSA authentication is enabled.

      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: pv-oss
        labels:    
          alicloud-pvname: pv-oss
      spec:
        capacity:
          storage: 5Gi
        accessModes:
          - ReadOnlyMany
        persistentVolumeReclaimPolicy: Retain
        csi:
          driver: ossplugin.csi.alibabacloud.com
          volumeHandle: pv-oss # Specify the name of the PV. 
          volumeAttributes:
            bucket: "oss"
            url: "oss-cn-hangzhou.aliyuncs.com"
            otherOpts: "-o umask=022 -o max_stat_cache_size=0 -o allow_other"
            authType: "rrsa"
            roleName: "demo-role-for-rrsa"

      The following list describes the parameters in the sample file.

      • name: The name of the PV.

      • labels: The labels that are added to the PV.

      • storage: The available storage of the OSS bucket.

      • accessModes: The access mode. Valid values: ReadOnlyMany and ReadWriteMany. If you select ReadOnlyMany, ossfs mounts the OSS bucket in read-only mode.

      • persistentVolumeReclaimPolicy: The reclaim policy of the PV.

      • driver: The type of driver. In this example, the parameter is set to ossplugin.csi.alibabacloud.com. This indicates that the OSS CSI component is used.

      • volumeHandle: The name of the PV.

      • bucket: The OSS bucket that you want to mount.

      • url: The endpoint of the OSS bucket that you want to mount.

        • If the node and the OSS bucket reside in the same region, or are connected through a virtual private cloud (VPC), use an internal endpoint of the OSS bucket.

        • If the node and the OSS bucket reside in different regions, use a public endpoint of the OSS bucket.

        Endpoint formats:

        • Internal endpoint: http://oss-{{regionName}}-internal.aliyuncs.com or https://oss-{{regionName}}-internal.aliyuncs.com

        • Public endpoint: http://oss-{{regionName}}.aliyuncs.com or https://oss-{{regionName}}.aliyuncs.com

        Important

        • The actual endpoint is displayed on the Overview page in the OSS console.

        • The internal endpoint vpc100-oss-{{regionName}}.aliyuncs.com is deprecated. Update your configuration accordingly.

      • otherOpts: The custom parameters that you can configure for the OSS volume in the -o *** -o *** format. Example: -o umask=022 -o max_stat_cache_size=0 -o allow_other.

        • umask: modifies the permission mask of files in ossfs. For example, if you specify umask=022, the permission mask of files in ossfs changes to 755. By default, the permission mask of files uploaded by using OSS SDKs or the OSS console is 640 in ossfs. Therefore, we recommend that you specify the umask parameter if you want to split reads and writes.

        • max_stat_cache_size: the maximum number of files whose metadata can be cached. Metadata caching can accelerate ls operations. However, if you modify files by using methods such as OSS SDKs, the OSS console, or ossutil, the cached metadata is not synchronously updated. As a result, the cached metadata becomes outdated, and the results of ls operations may be inaccurate.

        • allow_other: allows other users to access the mounted directory. However, these users cannot access the files in the directory.

        For more information about other parameters, see Options supported by ossfs.

      • path: The path relative to the root directory of the OSS bucket to be mounted. Default value: /. This parameter is supported by csi-plugin V1.14.8.32-c77e277b-aliyun and later. For ossfs versions earlier than 1.91, you must create this path in OSS in advance. For more information, see Features of ossfs 1.91 and later and ossfs performance benchmarking.

      • authType: The authentication method. Set this parameter to rrsa, which indicates that RRSA authentication is used.

      • roleName: The name of the RAM role that you want to use. Set this parameter to the name of the RAM role that you created or modified in Step 1. If you need to configure different permissions for different PVs, you can create multiple RAM roles and specify them in the roleName parameter.

      Note

      For more information about how to use specified Alibaba Cloud Resource Names (ARNs) or ServiceAccounts in RRSA authentication, see the How do I use the specified ARNs or ServiceAccount in RRSA authentication? section of the "FAQ about OSS volumes" topic.
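For example, to give two PVs different permissions, each PV can reference its own RAM role in the roleName attribute. A minimal sketch of the relevant part of a second PV; demo-role-readonly is a hypothetical role created in the same way as demo-role-for-rrsa:

```yaml
# Sketch: a second PV that assumes a different RAM role via RRSA.
# demo-role-readonly is a hypothetical role with read-only OSS permissions.
csi:
  driver: ossplugin.csi.alibabacloud.com
  volumeHandle: pv-oss-readonly
  volumeAttributes:
    bucket: "mybucket"
    url: "oss-cn-hangzhou.aliyuncs.com"
    authType: "rrsa"
    roleName: "demo-role-readonly"
```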

    2. Run the following command to create a PV that uses RRSA authentication:

      kubectl create -f pv-rrsa.yaml
  2. Run the following command to create a statically provisioned PVC:

    kubectl create -f pvc-oss.yaml

    The following pvc-oss.yaml sample file is used to create the statically provisioned PVC:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc-oss
    spec:
      accessModes:
        - ReadOnlyMany
      resources:
        requests:
          storage: 5Gi
      selector:
        matchLabels:
          alicloud-pvname: pv-oss

    The following list describes the parameters in the sample file.

    • name: The name of the PVC.

    • accessModes: The access mode. Valid values: ReadOnlyMany and ReadWriteMany. If you select ReadOnlyMany, ossfs mounts the OSS bucket in read-only mode.

    • storage: The capacity claimed by the PVC. The claimed capacity cannot exceed the capacity of the PV that is bound to the PVC.

    • alicloud-pvname: The label that is used to select and bind a PV to the PVC. It must match the label of the PV to be bound to the PVC.

    In the left-side navigation pane, choose Volumes > Persistent Volume Claims. On the Persistent Volume Claims page, you can find the created PVC.

Step 4: Create an application

Create an application named oss-static and mount the PV to the application.

Run the following command to create the application:

kubectl create -f oss-static.yaml

The following oss-static.yaml sample file is used to create the application:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: oss-static
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
          - name: pvc-oss
            mountPath: "/data"
          - name: pvc-oss
            mountPath: "/data1"
        livenessProbe:
          exec:
            command:
            - sh
            - -c
            - cd /data
          initialDelaySeconds: 30
          periodSeconds: 30
      volumes:
        - name: pvc-oss
          persistentVolumeClaim:
            claimName: pvc-oss
  • livenessProbe: the health check configurations. For more information, see OSS volumes.

  • mountPath: the path to which the OSS bucket is mounted in the container.

  • claimName: the name of the PVC that the application uses.

Use AccessKey authentication to mount a statically provisioned OSS volume as a RAM user

Use the ACK console

Step 1: Create a RAM user that has OSS permissions and obtain the AccessKey pair of the RAM user

  1. Create a RAM user. For more information, see Create a RAM user.

  2. Create the following custom policies to grant OSS permissions to the RAM user. For more information, see Create custom policies.

    Replace mybucket in the following policies with the name of your OSS bucket.

    • Policy that grants read-only permissions on OSS

      Policy document

      {
          "Statement": [
              {
                  "Action": [
                      "oss:Get*",
                      "oss:List*"
                  ],
                  "Effect": "Allow",
                  "Resource": [
                      "acs:oss:*:*:mybucket",
                      "acs:oss:*:*:mybucket/*"
                ]
              }
          ],
          "Version": "1"
      }
    • Policy that grants read and write permissions on OSS

      Policy document

      {
          "Statement": [
              {
                  "Action": "oss:*",
                  "Effect": "Allow",
                  "Resource": [
                      "acs:oss:*:*:mybucket",
                      "acs:oss:*:*:mybucket/*"
                  ],
              }
          ],
          "Version": "1"
      }
  3. Optional. If the files in the OSS bucket are encrypted by using a customer master key (CMK) in Key Management Service (KMS), you need to grant KMS permissions to the RAM user. For more information, see the Encrypt an OSS volume section of the "Encrypt an OSS volume" topic.

  4. Grant OSS permissions to the RAM user. For more information, see Grant permissions to a RAM user.

  5. Create an AccessKey pair for the RAM user. For more information, see Create an AccessKey pair.

Important
  • If you use the ACK console to mount a statically provisioned OSS volume, only AccessKey authentication is supported. If the AccessKey pair referenced by the OSS volume becomes invalid or the permissions are revoked, the application to which the volume is mounted fails to access OSS and a permission error is reported. To resolve the issue, you need to modify the Secret that stores the AccessKey pair and mount the OSS volume to the application again. In this case, the application is restarted. For more information about how to remount an OSS volume by using ossfs after the AccessKey pair is revoked, see Scenario 4 in the "How do I manage the permissions related to OSS volume mounting?" section of the "FAQ about OSS volumes" topic.

  • If you need to regularly rotate AccessKey pairs, we recommend that you use RRSA authentication.

Step 2: Create a PV

  1. Log on to the ACK console. In the left-side navigation pane, click Clusters.

  2. On the Clusters page, find the cluster that you want to manage and click its name. In the left-side pane, choose Volumes > Persistent Volumes.

  3. In the upper-right corner of the Persistent Volumes page, click Create.

  4. In the Create PV dialog box, configure the following parameters.

    • PV Type: The type of the PV. Valid values: Cloud Disk, NAS, and OSS. In this example, OSS is selected.

    • Volume Name: The name of the PV. The name must be unique in the cluster. In this example, pv-oss is used.

    • Capacity: The capacity of the PV.

    • Access Mode: The access mode of the PV. Valid values: ReadOnlyMany and ReadWriteMany. Default value: ReadOnlyMany. If you select ReadOnlyMany, ossfs mounts the OSS bucket in read-only mode.

    • Access Certificate: The Secret that is used to access the OSS bucket. In this example, the AccessKey pair that you obtained in Step 1 is used. Valid values:

      • Select Existing Secret: If you set the parameter to this value, you must also configure the Namespace and Secret parameters.

      • Create Secret: If you set the parameter to this value, you must also configure the Namespace, Name, AccessKey ID, and AccessKey Secret parameters.

    • Optional Parameters: The custom parameters that you can configure for the OSS volume in the -o *** -o *** format. Example: -o umask=022 -o max_stat_cache_size=0 -o allow_other.

      • umask: modifies the permission mask of files in ossfs. For example, if you specify umask=022, the permission mask of files in ossfs changes to 755. By default, the permission mask of files uploaded by using OSS SDKs or the OSS console is 640 in ossfs. Therefore, we recommend that you specify the umask parameter if you want to split reads and writes.

      • max_stat_cache_size: the maximum number of files whose metadata can be cached. Metadata caching can accelerate ls operations. However, if you modify files by using methods such as OSS SDKs, the OSS console, or ossutil, the cached metadata is not synchronously updated. As a result, the cached metadata becomes outdated, and the results of ls operations may be inaccurate.

      • allow_other: allows other users to access the mounted directory. However, these users cannot access the files in the directory.

      For more information about other parameters, see Options supported by ossfs.

    • Bucket ID: The name of the OSS bucket that you want to mount. Click Select Bucket. In the dialog box that appears, find the OSS bucket that you want to mount and click Select.

    • Endpoint: The endpoint of the OSS bucket that you want to mount.

      • If the OSS bucket and the ECS instance reside in different regions, select Public Endpoint.

      • If the OSS bucket and the ECS instance reside in the same region, select Internal Endpoint.

      Note

      By default, HTTP is used when you access the OSS bucket over an internal network. If you want to use HTTPS, use kubectl to create a statically provisioned PV.

    • Label: The labels that you want to add to the PV.

  5. After you configure the parameters, click Create.
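If you need HTTPS for internal access, as mentioned in the Endpoint description, create the PV with kubectl and specify an https:// endpoint in the url attribute. A minimal sketch of the relevant part of the PV; the region and bucket name are placeholders:

```yaml
# Sketch: use an HTTPS internal endpoint by creating the PV with kubectl
# instead of the console. Replace the region and bucket with your own values.
csi:
  driver: ossplugin.csi.alibabacloud.com
  volumeHandle: pv-oss
  volumeAttributes:
    bucket: "mybucket"
    url: "https://oss-cn-hangzhou-internal.aliyuncs.com"
```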

Step 3: Create a PVC

  1. On the Clusters page, find the cluster that you want to manage and click its name. In the left-side pane, choose Volumes > Persistent Volume Claims.

  2. In the upper-right corner of the Persistent Volume Claims page, click Create.

  3. In the Create PVC dialog box, configure the following parameters.

    • PVC Type: The type of the PVC. Valid values: Cloud Disk, NAS, and OSS. In this example, OSS is selected.

    • Name: The name of the PVC. The name must be unique in the cluster.

    • Allocation Mode: The allocation mode of the PVC. In this example, Existing Volumes is selected.

      Note

      If no PV is created, set the Allocation Mode parameter to Create Volume and configure the required parameters to create a PV. For more information, see the Step 2: Create a PV section of this topic.

    • Existing Volumes: Click Select PV. Find the PV that you want to use and click Select in the Actions column.

    • Capacity: The capacity of the PVC.

      Note

      The capacity of the PVC cannot exceed the capacity of the PV that is bound to the PVC.

  4. Click Create.

    After the PVC is created, you can find it in the PVC list. The PV is bound to the PVC.

Step 4: Create an application

  1. On the Clusters page, find the cluster that you want to manage and click its name. In the left-side pane, choose Workloads > Deployments.

  2. In the upper-right corner of the Deployments page, click Create from Image.

  3. Configure application parameters.

    The following section describes how to configure the volume parameters. For more information about other parameters, see Create a stateless application by using a Deployment.

    ACK clusters support local volumes and cloud volumes.

    • Add Local Storage: Select a PV type from the PV Type drop-down list. Valid values: HostPath, ConfigMap, Secret, and EmptyDir. Configure the Mount Source and Container Path parameters to mount the volume to a container path. For more information, see Volumes.

    • Add PVC: Add cloud volumes.

    In this example, an OSS volume is selected and mounted to the /tmp path in the container.

  4. Configure other parameters and click Create.

    After the application is created, you can use the volume to store application data.

Use kubectl

Step 1: Create a RAM user that has OSS permissions and obtain the AccessKey pair of the RAM user

  1. Create a RAM user. For more information, see Create a RAM user.

  2. Create the following custom policies to grant OSS permissions to the RAM user. For more information, see Create custom policies.

    Replace mybucket in the following policies with the name of your OSS bucket.

    • Policy that grants read-only permissions on OSS

      Policy document

      {
          "Statement": [
              {
                  "Action": [
                      "oss:Get*",
                      "oss:List*"
                  ],
                  "Effect": "Allow",
                  "Resource": [
                      "acs:oss:*:*:mybucket",
                      "acs:oss:*:*:mybucket/*"
                ]
              }
          ],
          "Version": "1"
      }
    • Policy that grants read and write permissions on OSS

      Policy document

      {
          "Statement": [
              {
                  "Action": "oss:*",
                  "Effect": "Allow",
                  "Resource": [
                      "acs:oss:*:*:mybucket",
                      "acs:oss:*:*:mybucket/*"
                  ],
              }
          ],
          "Version": "1"
      }
  3. Optional. If the files in the OSS bucket are encrypted by using a customer master key (CMK) in Key Management Service (KMS), you must grant KMS permissions to the RAM user. For more information, see the Encrypt an OSS volume section of the "Encrypt an OSS volume" topic.

  4. Grant OSS permissions to the RAM user. For more information, see Grant permissions to a RAM user.

  5. Create an AccessKey pair for the RAM user. For more information, see Create an AccessKey pair.

Step 2: Create a PV and a PVC

You can use one of the following methods to create a PV and a PVC:

  • Method 1: Use a Secret to create a PV and a PVC

    Use a Secret to provide your AccessKey pair to the CSI component.

    Important
    • If the AccessKey pair referenced by the OSS volume becomes invalid or the permissions are revoked, the application to which the volume is mounted fails to access OSS and a permission error is reported. To resolve the issue, you need to modify the Secret that stores the AccessKey pair and mount the OSS volume to the application again. In this case, the application is restarted. For more information about how to remount an OSS volume by using ossfs after the AccessKey pair is revoked, see Scenario 4 in the "How do I manage the permissions related to OSS volume mounting?" section of the "FAQ about OSS volumes" topic.

    • If you need to regularly rotate AccessKey pairs, we recommend that you use RRSA authentication.

  • Method 2: Specify an AccessKey pair when you create a PV and a PVC

    Specify an AccessKey pair in the PV configuration file.

    Important
    • If the AccessKey pair referenced by the OSS volume becomes invalid or the permissions are revoked, the application to which the volume is mounted fails to access OSS and a permission error is reported. You need to modify the AccessKey information in the PV configuration file and redeploy the application.

    • If you need to regularly rotate AccessKey pairs, we recommend that you use RRSA authentication.

Method 1: Use a Secret to create a PV and a PVC

  1. Create a Secret.

    The following sample YAML file is used to create the Secret that stores an AccessKey pair:

    apiVersion: v1
    kind: Secret
    metadata:
      name: oss-secret
      namespace: default
    stringData:
      akId: <yourAccessKey ID>
      akSecret: <yourAccessKey Secret>
    Note

    The Secret must be created in the namespace in which the application that uses the PV is deployed.

    Replace the values of the akId and akSecret parameters with the AccessKey ID and AccessKey secret obtained in Step 1.

  2. Run the following command to create a statically provisioned PV:

    kubectl create -f pv-oss.yaml

    The following pv-oss.yaml sample file is used to create the statically provisioned PV:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv-oss
      labels:
        alicloud-pvname: pv-oss
    spec:
      capacity:
        storage: 5Gi
      accessModes:
        - ReadOnlyMany
      persistentVolumeReclaimPolicy: Retain
      csi:
        driver: ossplugin.csi.alibabacloud.com
        volumeHandle: pv-oss # Specify the name of the PV. 
        nodePublishSecretRef:
          name: oss-secret
          namespace: default
        volumeAttributes:
          bucket: "oss"
          url: "oss-cn-hangzhou.aliyuncs.com"
          otherOpts: "-o umask=022 -o max_stat_cache_size=0 -o allow_other"
          path: "/"

    The following list describes the parameters in the sample file.

    • name: The name of the PV.

    • labels: The labels that are added to the PV.

    • storage: The available storage of the OSS bucket.

    • accessModes: The access mode. Valid values: ReadOnlyMany and ReadWriteMany. If you select ReadOnlyMany, ossfs mounts the OSS bucket in read-only mode.

    • persistentVolumeReclaimPolicy: The reclaim policy of the PV.

    • driver: The type of driver. In this example, the parameter is set to ossplugin.csi.alibabacloud.com. This indicates that the OSS CSI component is used.

    • nodePublishSecretRef: The Secret from which the AccessKey pair is retrieved when the OSS bucket is mounted as a PV.

    • volumeHandle: The name of the PV.

    • bucket: The OSS bucket that you want to mount.

    • url: The endpoint of the OSS bucket that you want to mount.

      • If the node and the OSS bucket reside in the same region, or are connected through a virtual private cloud (VPC), use an internal endpoint of the OSS bucket.

      • If the node and the OSS bucket reside in different regions, use a public endpoint of the OSS bucket.

      Endpoint formats:

      • Internal endpoint: http://oss-{{regionName}}-internal.aliyuncs.com or https://oss-{{regionName}}-internal.aliyuncs.com

      • Public endpoint: http://oss-{{regionName}}.aliyuncs.com or https://oss-{{regionName}}.aliyuncs.com

      Important

      • The actual endpoint is displayed on the Overview page in the OSS console.

      • The internal endpoint vpc100-oss-{{regionName}}.aliyuncs.com is deprecated. Update your configuration accordingly.

    • otherOpts: The custom parameters that you can configure for the OSS volume in the -o *** -o *** format. Example: -o umask=022 -o max_stat_cache_size=0 -o allow_other.

      • umask: modifies the permission mask of files in ossfs. For example, if you specify umask=022, the permission mask of files in ossfs changes to 755. By default, the permission mask of files uploaded by using OSS SDKs or the OSS console is 640 in ossfs. Therefore, we recommend that you specify the umask parameter if you want to split reads and writes.

      • max_stat_cache_size: the maximum number of files whose metadata can be cached. Metadata caching can accelerate ls operations. However, if you modify files by using methods such as OSS SDKs, the OSS console, or ossutil, the cached metadata is not synchronously updated. As a result, the cached metadata becomes outdated, and the results of ls operations may be inaccurate.

      • allow_other: allows other users to access the mounted directory. However, these users cannot access the files in the directory.

      For more information about other parameters, see Options supported by ossfs.

    • path: The path relative to the root directory of the OSS bucket to be mounted. Default value: /. This parameter is supported by csi-plugin V1.14.8.32-c77e277b-aliyun and later. For ossfs versions earlier than 1.91, you must create this path in OSS in advance. For more information, see Features of ossfs 1.91 and later and ossfs performance benchmarking.

    1. Log on to the ACK console. In the left-side navigation pane, click Clusters.

    2. On the Clusters page, find the cluster that you want to manage and click its name. In the left-side navigation pane, choose Volumes > Persistent Volumes.

      On the Persistent Volumes page, you can find the PV that you created.

  3. Run the following command to create a statically provisioned PVC:

    kubectl create -f pvc-oss.yaml

    The following pvc-oss.yaml sample file is used to create the statically provisioned PVC:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc-oss
    spec:
      accessModes:
        - ReadOnlyMany
      resources:
        requests:
          storage: 5Gi
      selector:
        matchLabels:
          alicloud-pvname: pv-oss

    Parameter

    Description

    name

    The name of the PVC.

    accessModes

    The access mode. Valid values: ReadOnlyMany and ReadWriteMany.

    If you select ReadOnlyMany, ossfs mounts the OSS bucket in read-only mode.

    storage

    The capacity claimed by the PVC. The claimed capacity cannot exceed the capacity of the PV that is bound to the PVC.

    alicloud-pvname

    The labels that are used to select and bind a PV to the PVC. The labels must be the same as those of the PV to be bound to the PVC.

    In the left-side navigation pane, choose Volumes > Persistent Volume Claims. On the Persistent Volume Claims page, you can find the created PVC.
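Before you apply the manifests, you can sanity-check the two binding conditions described above: the PVC selector labels must match the PV labels, and the claimed capacity must not exceed the PV capacity. The following sketch is illustrative only; parse_quantity handles just the Gi and Mi suffixes used in this example:

```python
def parse_quantity(q):
    """Parse a Kubernetes quantity such as '5Gi' into bytes (Gi/Mi only)."""
    units = {"Gi": 1024 ** 3, "Mi": 1024 ** 2}
    for suffix, factor in units.items():
        if q.endswith(suffix):
            return int(q[: -len(suffix)]) * factor
    return int(q)

# Values copied from the pv-oss and pvc-oss manifests above.
pv = {"labels": {"alicloud-pvname": "pv-oss"}, "capacity": "5Gi"}
pvc = {"selector": {"alicloud-pvname": "pv-oss"}, "request": "5Gi"}

# A PVC can bind only if its selector matches the PV labels and the
# requested capacity does not exceed the PV capacity.
labels_match = all(pv["labels"].get(k) == v for k, v in pvc["selector"].items())
fits = parse_quantity(pvc["request"]) <= parse_quantity(pv["capacity"])
print(labels_match and fits)  # True
```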

Method 2: Specify an AccessKey pair when you create a PV and a PVC

  1. Run the following command to create a PV by using a configuration file in which an AccessKey pair is directly specified:

    kubectl create -f pv-accesskey.yaml

    The following pv-accesskey.yaml sample file shows how to specify an AccessKey pair in a PV configuration file:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv-oss
      labels:
        alicloud-pvname: pv-oss
    spec:
      capacity:
        storage: 5Gi
      accessModes:
        - ReadOnlyMany
      persistentVolumeReclaimPolicy: Retain
      csi:
        driver: ossplugin.csi.alibabacloud.com
        volumeHandle: pv-oss # Specify the name of the PV. 
        volumeAttributes:
          bucket: "oss"
          url: "oss-cn-hangzhou.aliyuncs.com"
          otherOpts: "-o umask=022 -o max_stat_cache_size=0 -o allow_other"
          akId: "***"
          akSecret: "***"

    Parameter

    Description

    name

    The name of the PV.

    labels

    The labels that are added to the PV.

    storage

    The available storage of the OSS bucket.

    accessModes

    The access mode. Valid values: ReadOnlyMany and ReadWriteMany.

    If you select ReadOnlyMany, ossfs mounts the OSS bucket in read-only mode.

    persistentVolumeReclaimPolicy

    The reclaim policy of the PV.

    driver

    The type of driver. In this example, the parameter is set to ossplugin.csi.alibabacloud.com. This indicates that the OSS CSI component is used.

    volumeHandle

    The name of the PV.

    bucket

    The OSS bucket that you want to mount.

    url

    The endpoint of the OSS bucket that you want to mount.

    • If the node and the OSS bucket reside in the same region, or are connected through a virtual private cloud (VPC), use an internal endpoint of the OSS bucket.

    • If the node and the OSS bucket reside in different regions, use a public endpoint of the OSS bucket.

    Endpoint formats:

    • Internal endpoint: http://oss-{{regionName}}-internal.aliyuncs.com or https://oss-{{regionName}}-internal.aliyuncs.com

    • Public endpoint: http://oss-{{regionName}}.aliyuncs.com or https://oss-{{regionName}}.aliyuncs.com.

    Important
    • The actual endpoint is displayed on the Overview page in the OSS console.

    • The internal endpoint vpc100-oss-{{regionName}}.aliyuncs.com is deprecated. Update your configuration accordingly.

    otherOpts

    You can configure custom parameters for the OSS volume in the -o *** -o *** format. Example: -o umask=022 -o max_stat_cache_size=0 -o allow_other.

    • umask: modifies the permissions of files displayed in ossfs. For example, if you specify umask=022, the permissions of files in ossfs change to 755. By default, files uploaded by using OSS SDKs or the console are displayed with 640 permissions in ossfs. Therefore, we recommend that you specify the umask parameter if you need to separate read and write permissions.

    • max_stat_cache_size: the maximum number of files whose metadata can be cached. Metadata caching can accelerate ls operations. However, if you modify files by using OSS SDKs, the console, or ossutil, the cached metadata is not updated synchronously. As a result, the cached metadata becomes outdated and the results of ls operations may be inaccurate.

    • allow_other: allows other users to access the mounted directory. However, these users cannot access the files in the directory.

    For more information about other parameters, see Options supported by ossfs.

    path

    The path relative to the root directory of the OSS bucket to be mounted. Default value: /. This parameter is supported by csi-plugin V1.14.8.32-c77e277b-aliyun and later.

    For ossfs versions earlier than 1.91, you must create this path in OSS in advance. For more information, see Features of ossfs 1.91 and later and ossfs performance benchmarking.

    akId

    The AccessKey ID that you obtained in Step 1.

    akSecret

    The AccessKey secret that you obtained in Step 1.
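The endpoint formats listed above can be composed from the region name and an internal/public choice. A small sketch, assuming the standard oss-{{regionName}}[-internal].aliyuncs.com pattern (the helper name oss_endpoint is illustrative, not part of the CSI configuration):

```python
def oss_endpoint(region: str, internal: bool = False, https: bool = True) -> str:
    """Build an OSS endpoint in the formats shown above.

    internal=True yields the internal endpoint, which applies when the node
    and the bucket are in the same region or are connected through a VPC.
    """
    scheme = "https" if https else "http"
    suffix = "-internal" if internal else ""
    return f"{scheme}://oss-{region}{suffix}.aliyuncs.com"

# Example: internal endpoint for the China (Hangzhou) region.
print(oss_endpoint("cn-hangzhou", internal=True))
# https://oss-cn-hangzhou-internal.aliyuncs.com
```

The actual endpoint for your bucket is displayed on the Overview page in the OSS console and should take precedence over any computed value.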

  2. Run the following command to create a statically provisioned PVC:

    kubectl create -f pvc-oss.yaml

    The following pvc-oss.yaml sample file is used to create the statically provisioned PVC:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc-oss
    spec:
      accessModes:
        - ReadOnlyMany
      resources:
        requests:
          storage: 5Gi
      selector:
        matchLabels:
          alicloud-pvname: pv-oss

    Parameter

    Description

    name

    The name of the PVC.

    accessModes

    The access mode. Valid values: ReadOnlyMany and ReadWriteMany.

    If you select ReadOnlyMany, ossfs mounts the OSS bucket in read-only mode.

    storage

    The capacity claimed by the PVC. The claimed capacity cannot exceed the capacity of the PV that is bound to the PVC.

    alicloud-pvname

    The labels that are used to select and bind a PV to the PVC. The labels must be the same as those of the PV to be bound to the PVC.

    In the left-side navigation pane, choose Volumes > Persistent Volume Claims. On the Persistent Volume Claims page, you can find the created PVC.
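The otherOpts value shown above is a flat sequence of -o options. For inspection or validation in your own tooling, such a string can be split into a dictionary; this parsing is illustrative only, because ossfs performs its own option handling:

```python
def parse_other_opts(opts: str) -> dict:
    """Split an '-o key=value -o flag' string into a dict for inspection."""
    result = {}
    tokens = opts.split()
    for i, tok in enumerate(tokens):
        if tok == "-o" and i + 1 < len(tokens):
            key, _, value = tokens[i + 1].partition("=")
            # Flags without '=value' (such as allow_other) become True.
            result[key] = value if value else True
    return result

opts = parse_other_opts("-o umask=022 -o max_stat_cache_size=0 -o allow_other")
print(opts)
# {'umask': '022', 'max_stat_cache_size': '0', 'allow_other': True}
```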

Step 3: Create an application

Create an application named oss-static and mount the PV to the application.

Run the following command to create the application by using a file named oss-static.yaml:

kubectl create -f oss-static.yaml

The following oss-static.yaml sample file is used to create the application:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: oss-static
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
          - name: pvc-oss
            mountPath: "/data"
          - name: pvc-oss
            mountPath: "/data1"
        livenessProbe:
          exec:
            command:
            - sh
            - -c
            - cd /data
          initialDelaySeconds: 30
          periodSeconds: 30
      volumes:
        - name: pvc-oss
          persistentVolumeClaim:
            claimName: pvc-oss
  • livenessProbe: the health check configurations. For more information, see OSS volumes.

  • mountPath: the path to which the OSS bucket is mounted in the container.

  • claimName: the name of the PVC that the application uses.
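The liveness probe in the Deployment above marks the container healthy when sh -c "cd /data" exits with code 0, that is, when the mounted directory is still accessible. The behavior can be illustrated locally, with /tmp and a nonexistent path standing in for the mount point:

```python
import subprocess

def probe(directory: str) -> bool:
    """Mimic the exec liveness probe: healthy if 'cd <dir>' exits with 0."""
    result = subprocess.run(["sh", "-c", f"cd {directory}"],
                            capture_output=True)
    return result.returncode == 0

print(probe("/tmp"))                  # True: directory is accessible
print(probe("/no/such/mount/point"))  # False: probe fails, pod restarts
```

When the probe fails repeatedly, kubelet restarts the container, which remounts the OSS volume.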

Check whether the OSS volume can persist and share data

  1. View the pod that runs the oss-static application and the files in the OSS bucket.

    1. Run the following command to query the pod that runs the oss-static application:

      kubectl get pod

      Expected output:

      NAME                             READY   STATUS    RESTARTS   AGE
      oss-static-66fbb85b67-d****      1/1     Running   0          1h
      oss-static-66fbb85b67-l****      1/1     Running   0          1h
    2. Create a tmpfile file.

      • If the OSS volume is mounted in ReadWriteMany mode, run the following commands to create the tmpfile file in the /data path:

        kubectl exec oss-static-66fbb85b67-d**** -- touch /data/tmpfile
        kubectl exec oss-static-66fbb85b67-l**** -- touch /data/tmpfile
      • If the OSS volume is mounted in ReadOnlyMany mode, use the OSS console or the ossutil cp command (upload objects) to upload the tmpfile file to the corresponding path in the OSS bucket.

  2. Run the following commands to query files in the /data path of the oss-static-66fbb85b67-d**** pod and query files in the /data1 path of the oss-static-66fbb85b67-l**** pod:

    kubectl exec oss-static-66fbb85b67-d**** -- ls /data | grep tmpfile
    kubectl exec oss-static-66fbb85b67-l**** -- ls /data1 | grep tmpfile

    Expected output:

    tmpfile

    The output indicates that the file exists in both pods. This means that the pods share the data stored in the OSS volume.

    Note

    If no output is returned, check whether the version of the CSI component is 1.20.7 or later. For more information, see csi-plugin.

  3. Run the following command to delete the oss-static-66fbb85b67-d**** pod:

    kubectl delete pod oss-static-66fbb85b67-d****

    Expected output:

    pod "oss-static-66fbb85b67-d****" deleted
  4. Verify that the file still exists after the pod is deleted.

    1. Run the following command to query the pod that is recreated:

      kubectl get pod

      Expected output:

      NAME                             READY   STATUS    RESTARTS   AGE
      oss-static-66fbb85b67-l****      1/1     Running   0          1h
      oss-static-66fbb85b67-z****      1/1     Running   0          40s
    2. Run the following command to query files in the /data path of the pod:

      kubectl exec oss-static-66fbb85b67-z**** -- ls /data | grep tmpfile

      Expected output:

      tmpfile

      The output indicates that the tmpfile file still exists. This means that the OSS volume can persist data.