When a node in your cluster stops running, data in the containers of stateful applications on the node may be lost, which compromises data reliability. To eliminate the risk of data loss, you can use persistent storage to persist data. This topic describes how to use an Object Storage Service (OSS) volume to persist data.
Background information
OSS is a secure, cost-effective, high-capacity, and highly reliable cloud storage service provided by Alibaba Cloud. You can mount an OSS bucket to multiple pods in a Container Service for Kubernetes (ACK) cluster.
Scenarios:
Your workloads have average requirements for disk I/O.
You need to share data, such as configuration files, images, and small video files.
OSS volume mounting procedure:
Create an OSS bucket.
Obtain the AccessKey ID and AccessKey secret of your Alibaba Cloud account.
Create a persistent volume (PV) and persistent volume claim (PVC) with a Secret.
Prerequisites
The kubeconfig file of the cluster is obtained and kubectl is used to connect to the cluster. For more information, see Obtain the kubeconfig file of a cluster and use kubectl to connect to the cluster.
An OSS bucket is created in the OSS console. For more information, see Create buckets.
Usage notes
The kubelet and the ossfs driver may be restarted when you upgrade the cluster. As a result, the mounted OSS directory becomes unavailable and you must recreate the pods to which the OSS volume is mounted. To automate this, you can add health check settings to the YAML files of the pods so that the pods are restarted and the OSS volume is remounted when the OSS directory becomes unavailable.
The preceding issue is fixed in the latest component version.
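For example, a liveness probe similar to the following can detect an unavailable mount and trigger a container restart. This is a minimal sketch that assumes the OSS volume is mounted at /data; the Deployment example later in this topic uses the same approach:
livenessProbe:
  exec:
    command:
    - sh
    - -c
    - cd /data    # Fails when the OSS mount at /data is unavailable, so the container is restarted.
  initialDelaySeconds: 30
  periodSeconds: 30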
Create a PV
Run the following command to create the Secret:
Replace <your AccessKey ID> and <your AccessKey Secret> in the following command with the actual AccessKey ID and AccessKey secret of your Alibaba Cloud account. To obtain the AccessKey pair of your Alibaba Cloud account, go to the ACK console, move the pointer over the profile icon, and click AccessKey.
kubectl create secret generic osssecret --from-literal=akId='<your AccessKey ID>' --from-literal=akSecret='<your AccessKey Secret>' --type=alicloud/oss -n default
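You can confirm that the Secret was created before you continue. A quick check, assuming the default namespace used in the preceding command:
kubectl get secret osssecret -n default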
osssecret: the name of the Secret. You can specify a custom name.
akId: the AccessKey ID.
akSecret: the AccessKey secret.
--type: the type of the Secret. In this example, the value is set to alicloud/oss.
Note: The Secret and the pod that uses the Secret must belong to the same namespace.
Use the following pv-oss.yaml file to create a PV:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-oss
  labels:
    alicloud-pvname: pv-oss
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  storageClassName: oss
  flexVolume:
    driver: "alicloud/oss"
    secretRef:
      name: "osssecret"    # Replace the value with the name of the Secret you created in the preceding step.
    options:
      bucket: "docker"     # Replace the value with the name of the OSS bucket.
      path: /path          # Replace the value with the relative path of the OSS subdirectory that you want to mount.
      url: "oss-cn-hangzhou.aliyuncs.com"    # Replace the value with the endpoint of the OSS bucket.
      otherOpts: "-o max_stat_cache_size=0 -o allow_other"    # Replace the value with custom parameter values.
Parameters
alicloud-pvname: the name of the PV. You can specify the PV name in the selector field of a PVC to bind the PV to the PVC.
bucket: the name of the OSS bucket.
path: the path relative to the root directory of the OSS bucket that you want to mount. Default value: /. This parameter is supported by FlexVolume 1.14.8.32-c77e277b-aliyun and later.
url: the endpoint of the OSS bucket. To obtain the endpoint, perform the following steps:
Log on to the OSS console.
In the left-side navigation pane, click Buckets. On the Buckets page, click the name of the bucket whose endpoint you want to obtain.
In the left-side navigation tree, click Overview.
In the Port section, view the endpoint of the bucket.
otherOpts: the custom parameters that are used to mount the OSS bucket. The parameters must be in the -o *** -o *** format.
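The flexVolume driver passes the otherOpts values through to ossfs when it mounts the bucket. Conceptually, the resulting mount resembles the following ossfs command. This is illustrative only: /mount/point is a placeholder, and the driver resolves the actual mount point and the credentials from the referenced Secret itself:
ossfs docker:/path /mount/point -ourl=oss-cn-hangzhou.aliyuncs.com -o max_stat_cache_size=0 -o allow_other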
Run the following command to create the PV:
kubectl create -f pv-oss.yaml
Expected output:
persistentvolume/pv-oss created
Log on to the ACK console. In the left-side navigation pane, click Clusters.
On the Clusters page, find the cluster that you want to manage and click its name. In the left-side pane, choose Volumes > Persistent Volumes.
On the Persistent Volumes page, you can find the newly created PV.
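You can also verify the PV from the command line. For example:
kubectl get pv pv-oss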
Create a PVC
Create a PVC for the OSS bucket. Configure the selector parameter of the PVC to select the PV you created. This way, the PV is automatically bound to the PVC after the PVC is created. Set the storageClassName parameter to specify that only a PV of the OSS type can be bound to the PVC.
Create a file named pvc-oss.yaml.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-oss
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: oss
  resources:
    requests:
      storage: 5Gi
  selector:
    matchLabels:
      alicloud-pvname: pv-oss
Run the following command to create a PVC:
kubectl create -f pvc-oss.yaml
Expected output:
persistentvolumeclaim/pvc-oss created
Log on to the ACK console. In the left-side navigation pane, click Clusters.
On the Clusters page, find the cluster that you want to manage and click its name. In the left-side pane, choose Volumes > Persistent Volume Claims.
On the Persistent Volume Claims page, you can find the newly created PVC.
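Alternatively, verify the binding from the command line. If the PVC is bound to the PV, the STATUS column in the output shows Bound:
kubectl get pvc pvc-oss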
Create an application
Create a file named oss-static.yaml.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: oss-static
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: anolis-registry.cn-zhangjiakou.cr.aliyuncs.com/openanolis/nginx:1.14.1-8.6
        ports:
        - containerPort: 80
        volumeMounts:
          - name: pvc-oss
            mountPath: "/data"
          - name: pvc-oss
            mountPath: "/data1"
        livenessProbe:
          exec:
            command:
            - sh
            - -c
            - cd /data
          initialDelaySeconds: 30
          periodSeconds: 30
      volumes:
      - name: pvc-oss
        persistentVolumeClaim:
          claimName: pvc-oss
Note: You can set the livenessProbe field to configure health check settings. For more information, see OSS volumes.
Run the following command to create a Deployment:
kubectl create -f oss-static.yaml
Expected output:
deployment.apps/oss-static created
Log on to the ACK console. In the left-side navigation pane, click Clusters.
On the Clusters page, find the cluster that you want to manage and click its name. In the left-side pane, choose Workloads > Deployments.
On the Deployments page, you can find the newly created Deployment.
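You can also check the status of the Deployment from the command line:
kubectl get deployment oss-static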
Verify data persistence
Run the following command to query the application pods:
kubectl get pod
Expected output:
NAME                          READY   STATUS    RESTARTS   AGE
oss-static-66fbb85b67-dqbl2   1/1     Running   0          1h
Run the following command to query files in the /data path of the pod:
kubectl exec oss-static-66fbb85b67-dqbl2 -- ls /data | grep tmpfile
Note: The command returns no output, which indicates that no tmpfile file exists in the /data path.
Run the following command to create a file named tmpfile in the /data path:
kubectl exec oss-static-66fbb85b67-dqbl2 -- touch /data/tmpfile
Run the following command to query files in the /data path of the pod:
kubectl exec oss-static-66fbb85b67-dqbl2 -- ls /data | grep tmpfile
Expected output:
tmpfile
Run the following command to delete the pod named oss-static-66fbb85b67-dqbl2:
kubectl delete pod oss-static-66fbb85b67-dqbl2
Expected output:
pod "oss-static-66fbb85b67-dqbl2" deleted
Open another terminal window and run the following command to view how the pod is deleted and recreated:
kubectl get pod -w -l app=nginx
Expected output:
NAME                          READY   STATUS              RESTARTS   AGE
oss-static-66fbb85b67-dqbl2   1/1     Running             0          78m
oss-static-66fbb85b67-dqbl2   1/1     Terminating         0          78m
oss-static-66fbb85b67-zlvmw   0/1     Pending             0          <invalid>
oss-static-66fbb85b67-zlvmw   0/1     Pending             0          <invalid>
oss-static-66fbb85b67-zlvmw   0/1     ContainerCreating   0          <invalid>
oss-static-66fbb85b67-dqbl2   0/1     Terminating         0          78m
oss-static-66fbb85b67-dqbl2   0/1     Terminating         0          78m
oss-static-66fbb85b67-dqbl2   0/1     Terminating         0          78m
oss-static-66fbb85b67-zlvmw   1/1     Running             0          <invalid>
Run the following command to query the name of the recreated pod:
kubectl get pod
Expected output:
NAME                          READY   STATUS    RESTARTS   AGE
oss-static-66fbb85b67-zlvmw   1/1     Running   0          40s
Run the following command to check whether the tmpfile file still exists in the /data path. If the file still exists, data is persisted in the OSS volume.
kubectl exec oss-static-66fbb85b67-zlvmw -- ls /data | grep tmpfile
Expected output:
tmpfile