This topic provides answers to some frequently asked questions about Object Storage Service (OSS) volumes.
How do I manage the permissions related to OSS volume mounting?
What do I do if I fail to mount a statically provisioned OSS volume?
What do I do if I fail to access a statically provisioned OSS volume?
What do I do if the read speed of a statically provisioned OSS volume is slow?
Why is 0 displayed for the size of a file in the OSS console after I write data to the file?
Why is an OSS directory displayed as a file after I mount the OSS directory to a container?
Why does it take a long time to mount an OSS volume?
Issue
Mounting an OSS volume takes a long time.
Cause
If both of the following conditions are met, kubelet recursively performs the chmod or chown operation on the volume when the volume is mounted, which increases the mount time.
The AccessModes parameter is set to ReadWriteOnce in the persistent volume (PV) and persistent volume claim (PVC) templates.
The securityContext.fsGroup parameter is set in the application template.
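For reference, the two conditions above correspond to fields such as the following in the PVC and application templates. This is a minimal sketch; the resource names, image, and fsGroup value are illustrative placeholders.

```yaml
# PVC template: ReadWriteOnce access mode (condition 1)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: oss-pvc              # illustrative name
spec:
  accessModes:
    - ReadWriteOnce          # together with fsGroup, triggers the recursive chmod/chown
  resources:
    requests:
      storage: 5Gi
---
# Application template: securityContext.fsGroup is set (condition 2)
apiVersion: v1
kind: Pod
metadata:
  name: oss-app              # illustrative name
spec:
  securityContext:
    fsGroup: 1000            # kubelet changes group ownership of all files in the volume
  containers:
    - name: app
      image: nginx           # illustrative image
      volumeMounts:
        - name: oss-volume
          mountPath: /data
  volumes:
    - name: oss-volume
      persistentVolumeClaim:
        claimName: oss-pvc
```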
Solution
If the securityContext.fsGroup parameter is set in the application template, delete the fsGroup parameter from the securityContext section.
If you want to configure the user ID (UID) and mode of the files in the mounted directory, you can manually mount the OSS bucket to an Elastic Compute Service (ECS) instance, run the chown and chmod commands in a CLI, and then provision the OSS volume through the CSI plug-in. For more information about how to provision OSS volumes by using the CSI plug-in, see Mount a statically provisioned OSS volume.
Apart from the preceding methods, for clusters that run Kubernetes 1.20 or later, you can set the fsGroupChangePolicy parameter to OnRootMismatch. This way, the chmod or chown operation is performed only when the system launches the pod for the first time. As a result, mounting the OSS volume takes a long time only during the first launch. The issue does not occur when you mount OSS volumes after the first launch is complete. For more information about fsGroupChangePolicy, see Set the security context for a pod or a container.
By default, the OSSFS PVC is read-only. We recommend that you do not perform write operations on the OSSFS PVC.
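The fsGroupChangePolicy setting described above can be sketched as follows. The pod name, image, and fsGroup value are illustrative placeholders.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: oss-app              # illustrative name
spec:
  securityContext:
    fsGroup: 1000            # illustrative group ID
    # Change ownership and permissions only when the root of the volume
    # does not already match the expected fsGroup, that is, only on the
    # first launch of the pod.
    fsGroupChangePolicy: "OnRootMismatch"
  containers:
    - name: app
      image: nginx           # illustrative image
      volumeMounts:
        - name: oss-volume
          mountPath: /data
  volumes:
    - name: oss-volume
      persistentVolumeClaim:
        claimName: oss-pvc   # illustrative claim name
```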
How do I manage the permissions related to OSS volume mounting?
By default, OSS volumes are mounted with the root permissions of Linux. If you want to modify the mount options for a statically provisioned OSS volume, add the otherOpts field to the PV, for example, otherOpts: "-o max_stat_cache_size=0 -o allow_other -o mp_umask=133". The following section describes the configurations:
Modify permission masks:
To set the permission value of the mount directory to 644, add -o mp_umask=133 to the otherOpts field (the 133 mask applied to 777 yields 644).
To set the permission value of the files in the mount directory to 644, add -o umask=133 to the otherOpts field.
Specify an owner for the files in the mount directory:
To specify a group for the files in the mount directory, add -o gid=XXX to the otherOpts field. Replace XXX with the group ID in the /etc/passwd file.
To specify a user for the files in the mount directory, add -o uid=XXX to the otherOpts field. Replace XXX with the user ID in the /etc/passwd file.
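Combined, a statically provisioned PV that applies the preceding options might look like the following sketch. The bucket name, endpoint, IDs, and all fields other than otherOpts are illustrative; verify the exact PV fields against Mount a statically provisioned OSS volume.

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: oss-pv               # illustrative name
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadOnlyMany
  csi:
    driver: ossplugin.csi.alibabacloud.com
    volumeHandle: oss-pv
    volumeAttributes:
      bucket: example-bucket               # illustrative bucket name
      url: oss-cn-hangzhou.aliyuncs.com    # illustrative endpoint
      # Directory and file permissions of 644, owned by UID/GID 1000:
      otherOpts: "-o mp_umask=133 -o umask=133 -o uid=1000 -o gid=1000"
```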
What do I do if I fail to mount a statically provisioned OSS volume?
Issue
You failed to mount a statically provisioned OSS volume. The pod cannot be started and a FailedMount event is generated.
Cause
Cause 1: The mount directory that you specified does not exist in the OSS bucket.
Cause 2: If the event contains the Failed to find executable /usr/local/bin/ossfs: No such file or directory message, the mount failed because OSSFS failed to be installed on the node.
Solution
Solution for Cause 1
Specify a valid mount directory and then mount the statically provisioned OSS volume again.
Log on to the OSS console.
Create an OSS bucket. For more information, see Create buckets.
Mount the statically provisioned OSS volume again. For more information, see Mount a statically provisioned OSS volume.
Solution for Cause 2
Run the following command to restart csi-plugin on the node and check whether the pod can be started. In the following command, csi-plugin-**** specifies the name of the csi-plugin pod:
kubectl -n kube-system delete pod csi-plugin-****
Log on to the node and run the following command:
ls /etc/csi-tool
Expected output:
... ossfs_<ossfsVer>_<ossfsArch>_x86_64.rpm ...
If the output contains an RPM package for OSSFS, run the following command to install the package and then check whether the pod can be started.
rpm -i /etc/csi-tool/ossfs_<ossfsVer>_<ossfsArch>_x86_64.rpm
If the output does not contain an RPM package for OSSFS, submit a ticket.
What do I do if I fail to access a statically provisioned OSS volume?
Issue
You failed to access a statically provisioned OSS volume.
Cause
You did not specify an AccessKey pair when you mounted the statically provisioned OSS volume.
Solution
Specify an AccessKey pair in the configurations of the statically provisioned OSS volume. For more information, see Mount a statically provisioned OSS volume.
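A common way to supply the AccessKey pair is through a Secret that the PV references. The following is a minimal sketch that assumes the akId and akSecret Secret keys read by the CSI plug-in; the names, endpoint, and bucket are placeholders, and the AccessKey values must be replaced with your own. Verify the exact fields against Mount a statically provisioned OSS volume.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: oss-secret
  namespace: default
stringData:
  akId: <your-AccessKey-ID>            # placeholder, do not commit real credentials
  akSecret: <your-AccessKey-secret>    # placeholder
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: oss-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadOnlyMany
  csi:
    driver: ossplugin.csi.alibabacloud.com
    volumeHandle: oss-pv
    nodePublishSecretRef:              # the CSI plug-in reads the AccessKey pair from this Secret
      name: oss-secret
      namespace: default
    volumeAttributes:
      bucket: example-bucket           # illustrative bucket name
      url: oss-cn-hangzhou.aliyuncs.com
```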
What do I do if the read speed of a statically provisioned OSS volume is slow?
Issue
The read speed of a statically provisioned OSS volume is slow.
Cause
The number of files that you can upload to an OSS bucket is unlimited. However, if an OSS bucket contains more than 1,000 files, OSSFS, which is based on Filesystem in Userspace (FUSE), must retrieve a large amount of metadata from the OSS server. As a result, the read speed of the OSS bucket is slow.
Solution
When you mount an OSS volume to a container, we recommend that you set the access mode of the OSS volume to read-only. If an OSS bucket stores a large number of files, we recommend that you use the OSS SDK or CLI to access the files in the bucket, instead of accessing the files by using a file system. For more information, see SDK demos overview.
Why is 0 displayed for the size of a file in the OSS console after I write data to the file?
Issue
After you write data to an OSS volume mounted to a container, the size of the file displayed in the OSS console is 0.
Cause
The OSS volume is mounted by using OSSFS, which is developed based on FUSE. In this case, the data in the file is uploaded to the OSS server only after the close or flush operation is performed on the file.
Solution
Run the lsof command with the name of the file to check whether the file is being used by processes. If the file is being used by processes, terminate the processes to release the file descriptor (FD) of the file. For more information about the lsof command, see lsof.
Why is an OSS directory displayed as a file after I mount the OSS directory to a container?
Issue
After you mount an OSS directory to a container, the directory is displayed as a file.
Cause
When the client retrieves file metadata from the OSS server, the x-oss-meta-mode field is missing, which causes the system to parse the OSS directory as a file.
Solution
When you mount the statically provisioned OSS volume, add the -o complement_stat field to the otherOpts section in the configurations of the PV.
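In a PV, the option described above can be added as in the following sketch. All fields other than otherOpts are illustrative placeholders.

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: oss-pv               # illustrative name
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadOnlyMany
  csi:
    driver: ossplugin.csi.alibabacloud.com
    volumeHandle: oss-pv
    volumeAttributes:
      bucket: example-bucket             # illustrative bucket name
      url: oss-cn-hangzhou.aliyuncs.com  # illustrative endpoint
      # complement_stat supplements the missing x-oss-meta-mode metadata
      # so that OSS directories are recognized as directories.
      otherOpts: "-o complement_stat"
```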