
Object Storage Service: FAQ

Last Updated: Mar 13, 2024

This topic provides answers to some commonly asked questions about using ossfs.

Overview

When ossfs returns an error, the error message can help you identify and resolve the issue. For example, you can enable the debug logging feature to troubleshoot socket connection failures or errors with HTTP status codes 4xx or 5xx (see the example command after the following list).

  • Errors with HTTP status code 403 occur when access is denied due to a lack of the required access permissions.

  • Errors with HTTP status code 400 occur due to incorrect requests.

  • Errors with HTTP status code 5xx occur due to network jitter or server errors.
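
For example, you can run the following command to enable debug logging when you mount a bucket. The -d -o f2 options allow ossfs to write debug logs to /var/log/message. The bucket name, mount directory, and endpoint are examples. Replace them with your actual values:

ossfs examplebucket /tmp/ossfs -o url=http://oss-cn-hangzhou.aliyuncs.com -d -o f2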

ossfs provides the following features:

  • ossfs mounts remote Object Storage Service (OSS) buckets to local disks. We recommend that you do not use ossfs to handle business applications that require high read and write performance.

  • ossfs operations are not atomic. This means that an operation may succeed locally but fail remotely on OSS.

If ossfs cannot meet your business requirements, you can use ossutil.

How do I resolve the "conflicts with file from package fuse-devel" message when I use yum/apt-get to install ossfs?

Analysis: This error occurs because an earlier version of fuse already exists in the system and conflicts with the fuse version that ossfs requires.

Solution: Use a package manager to uninstall fuse and reinstall ossfs.
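
For example, on CentOS, you can run the following commands to remove the conflicting fuse package and reinstall ossfs. This is a minimal sketch that assumes ossfs is available from your configured package repository. On Ubuntu, use apt-get instead:

sudo yum remove fuse
sudo yum install ossfs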

How do I resolve the "fuse: warning: library too old, some operations may not work" message when I install ossfs?

Analysis: This error usually occurs because you manually installed libfuse, and the libfuse version used to compile ossfs is later than the libfuse version that is linked to ossfs at runtime. The ossfs installation package provided by Alibaba Cloud contains libfuse 2.8.4. When you install ossfs in CentOS 5.x or CentOS 6.x, this error occurs if libfuse 2.8.3 already exists in the system and is linked to ossfs.

You can run the ldd $(which ossfs) | grep fuse command to check the fuse version that is linked to ossfs during runtime. If the command output is /lib64/libfuse.so.2, you can run the ls -l /lib64/libfuse* command to check the fuse version.

Solution: Link ossfs to the applicable fuse version.

  1. Run the rpm -ql ossfs | grep fuse command to find the directory of libfuse.

  2. If the command output is /usr/lib/libfuse.so.2, run the LD_LIBRARY_PATH=/usr/lib ossfs... command to run ossfs.
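
For example, if libfuse is located in /usr/lib, you can run the following commands. The bucket name, mount directory, and endpoint are examples:

rpm -ql ossfs | grep fuse
LD_LIBRARY_PATH=/usr/lib ossfs examplebucket /tmp/ossfs -o url=http://oss-cn-hangzhou.aliyuncs.com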

How do I fix the fuse version error that is returned when I install fuse?

Analysis: This error occurs because the version of fuse does not meet the requirements of ossfs.

Solution: Download and install the latest version of fuse. Do not use yum to install fuse. For more information, visit libfuse.

How do I resolve the "ossfs: unable to access MOUNTPOINT /tmp/ossfs: Transport endpoint is not connected" message for a mounting operation?

Analysis: This error occurs because the destination directory of the OSS bucket is not created.

Solution: Create the destination directory and then mount the bucket.
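
For example, you can run the following commands to create the destination directory and then mount the bucket. The bucket name, mount directory, and endpoint are examples:

mkdir -p /tmp/ossfs
ossfs examplebucket /tmp/ossfs -o url=http://oss-cn-hangzhou.aliyuncs.com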

How do I resolve the "fusermount: failed to open current directory: Permission denied" message returned for a mounting operation?

Analysis: This error occurs due to a bug in fuse that requires you to have the read permissions on the current directory instead of the destination directory of the OSS bucket.

Solution: Run the cd command to switch to a directory on which you have the read permissions, and use ossfs to mount the bucket.

How do I resolve the "ossfs: Mountpoint directory /tmp/ossfs is not empty. if you are sure this is safe, can use the 'nonempty' mount option" message?

Analysis: By default, ossfs can mount an OSS bucket only to an empty directory. This error occurs when ossfs attempts to mount a bucket to a directory that is not empty.

Solution: Switch to an empty directory and re-mount the bucket. If you still want the bucket to be mounted to the current directory, use the -ononempty option.
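
For example, you can run the following command to mount the bucket to a directory that is not empty. The bucket name, mount directory, and endpoint are examples:

ossfs examplebucket /tmp/ossfs -o url=http://oss-cn-hangzhou.aliyuncs.com -ononempty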

How do I resolve the "ops-nginx-12-32 s3fs[163588]: [tid-75593]curl.cpp:CurlProgress(532): timeout now: 1656407871, curl_times[curl]: 1656407810, readwrite_timeout: 60" message returned when I mount a bucket?

Analysis: The mount operation timed out.

Solution: You need to increase the value of the readwrite_timeout option based on your actual business requirements. ossfs uses this option to specify the timeout period in seconds for read or write requests. The default timeout period is 60 seconds.
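
For example, you can run the following command to increase the timeout period to 120 seconds. The bucket name, mount directory, and endpoint are examples:

ossfs examplebucket /tmp/ossfs -o url=http://oss-cn-hangzhou.aliyuncs.com -oreadwrite_timeout=120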

How do I resolve the "ossfs: credentials file /etc/passwd-ossfs should not have others permissions" message returned for a mount operation?

Analysis: The permissions on the /etc/passwd-ossfs file are incorrect.

Solution: The /etc/passwd-ossfs file contains access credentials. You need to deny others access to the file. To resolve this issue, modify the permissions on the file by running the chmod 640 /etc/passwd-ossfs command.
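
For example, you can run the following commands to write the access credentials and correct the file permissions. The bucket name and AccessKey pair are placeholders. Replace them with your actual values:

echo examplebucket:yourAccessKeyId:yourAccessKeySecret > /etc/passwd-ossfs
chmod 640 /etc/passwd-ossfs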

How do I resolve the "operation not permitted" message returned when I run the ls command to list objects in the directory after bucket mounting?

Analysis: The file system has strict limits on object names and directory names. This error occurs when the names of objects in the bucket contain invisible characters.

Solution: Rename the objects to remove the invisible characters and then run the ls command again. The objects in the directory are displayed.

How do I resolve the "Operation not permitted" message that is returned when I run the rm command to delete an object?

Analysis: When you use the rm command to delete an object, the DeleteObject operation is called to delete the object. If you mount a bucket by using a RAM user, check whether the RAM user has the permissions to delete the object.

Solution: Grant the RAM user the required permissions. For more information, see RAM policies and Common examples of RAM policies.

How do I resolve occasional disconnections from ossfs?

Analysis:

  1. Enable the debug logging feature and add the -d -o f2 parameter to allow ossfs to write logs to /var/log/message.

  2. Log analysis shows that ossfs requests a large amount of memory for the listbucket and listobject operations. This triggers an out of memory (OOM) error.

    Note

    The listobject operation sends an HTTP request to OSS to obtain object metadata. If you have a large number of objects, running the ls command requires a large amount of memory to obtain the object metadata.

Solutions:

  • Specify the -omax_stat_cache_size=xxx parameter to increase the size of the stat cache (see the example command after this list). The object metadata is stored in the local cache. Therefore, the first run of the ls command is slow, but subsequent runs of the command are fast. The default value of this parameter is 1000. The metadata of 1,000 objects consumes approximately 4 MB of memory. You can adjust the value based on the memory size of your machine.

  • During read and write operations, ossfs writes a large number of temporary cache files to the disk, similar to how NGINX writes temporary files. This may result in insufficient disk space. After ossfs exits, the temporary files are automatically cleared.

  • Use ossutil instead of ossfs. You can use ossfs for business applications that do not require high real-time performance. We recommend that you use ossutil for business applications that require high reliability and stability.
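
For example, you can run the following command to increase the stat cache size to 100000. The bucket name, mount directory, endpoint, and cache size are examples. Adjust the cache size based on the memory size of your machine:

ossfs examplebucket /tmp/ossfs -o url=http://oss-cn-hangzhou.aliyuncs.com -omax_stat_cache_size=100000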

How do I resolve the "The bucket you are attempting to access must be addressed using the specified endpoint" error message that is returned when I access a bucket?

Analysis: This error occurs because you are not using the correct endpoint to access the bucket. This error may occur in the following scenarios.

  • The bucket and endpoint do not match.

  • The UID of the bucket owner is different from that of the Alibaba Cloud account that corresponds to the AccessKey pair.

Solution: Check whether the configurations are correct and modify the configurations if necessary.

How do I resolve the "input/output error" message that is returned when I run the cp command copy data?

Analysis: This error occurs when system disk errors are captured. You can check whether heavy read and write loads exist on the disk.

Solution: Specify multipart parameters to control the speed of read and write operations on objects. You can run the ossfs -h command to view the multipart parameters.

How do I resolve the "input/output error" message that is returned during synchronization by using rsync?

Analysis: This error occurs when ossfs is used together with rsync. In this case, the cp command was run to copy a large object (141 GB), which causes heavy read and write loads on the disk.

Solution: Use ossutil to download OSS objects to a local Elastic Compute Service (ECS) instance or upload local files to OSS in multipart mode.
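
For example, you can run the following ossutil command to upload a local file to OSS. This is a minimal sketch that assumes ossutil is installed and configured; the local path, bucket name, and object name are examples. ossutil uploads large objects in multipart mode:

ossutil cp /localfolder/examplefile.txt oss://examplebucket/examplefile.txt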

How do I resolve the "There is no enough disk space for used as cache(or temporary)" message that is returned when ossfs is used to upload a large object?

  • Cause

    The disk space is less than the size that is specified by multiplying the values of multipart_size and parallel_count.

    multipart_size indicates the part size (default unit: MB). parallel_count indicates the number of parts that you want to upload in parallel (default value: 5).

  • Analysis

    By default, ossfs uploads large objects by using multipart upload and writes temporary cache files to the /tmp directory during the upload. Before ossfs writes a temporary cache file, it checks whether the available space of the disk where the /tmp directory is located is greater than the size obtained by multiplying the values of multipart_size and parallel_count. If the available space is greater than this size, the temporary cache file is written normally. Otherwise, ossfs reports that the available disk space is insufficient.

    For example, the available space of the disk is 300 GB and the size of the object that you want to upload is 200 GB, but multipart_size is set to 100000 (100 GB) and the number of parts that you want to upload in parallel is the default value of 5. In this case, ossfs determines that 500 GB (100 GB × 5) of temporary space is required, which is greater than the available space of the disk.

  • Solution:

    If the number of parts that you want to upload in parallel remains at the default value of 5, specify a valid value for multipart_size (see the example command after this list):

    • For example, if the available space of the disk is 300 GB and the size of the object that you want to upload is 200 GB, set multipart_size to 20.

    • For example, if the available space of the disk is 300 GB and the size of the object that you want to upload is 500 GB, set multipart_size to 50.
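
    For example, you can run the following command to set multipart_size to 20 (20 MB per part). The bucket name, mount directory, and endpoint are examples:

    ossfs examplebucket /tmp/ossfs -o url=http://oss-cn-hangzhou.aliyuncs.com -omultipart_size=20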

How do I resolve a 403 error that is returned when the touch command is run on an object in the mounted bucket?

Analysis: This error usually occurs when the operation is unauthorized. This error may occur in the following scenarios:

  • The storage class of the object is Archive.

  • The AccessKey pair used does not have the required permissions to manage the bucket.

Solutions:

  • Restore the Archive object or enable real-time access for Archive objects in the bucket.

  • Grant the required permissions to the Alibaba Cloud account that uses the AccessKey pair.

What do I do if the value of the Content-Type parameter of the objects that are uploaded to OSS by using ossfs is application/octet-stream?

Analysis: When you upload an object, ossfs queries the /etc/mime.types file to specify the Content-Type parameter for the object. If the mime.types file does not exist, the Content-Type parameter is set to application/octet-stream.

Solution: Check whether the mime.types file exists. If the file does not exist, add one.

  • Automatically add the mime.types file

    • Ubuntu

      Run the sudo apt-get install mime-support command to add the file.

    • CentOS

      Run the sudo yum install mailcap command to add the file.

  • Manually add the mime.types file

    1. Create the mime.types file.

      vi /etc/mime.types
    2. Add the desired types in the format of application/javascript js (MIME type followed by the file name extension). Each line holds one type.

After the mime.types file is added, mount the bucket again.

Why does the ls command run very slowly when the directory contains a large number of objects?

Analysis: If a directory contains N objects, OSS HTTP requests must be initiated N times for running the ls command to list the N objects in the directory. This can cause serious performance issues when the number of objects is large.

Solution: Increase the stat cache size by using -omax_stat_cache_size=xxx. This way, the first run of the ls command is slow, but subsequent runs of the command are fast because metadata is in the local cache. The default value of this parameter is 1000. The metadata of 1,000 objects consumes approximately 4 MB of memory. You can adjust the value based on the memory size of your machine.

Why is information such as the size of an object displayed by ossfs different from the information displayed by other tools?

Analysis: By default, ossfs caches object metadata items such as the size and access control list (ACL). Metadata caching accelerates object access by eliminating the need to send a request to OSS every time the ls command is run. However, if you modify the object metadata by using tools such as OSS SDKs, the OSS console, or ossutil, the changes are not synchronized to ossfs due to metadata caching. As a result, the metadata that you see in ossfs is different from the metadata that you see by using other tools.

Solution: Specify the -omax_stat_cache_size=0 parameter to disable the metadata caching feature. In this case, when the ls command is run, a request is sent to OSS to obtain the latest object metadata each time.

How do I avoid the request fees that are incurred when background programs scan the directory on which ossfs mounts an OSS bucket on an ECS instance?

Analysis: When a program scans a directory on which ossfs mounts a bucket, a request is sent to OSS. If a large number of requests are sent, you are charged for the requests.

Solution: Use the auditd tool to check for programs that scan the directory on which ossfs mounts the bucket. To do so, perform the following steps:

  1. Install and start auditd.

    sudo apt-get install auditd
    sudo service auditd start
  2. Set the directory on which ossfs mounts the bucket to the directory to monitor. For example, run the following command to monitor the /mnt/ossfs directory.

    auditctl -w /mnt/ossfs
  3. Check the audit log to view programs that accessed the directory.

    ausearch -i | grep /mnt/ossfs
  4. Set parameters to skip scheduled scans.

    For example, if the updatedb program scanned the directory, you can use /etc/updatedb.conf to skip scans by the program. To do so, perform the following steps:

    1. Append fuse.ossfs to PRUNEFS.

    2. Append the directory name (for example, /mnt/ossfs) to PRUNEPATHS.
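
    For example, after the changes, the relevant lines in /etc/updatedb.conf may look like the following. The existing values in your file may differ; keep them and only append the new entries:

    PRUNEFS = "NFS nfs nfs4 afs binfmt_misc proc smbfs autofs iso9660 ncpfs coda devpts ftpfs devfs fuse.ossfs"
    PRUNEPATHS = "/tmp /var/spool /media /mnt/ossfs"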

What do I do if ossfs fails to perform the mv operation on an object?

Cause: The source object may be in one of the following storage classes: Archive, Cold Archive, or Deep Cold Archive.

Solution: Before you perform the mv operation on an Archive, Cold Archive, or Deep Cold Archive object, restore the object first. For more information, see Restore objects.
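
For example, you can run the following ossutil command to restore an Archive object before you move it. This is a minimal sketch that assumes ossutil is installed and configured; the bucket name and object name are examples:

ossutil restore oss://examplebucket/exampleobject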

Why does ossfs require a long period of time to mount a versioning-enabled bucket?

Cause: By default, ossfs lists objects by calling the ListObjects (GetBucket) operation. If versioning is enabled for a bucket, and the bucket contains one or more previous versions of objects and a large number of expired delete markers, the response speed decreases when you call the ListObjects (GetBucket) operation to list current object versions. In this case, ossfs requires a long period of time to mount a bucket.

Solution: Use the -olistobjectsV2 option to switch ossfs to the ListObjectsV2(GetBucketV2) operation for better object listing performance.
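
For example, you can run the following command to mount a versioning-enabled bucket by using the ListObjectsV2 operation. The bucket name, mount directory, and endpoint are examples:

ossfs examplebucket /tmp/ossfs -o url=http://oss-cn-hangzhou.aliyuncs.com -olistobjectsV2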

How do I mount a bucket over HTTPS by using ossfs?

ossfs allows you to mount a bucket over HTTPS. In this example, the China (Hangzhou) region is used. You can run the following command to mount the bucket:

ossfs examplebucket /tmp/ossfs -o url=https://oss-cn-hangzhou.aliyuncs.com

Why does ossfs occupy the full storage capacity of a disk?

Cause: By default, ossfs uses the disk to save temporary data during uploads and downloads to improve performance. As a result, the storage capacity of the disk may be exhausted.

Solution: You can use the -oensure_diskfree option to specify the reserved storage capacity of a disk. For example, if you want to specify a reserved storage capacity of 20 GB, run the following command:

ossfs examplebucket /tmp/ossfs -o url=http://oss-cn-hangzhou.aliyuncs.com -oensure_diskfree=20480

Why does the storage capacity of a disk change to 256 TB when the df command is run after ossfs mounts a bucket?

The storage capacity of a disk that is displayed when the df command is run does not indicate the actual storage capacity of the OSS bucket. Size (the total storage capacity of a disk) and Avail (the available storage capacity of a disk) are fixed at 256 TB, and Used (the used storage capacity of a disk) is fixed at 0 TB.

The storage capacity of an OSS bucket is unlimited. The used storage capacity varies based on your actual storage usage. For more information about bucket usage, see View the resource usage of a bucket.

Can I mount a bucket on Windows by using ossfs?

No, you cannot mount a bucket on Windows by using ossfs. You can use Rclone to mount a bucket on Windows. For more information, see Rclone.

How do I resolve the "fusermount: failed to unmount /mnt/ossfs-bucket: Device or resource busy" error message?

Analysis: A process is accessing objects in the /mnt/ossfs-bucket directory, so the bucket cannot be unmounted.

Solutions:

  1. Use lsof /mnt/ossfs-bucket to find the process that is accessing the directory.

  2. Run the kill command to stop the process.

  3. Use fusermount -u /mnt/ossfs-bucket to unmount the bucket.
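
For example, you can run the following commands. Replace <PID> with the process ID that is returned by lsof:

lsof /mnt/ossfs-bucket
kill <PID>
fusermount -u /mnt/ossfs-bucket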

How do I resolve the "fuse: device not found, try 'modprobe fuse'" error message?

Analysis: When you use ossfs to perform a mount operation in Docker, the "fuse: device not found, try 'modprobe fuse'" error message commonly occurs because the Docker container does not have the required access permissions or the permission to load the fuse kernel module.

Solution: When you use ossfs in a Docker container, specify the --privileged=true parameter to run the Docker container in privileged mode, so that processes in the container have capabilities that the host has, such as using the FUSE file system. The following sample command provides an example on how to run a Docker container with the --privileged flag:

docker run --privileged=true -d your_image

How do I resolve the input/output error that is returned when I use the ls command to list objects?

Cause: This error mainly occurs in CentOS, with NSS error -8023 in the error log. The error indicates a communication problem when ossfs uses libcurl to communicate over HTTPS. The problem may be caused by an outdated version of the Network Security Services (NSS) library that libcurl relies on.

Solution: Run the following command to upgrade the NSS library:

yum update nss