This topic provides answers to frequently asked questions about issues that may occur when you use ossfs and describes how to resolve these issues.

Introduction

Each ossfs error includes a message. Collect these messages to identify and resolve issues at the earliest opportunity. For example, if socket connection failures or errors with HTTP status codes 4xx and 5xx occur, enable the debug logging feature before you troubleshoot the issues.

  • Errors with HTTP status code 403 occur when access is denied because the request is not authorized.
  • Errors with HTTP status code 400 occur due to invalid operations.
  • Errors with HTTP status code 5xx occur due to network jitter or service-side issues.

ossfs has the following characteristics:

  • ossfs mounts remote Object Storage Service (OSS) buckets to local directories. We recommend that you do not use ossfs for services that require high read and write performance.
  • ossfs operations are not atomic. This means that a local operation may succeed while the corresponding remote operation on OSS fails.

If ossfs cannot meet your business requirements, you can use ossutil.

The "conflicts with file from package fuse-devel" message is returned when you use yum/apt-get to install ossfs.

Analysis: This error occurs because an earlier version of fuse exists in the system and conflicts with the dependent version of ossfs.

Solution: Use a package manager to uninstall fuse and reinstall ossfs.

The "fuse: warning: library too old, some operations may not work" message is returned when ossfs is installed.

Analysis: This error usually occurs because you manually installed libfuse, and the libfuse version used when you compiled ossfs is later than the version that is linked to ossfs during runtime. The ossfs installation package provided by Alibaba Cloud contains libfuse 2.8.4. When you install ossfs in CentOS 5.x or CentOS 6.x, this error occurs if libfuse 2.8.3 already exists in the system and is linked to ossfs.

You can run the ldd $(which ossfs) | grep fuse command to check the fuse version that is linked to ossfs during runtime. If the command output is /lib64/libfuse.so.2, you can run the ls -l /lib64/libfuse* command to check the fuse version.

Solution: Link ossfs to the fuse version provided in the ossfs installation package.
  1. Run the rpm -ql ossfs | grep fuse command to find the directory of libfuse.
  2. If the command output is /usr/lib/libfuse.so.2, run the LD_LIBRARY_PATH=/usr/lib ossfs... command to run ossfs.
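The steps above can be sketched as a shell session. The bucket name, endpoint, and library path below are placeholders, and the block does nothing when ossfs is not installed:

```shell
# Check which libfuse ossfs loads at runtime, then prefer the packaged one.
# Bucket name, endpoint, and paths are examples only.
if command -v ossfs >/dev/null 2>&1; then
  ldd "$(command -v ossfs)" | grep fuse     # fuse library linked at runtime
  rpm -ql ossfs | grep fuse                 # fuse library shipped in the package
  # If the package ships /usr/lib/libfuse.so.2, load it first when starting ossfs:
  # LD_LIBRARY_PATH=/usr/lib ossfs my-bucket /mnt/ossfs -ourl=http://oss-cn-hangzhou.aliyuncs.com
  status="checked"
else
  status="ossfs not installed"
fi
echo "$status"
```

Setting LD_LIBRARY_PATH only for the ossfs invocation avoids changing the library search path for other programs on the system.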

An error occurs when the fuse dependency library is installed.

Analysis: This error occurs because the version of fuse does not meet the requirements of ossfs.

Solution: Download and install the latest version of fuse. Do not use yum to install fuse. For more information, visit libfuse.

The "ossfs: unable to access MOUNTPOINT /tmp/ossfs: Transport endpoint is not connected" message is returned while ossfs mounts an OSS bucket.

Analysis: This error occurs because the destination directory of the OSS bucket is not created.

Solution: Create the destination directory and then mount the bucket.
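A minimal sketch of the fix, assuming a hypothetical bucket named my-bucket and the /tmp/ossfs mount path; the mount command runs only if ossfs is installed and requires credentials to succeed:

```shell
# Create the mount directory before mounting; bucket name and endpoint
# are placeholders.
mountpoint_dir=/tmp/ossfs
mkdir -p "$mountpoint_dir"
if command -v ossfs >/dev/null 2>&1; then
  # Requires credentials, for example in /etc/passwd-ossfs.
  ossfs my-bucket "$mountpoint_dir" -ourl=http://oss-cn-hangzhou.aliyuncs.com || true
fi
```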

The "fusermount: failed to open current directory: Permission denied" message is returned while ossfs mounts an OSS bucket.

Analysis: This error occurs due to a bug in fuse, which requires you to have read permissions on the current working directory rather than on the destination directory of the OSS bucket.

Solution: Run the cd command to switch to a directory on which you have the read permissions, and then use ossfs to mount the bucket.

The "ossfs: Mountpoint directory /tmp/ossfs is not empty. if you are sure this is safe, can use the 'nonempty' mount option" message is returned while ossfs mounts an OSS bucket.

Analysis: By default, ossfs can mount an OSS bucket only to an empty directory. This error occurs when ossfs attempts to mount a bucket to a directory that is not empty.

Solution: Switch to an empty directory and re-mount the bucket. If you still want the bucket to be mounted to the current directory, use the -ononempty option.
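The -ononempty case can be sketched as follows; the bucket name and endpoint are placeholders, and the mount runs only if ossfs is installed:

```shell
# Mounting into a non-empty directory requires -ononempty.
mp=/tmp/ossfs-nonempty
mkdir -p "$mp"
touch "$mp/existing-file"     # the mount point is deliberately non-empty
if command -v ossfs >/dev/null 2>&1; then
  ossfs my-bucket "$mp" -ourl=http://oss-cn-hangzhou.aliyuncs.com -ononempty || true
fi
```

Note that files already present in the mount point are hidden by the mount and reappear after the bucket is unmounted.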

The "ops-nginx-12-32 s3fs[163588]: [tid-75593]curl.cpp:CurlProgress(532): timeout now: 1656407871, curl_times[curl]: 1656407810, readwrite_timeout: 60" message is returned while ossfs mounts an OSS bucket.

Analysis: The mount of the OSS bucket times out.

Solution: ossfs uses the readwrite_timeout option to specify the timeout period for read and write requests, in seconds. The default value is 60. Increase the value based on your business scenario.
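For example, a mount command with a longer timeout might look like the following; the value of 180 seconds and the bucket/endpoint are examples only, and the mount runs only if ossfs is installed:

```shell
# Raise readwrite_timeout from the default of 60 seconds.
timeout_s=180
if command -v ossfs >/dev/null 2>&1; then
  ossfs my-bucket /mnt/ossfs -ourl=http://oss-cn-hangzhou.aliyuncs.com \
    -oreadwrite_timeout="$timeout_s" || true
fi
echo "readwrite_timeout=$timeout_s"
```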

The "operation not permitted" message is returned when the ls command is run to list objects in the directory after a bucket is mounted.

Analysis: The file system has strict limitations on object names and directory names. This error occurs when the names of objects in your buckets contain non-printable characters.

Solution: Use a tool to rename the objects, and then run the ls command. The objects in the directory are displayed.

ossfs is disconnected occasionally.

Analysis:
  1. Collect ossfs debug logs by specifying the -d -o f2 parameters. The ossfs logs are written to the /var/log/messages file of the system.
  2. Log analysis shows that ossfs requests a large amount of memory for the listbucket and listobject operations, which triggers an out-of-memory (OOM) error.
    Note The listobject operation sends an HTTP request to OSS to obtain object metadata. If you have a large number of objects, running the ls command requires a large amount of memory to obtain the object metadata.
Solution:
  • Specify the -omax_stat_cache_size=xxx parameter to increase the size of stat cache. The object metadata is stored in the local cache. Therefore, the first running of the ls command is slow, but subsequent running of the command is fast. The default value of this parameter is 1000. The metadata of 1,000 objects consumes approximately 4 MB of memory. You can adjust the value based on the memory size of your machine.
  • ossfs writes a large number of temporary cache files during read and write operations, which is similar to NGINX behavior. This may result in insufficient disk space. To resolve this issue, clear the cache space regularly.
  • Use ossutil instead of ossfs. You can use ossfs for services that do not require high real-time performance. We recommend that you use ossutil for services that require high reliability and stability.
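The memory estimate in the first solution can be sketched with shell arithmetic; the cache size of 100,000 entries and the mount details are examples only:

```shell
# Estimate stat cache memory: about 4 MB per 1,000 cached entries,
# per the note above.
cache_entries=100000
approx_mb=$((cache_entries / 1000 * 4))
echo "max_stat_cache_size=$cache_entries uses about ${approx_mb} MB"
if command -v ossfs >/dev/null 2>&1; then
  ossfs my-bucket /mnt/ossfs -ourl=http://oss-cn-hangzhou.aliyuncs.com \
    -omax_stat_cache_size="$cache_entries" || true
fi
```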

The "The bucket you are attempting to access must be addressed using the specified endpoint" message is returned.

Analysis: This error occurs because you are not using the correct endpoint to access the bucket. This error may occur in the following scenarios.
  • The bucket and endpoint do not match.
  • The UID of the bucket owner is different from that of the Alibaba Cloud account that corresponds to the AccessKey pair.

Solution: Check whether the configurations are correct and modify the configurations if necessary.

The "input/output error" message is returned when the cp command is used to copy data.

Analysis: This error occurs when errors on the underlying disk are captured. Check whether heavy read and write loads exist on the disk.

Solution: Specify multipart parameters to control the speed of read and write operations on objects. You can run the ossfs -h command to view the multipart parameters.

The "input/output error" message is returned during synchronization by using rsync.

Analysis: This error occurs when ossfs is used with rsync. In this case, the cp command is run to copy a large object of 141 GB in size. This causes heavy read and write loads on the disk.

Solution: Use ossutil to download OSS objects to a local Elastic Compute Service (ECS) instance or upload objects from a local device to an ECS instance in multipart mode.

The "There is no enough disk space for used as cache(or temporary)" message is returned when ossfs is used to upload a large object.

  • Cause

    The disk space is less than multipart_size * parallel_count.

    multipart_size indicates the part size (default unit: MB). parallel_count indicates the number of parts that you want to upload in parallel (default value: 5).

  • Analysis

    By default, ossfs uploads large objects by using multipart upload and writes temporary cache files to the /tmp directory during the upload. Before ossfs writes a temporary cache file, it checks whether the available space of the disk where the /tmp directory is located is greater than multipart_size * parallel_count. If the available space is sufficient, the temporary cache file is written normally. Otherwise, the "There is no enough disk space for used as cache(or temporary)" message is returned.

    For example, the available space of the disk is 300 GB and the size of the object that you want to upload is 200 GB, but multipart_size is set to 100000 (100 GB) and the number of parts that you want to upload in parallel is set to 5 (default value). In this case, ossfs determines that the size of the object that you want to upload is 500 GB (100 GB × 5). The size is greater than the available space of the disk.

  • Solution

    If the number of parts that you want to upload in parallel remains at the default value of 5, specify a valid value for multipart_size:

    • For example, if the available space of the disk is 300 GB and the size of the object that you want to upload is 200 GB, set multipart_size to 20.
    • For example, if the available space of the disk is 300 GB and the size of the object that you want to upload is 500 GB, set multipart_size to 50.
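The sizing rule above can be checked with simple shell arithmetic; the numbers mirror the failing example (sizes in MB, decimal GB for readability):

```shell
# ossfs needs multipart_size (MB) x parallel_count of free temp space.
multipart_size_mb=100000   # 100 GB parts, as in the failing example
parallel_count=5           # default number of parallel parts
available_gb=300
required_gb=$((multipart_size_mb * parallel_count / 1000))
echo "required: ${required_gb} GB, available: ${available_gb} GB"
if [ "$required_gb" -gt "$available_gb" ]; then
  echo "upload fails: not enough disk space for the cache"
fi
```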

An error with HTTP status code 403 occurs when the touch command is run on an object in the mounted bucket.

Analysis: This error usually occurs when the operation is unauthorized. This error may occur in the following scenarios:
  • The object is of the Archive storage class.
  • You do not have the permissions to manage the bucket by using your AccessKey pair.
Solution:
  • Restore the Archive object to access it.
  • Grant the required permissions to the Alibaba Cloud account that uses the AccessKey pair.

The Content-Type of objects that are uploaded to OSS by using ossfs is application/octet-stream.

Analysis: When you upload an object, ossfs queries the /etc/mime.types file to specify the Content-Type parameter for the object. If the mime.types file does not exist, the Content-Type parameter is set to application/octet-stream.

Solution: Check whether the mime.types file exists. If the file does not exist, add it.
  • Add the mime.types file by running commands
    • Ubuntu

      Run the sudo apt-get install mime-support command to add the file.

    • CentOS

      Run the sudo yum install mailcap command to add the file.

  • Manually add the mime.types file
    1. Create the mime.types file.
      vi /etc/mime.types
    2. Add the desired type in the application/javascript js format, with one line per type.
After the mime.types file is added, mount the OSS bucket again.
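For reference, entries in the mime.types file map a MIME type to one or more filename extensions, one type per line; the lines below are common examples of the format described above:

```
application/javascript    js
text/css                  css
image/png                 png
```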

Why is running the ls command very slow when the directory contains many objects?

Analysis: If a directory contains N objects, OSS HTTP requests must be initiated N times for the ls command to be run to list the N objects in the directory. This can cause serious performance issues when the number of objects is large.

Solution: Specify the -omax_stat_cache_size=xxx parameter to increase the size of stat cache. In this case, the first running of the ls command will be slow, but subsequent running of the command will be fast because metadata of the objects is stored in the local cache. The default value of this parameter is 1,000. The metadata of 1,000 objects consumes approximately 4 MB of memory. You can adjust the value based on the memory size of your machine.

Why is information such as the size of an object displayed by ossfs different from that displayed by other tools?

Analysis: When ossfs mounts a bucket, it caches object metadata such as the size and access control list (ACL) information of the objects in the bucket. This speeds up commands such as ls because ossfs does not need to send a request to OSS each time the command is run. However, if you modify an object by using SDKs, the OSS console, or ossutil, and ossfs does not obtain the updated metadata in time, the displayed information becomes inconsistent.

Solution: Specify the -omax_stat_cache_size=0 parameter to disable the metadata caching feature. In this case, when the ls command is run, a request is sent to OSS to obtain the latest object metadata each time.

How do I avoid the request fees incurred when background programs scan directories to which ossfs mounts OSS buckets on ECS instances?

Analysis: When a program scans a directory in which ossfs mounts a bucket, a request is sent to OSS. If many requests are sent, you are charged for the requests.

Solution: Use the auditd tool to check which program scans the directory in which ossfs mounts the bucket. Perform the following steps:
  1. Install and start auditd.
    sudo apt-get install auditd
    sudo service auditd start
  2. Monitor the directory to which ossfs mounts the bucket. For example, run the following command to monitor the /mnt/ossfs directory.
    auditctl -w /mnt/ossfs
  3. Check the audit log to view the programs that accessed the directory.
    ausearch -i | grep /mnt/ossfs
  4. Set parameters to skip scheduled program scans.
    For example, if you find that the updatedb program scans the directory, you can modify the /etc/updatedb.conf configuration file to skip scanning by this program. Perform the following steps:
    1. Add fuse.ossfs to the PRUNEFS= setting.
    2. Add the mount directory to the PRUNEPATHS= setting.
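Assuming the bucket is mounted at /mnt/ossfs, the resulting entries in /etc/updatedb.conf would look like the following (the path is an example; append to any values that already exist rather than replacing them):

```
PRUNEFS="fuse.ossfs"
PRUNEPATHS="/mnt/ossfs"
```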