This topic describes problems that you may encounter when you use ossfs and provides solutions to these problems.

ossfs reports errors with messages. You must collect these messages to determine which problems have occurred and how to troubleshoot them. For example, you may encounter socket connection failures or errors with HTTP status codes 4xx and 5xx. Enable the debug logging feature before you start troubleshooting.
  • Errors with HTTP status code 403 occur when access is denied because the user is unauthorized.
  • Errors with HTTP status code 400 occur due to incorrect operations.
  • Errors with HTTP status code 5xx occur due to network jitter or issues on the client side.
Note If ossfs cannot meet your business requirements, you can use ossutil.
  • ossfs mounts remote OSS buckets to local directories. We recommend that you do not use ossfs for services that require high read and write performance.
  • ossfs operations are not atomic, so there is a risk that local operations may succeed while OSS remote operations fail.

Case: The "conflicts with file from package fuse-devel" message is returned when you use yum/apt-get to install ossfs.

Analysis: This error occurs because an earlier version of fuse exists in the system and conflicts with the fuse version on which ossfs depends.

Solution: Use a package manager to uninstall fuse and then reinstall ossfs.

Case: The "fuse: warning: library too old, some operations may not work" message is returned during the installation of ossfs.

Analysis: This error usually occurs when you install libfuse on your own and the libfuse version that is used to compile ossfs is later than the version that is linked to ossfs at runtime. The ossfs installation package provided by Alibaba Cloud contains libfuse 2.8.4. When you install ossfs on CentOS 5.x or CentOS 6.x, this error occurs if libfuse 2.8.3 already exists in the system and is linked to ossfs.

You can run the ldd $(which ossfs) | grep fuse command to check the fuse version that is linked to ossfs at runtime. If the command output contains /lib64/libfuse.so.2, run the ls -l /lib64/libfuse* command to check the fuse version.

Solution: Link ossfs to the fuse version provided in the ossfs installation package.
  1. Run the rpm -ql ossfs | grep fuse command to locate the directory of libfuse.
  2. If the command output is /usr/lib/libfuse.so.2, use the LD_LIBRARY_PATH=/usr/lib ossfs... command to run ossfs.
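The following commands show how the check and the workaround fit together. This is only a sketch: the /usr/lib path comes from the rpm -ql output in the preceding step, and the bucket name, mount directory, and endpoint are placeholders that you must replace with your own values (your AccessKey pair must already be configured in /etc/passwd-ossfs).
  # Check the libfuse version that is linked to ossfs at runtime.
  ldd $(which ossfs) | grep fuse
  # Locate the libfuse library that is shipped with the ossfs package.
  rpm -ql ossfs | grep fuse
  # Run ossfs with the packaged libfuse. The path is taken from the output of the previous command.
  LD_LIBRARY_PATH=/usr/lib ossfs examplebucket /mnt/ossfs -ourl=http://oss-cn-hangzhou.aliyuncs.com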

Case: An error occurs during the installation of the fuse dependency library.

Analysis: This error occurs because the version of fuse does not meet the requirements of ossfs.

Solution: Download and install the latest version of fuse. Do not use yum to install fuse. For more information, visit libfuse.

Case: The "oss: unable to access MOUNTPOINT /tmp/ossfs: Transport endpoint is not connected" message is returned while ossfs mounts an OSS bucket.

Analysis: This error occurs because the directory to which the OSS bucket is to be mounted does not exist.

Solution: Create the destination directory, and then mount the bucket.
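A minimal example is shown below. The directory path, bucket name, and endpoint are placeholders used for illustration; replace them with your own values and make sure that your AccessKey pair is configured in /etc/passwd-ossfs.
  # Create the mount directory, and then mount the bucket to it.
  mkdir -p /tmp/ossfs
  ossfs examplebucket /tmp/ossfs -ourl=http://oss-cn-hangzhou.aliyuncs.com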

Case: The "fusermount: failed to open current directory: Permission denied" message is returned while ossfs mounts an OSS bucket.

Analysis: This error occurs due to a bug in fuse that requires you to have read permissions on the current directory, not on the destination directory of the OSS bucket.

Solution: Run the cd command to switch to a directory on which you have read permissions, and then use ossfs to mount the bucket.

Case: The "ossfs: MOUNTPOINT directory /tmp/ossfs is not empty. if you are sure this is safe, can use the 'nonempty' mount option" message is returned while ossfs mounts an OSS bucket.

Analysis: By default, ossfs can only mount an OSS bucket to an empty directory. This error occurs when ossfs attempts to mount a bucket to a directory that is not empty.

Solution: Switch to an empty directory and mount the bucket again. If you still want to mount the bucket to the current directory, specify the -ononempty option.
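For example, the following command mounts a bucket to a non-empty directory. The bucket name, directory, and endpoint are placeholders; make sure that the existing content of the directory can be safely hidden by the mount.
  # Mount the bucket to a non-empty directory.
  ossfs examplebucket /tmp/ossfs -ourl=http://oss-cn-hangzhou.aliyuncs.com -ononempty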

Case: The "operation not permitted" message is returned when the ls command is run to list objects in the directory after a bucket is mounted.

Analysis: The file system has strict limitations on object names and directory names. This error occurs when the names of objects in your buckets contain non-printable characters.

Solution: Use a tool to rename the objects, and then run the ls command again to display the objects in the directory.

Case: ossfs is disconnected occasionally.

Analysis:
  1. ossfs debug logs are collected by using the -d -o f2 options. ossfs logs are written to the /var/log/messages file of the system.
  2. Log analysis shows that ossfs requests too much memory during the listbucket and listobject operations, which triggers an out of memory (OOM) error.
    Note The listobject operation sends an HTTP request to OSS to obtain object metadata. If you have a large number of objects, running the ls command requires a large amount of memory to obtain the object metadata.
Solution:
  • Specify the -omax_stat_cache_size=xxx parameter to increase the size of the stat cache. In this case, the first run of the ls command is slow, but subsequent runs are fast because the object metadata is stored in the local cache. The default value of this parameter is 1000, and the metadata of 1,000 objects consumes about 4 MB of memory. You can adjust the value based on the memory size of your machine. For an example mount command, see the sketch after this list.
  • During read and write operations, ossfs writes a large number of files to its temporary cache, in a similar way to NGINX. This may result in insufficient disk space. To solve this problem, regularly clear the cache space.
  • Use ossutil instead of ossfs where possible. ossfs is suitable for services that do not require high real-time performance. We recommend that you use ossutil for services that require high reliability and stability.
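The following mount command illustrates the first item in the preceding list. It is only a sketch: the bucket name, mount directory, and endpoint are placeholders, and the cache size of 100000 is an example value that you must adjust based on the memory size of your machine.
  # Cache the metadata of up to 100,000 objects (about 400 MB of memory) to speed up the ls command.
  ossfs examplebucket /mnt/ossfs -ourl=http://oss-cn-hangzhou.aliyuncs.com -omax_stat_cache_size=100000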

Case: The "THE bucket you are attempting to access must be addressed using the specified endpoint" message is returned when you access a bucket.

Analysis: This error occurs because the endpoint that you use to access the bucket is incorrect. This error may occur in the following scenarios:
  • The bucket and endpoint do not match.
  • The UID of the bucket owner is different from that of the Alibaba Cloud account that corresponds to the AccessKey pair.

Solution: Check whether the endpoint matches the region of the bucket and whether the AccessKey pair belongs to the owner of the bucket. Modify the configurations if necessary.

Case: The "input/output error" message is returned when the cp command is used to copy data.

Analysis: This error occurs when the system captures disk errors. Check whether there are heavy read and write loads on the disk when this error occurs.

Solution: Specify multipart parameters to control the speed of read and write operations on objects. You can run the ossfs -h command to view the multipart parameters.

Case: The "input/output error" message is returned during synchronization by using rsync.

Analysis: This error occurs when ossfs is used together with rsync. In this case, the cp command was used to copy a large object of 141 GB, which caused very heavy read and write loads on the disk.

Solution: Use ossutil to download OSS objects to the local ECS instance or to upload local files from the ECS instance to OSS in multipart mode.
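For example, ossutil can split a large transfer into parts. The bucket name, object name, local path, part size, and concurrency below are placeholder values, and this sketch assumes an ossutil version that supports the --part-size (in bytes) and --parallel options.
  # Download a large OSS object to the ECS instance in multipart mode.
  ossutil cp oss://examplebucket/exampleobject.dat /data/exampleobject.dat --part-size 104857600 --parallel 8
  # Upload a large local file to OSS in multipart mode.
  ossutil cp /data/exampleobject.dat oss://examplebucket/exampleobject.dat --part-size 104857600 --parallel 8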

Case: The "There is no enough disk space for used as cache(or temporary)" message is returned when ossfs is used to upload a large object.

Analysis: ossfs uploads a large object in multipart mode. The default size of each part is 10 MB. The maximum number of parts that can be uploaded is 1,000.

When ossfs uploads an object, it writes temporary cache files to the /tmp directory. The available space of the disk in which the /tmp directory is located must be greater than the total size of the object to upload. If the available disk space is smaller than the total size of the object to upload, this error will occur. This error may occur in the following scenarios:
  • Scenario 1: The available disk space is smaller than the total size of the object to upload. For example, the available disk space is 200 GB, but the size of the object to upload is 300 GB.
  • Scenario 2: The parameters for the part size and the number of upload threads are invalid. For example, the available disk space is 300 GB and the size of the object to upload is 100 GB. If the multipart_size parameter is set to 100 GB and the number of upload threads is set to 5, ossfs determines that the space required for the upload is 100 GB × 5 = 500 GB, which is greater than the available disk space.
Solution:
  • Scenario 1: Increase the available space of the disk.
  • Scenario 2: Set the multipart_size parameter (in MB) to a value such that the part size multiplied by the number of upload threads does not exceed the available disk space. Because a maximum of 1,000 parts can be uploaded, the part size must also be large enough for the object to fit into 1,000 parts. For an example mount command, see the sketch after this list.
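The following mount command shows how the part size might be set for the 100 GB object in Scenario 2. The value of 200 MB is only an illustration (200 MB × 1,000 parts covers up to 200 GB, and 200 MB × 5 threads stays far below the available disk space); the bucket name, directory, and endpoint are placeholders.
  # Mount the bucket with a 200 MB part size instead of the invalid 100 GB value.
  ossfs examplebucket /mnt/ossfs -ourl=http://oss-cn-hangzhou.aliyuncs.com -omultipart_size=200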

Case: A 403 error occurs when the touch command is run on an object in the mounted bucket.

Analysis: This error usually occurs when the operation is unauthorized. Possible scenarios are as follows:
  • The object is of the Archive storage class.
  • You are not authorized to manage the bucket by using your AccessKey pair.
Solution:
  • Restore the archived object before you access it. For an example command, see the sketch after this list.
  • Grant necessary permissions to the account that uses the AccessKey pair.
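If the object is of the Archive storage class, the restore request can be sent by using ossutil. The bucket and object names below are placeholders, and the exact syntax may vary with your ossutil version; the object becomes readable only after the restoration is complete.
  # Restore an Archive object so that it can be read.
  ossutil restore oss://examplebucket/exampleobject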

Case: The values of the Content-Type parameters of all the objects that are uploaded to OSS by using ossfs are application/octet-stream.

Analysis: When uploading an object, ossfs queries the /etc/mime.types file to set the Content-Type parameter for the object. If the mime.types file does not exist, the value of the Content-Type parameter is set to application/octet-stream.

Solution: Check whether the mime.types file exists. If the file does not exist, add it.
  • Add the mime.types file by using commands
    • Ubuntu

      Use the sudo apt-get install mime-support command to add the file.

    • CentOS

      Use the sudo yum install mailcap command to add the file.

  • Manually add the mime.types file
    1. Create the mime.types file.
      vi /etc/mime.types
    2. Add the required types, one type per line, in the application/javascript js format. For sample entries, see the example after these steps.
After you add the mime.types file, mount the OSS bucket again.
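The following sample entries show the format that is used in the mime.types file. The listed types are only examples; add the types that your objects require.
  # Each line contains a MIME type followed by one or more file extensions.
  application/javascript js
  text/html html htm
  image/png png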

Why is running the ls command very slow when the directory contains many objects?

Analysis: If a directory contains N objects, N HTTP requests must be sent to OSS to list the N objects when the ls command is run. This can cause severe performance problems when the number of objects is large.

Solution: Specify the -omax_stat_cache_size=xxx parameter to increase the size of the stat cache. In this case, the first run of the ls command is slow, but subsequent runs are fast because the object metadata is stored in the local cache. The default value of this parameter is 1000, and the metadata of 1,000 objects consumes about 4 MB of memory. You can adjust the value based on the memory size of your machine.

Why is information, such as the size, of an object displayed by ossfs different from the information displayed by other tools?

Analysis: When ossfs mounts a bucket, it caches object metadata such as the size and ACL information of the objects in the bucket. This speeds up the ls command because ossfs does not need to send a request to OSS each time the command is run. However, if you modify an object by using SDKs, the OSS console, or ossutil and ossfs does not obtain the updated metadata in time, the displayed information differs.

Solution: Specify the -omax_stat_cache_size=0 parameter to disable the metadata caching feature. In this case, a request is sent to OSS to obtain the latest object metadata each time the ls command is run.
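For example, the following mount command disables the cache. The bucket name, directory, and endpoint are placeholders.
  # Disable the metadata cache so that each ls command obtains the latest metadata from OSS.
  ossfs examplebucket /mnt/ossfs -ourl=http://oss-cn-hangzhou.aliyuncs.com -omax_stat_cache_size=0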

What do I do to avoid the cost of scanning objects by background programs when ossfs mounts OSS buckets to ECS instances?

Analysis: When a program scans a directory in which ossfs mounts a bucket, a request is sent to OSS. If many requests are sent, fees will be incurred.

Solution: Use the auditd tool to check which program scans the directory in which ossfs mounts the bucket. Follow these steps:
  1. Install and start auditd.
    sudo apt-get install auditd
    sudo service auditd start
  2. Set the directory to monitor. For example, use the following command to monitor the /mnt/ossfs directory.
    auditctl -w /mnt/ossfs
  3. Check the audit log to view which programs have accessed the directory.
    ausearch -i | grep /mnt/ossfs
  4. Set parameters to skip scheduled program scans.
    For example, if you find that the updatedb program scans the directory, you can modify the /etc/updatedb.conf configuration file so that updatedb skips the directory. Follow these steps:
    1. Set PRUNEFS= to fuse.ossfs.
    2. Set PRUNEPATHS= to the mount directory of the bucket. For a sample configuration, see the example after these steps.
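A sample /etc/updatedb.conf snippet is shown below, assuming that the bucket is mounted at /mnt/ossfs. If these variables already contain values, append fuse.ossfs and the mount directory to the existing lists instead of overwriting them.
  # /etc/updatedb.conf: prevent updatedb from scanning the ossfs mount.
  PRUNEFS="fuse.ossfs"
  PRUNEPATHS="/mnt/ossfs"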