
Container Service for Kubernetes:Troubleshoot ossfs 1.0 client issues

Last Updated: Mar 26, 2026

OSS persistent volumes (PVs) are Filesystem in Userspace (FUSE) file systems mounted via ossfs. This topic covers two common failure modes — mount failures and POSIX command errors — and walks through how to retrieve and interpret ossfs logs for each.

How ossfs runs by CSI plugin version

The CSI plugin version determines where ossfs runs and how to retrieve its logs.

CSI plugin version               ossfs runtime                                                   Troubleshooting method
Earlier than v1.28               Background process on the node where the application pod runs   Remount ossfs in the foreground and analyze the debug logs
v1.28 to earlier than v1.30.4    Container in a pod in the kube-system namespace                 Query pod logs
v1.30.4 and later                Container in a pod in the ack-csi-fuse namespace                Query pod logs
Important

The default log level for containerized ossfs is critical or error. For most debugging, add debug parameters and remount the volume to get useful output.

Get ossfs logs

Before diagnosing a specific issue, retrieve ossfs logs for the affected volume. Use the commands below based on your CSI plugin version.

Find the VOLUME_ID

<VOLUME_ID> is typically the PV name. Retrieve the correct value in either of these cases:

  • If the PV name differs from its `volumeHandle` field, use volumeHandle:

    kubectl get pv <PV_NAME> -o jsonpath='{.spec.csi.volumeHandle}'
  • If the PV name is too long, <VOLUME_ID> is h1. followed by the SHA-1 hash of the PV name:

    echo -n "<PV_NAME>" | sha1sum | awk '{print "h1."$1}'

    If sha1sum is unavailable:

    echo -n "<PV_NAME>" | openssl sha1 -binary | xxd -p -c 40 | sed 's/^/h1./'
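Putting the hash rule together, the fallback <VOLUME_ID> for a long PV name can be derived in one step. A minimal sketch (the PV name below is a hypothetical example):

```shell
# Derive the fallback <VOLUME_ID>: "h1." followed by the
# SHA-1 hex digest of the PV name (no trailing newline).
PV_NAME="pv-oss-example"   # hypothetical PV name
VOLUME_ID="h1.$(printf '%s' "$PV_NAME" | sha1sum | awk '{print $1}')"
echo "$VOLUME_ID"
```

The result is always 43 characters: the `h1.` prefix plus 40 hex digits.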

Check the ossfs pod status

Replace <VOLUME_ID> and <NODE_NAME> with the actual values.

CSI v1.30.4 and later (namespace: ack-csi-fuse):

kubectl -n ack-csi-fuse get pod -l csi.alibabacloud.com/volume-id=<VOLUME_ID> -o wide | grep <NODE_NAME>

Because v1.30.4 adds a daemon process to the ossfs container, the pod reports Running even if ossfs itself has failed, so the expected output is the same regardless of whether ossfs is working correctly:

NAME                   READY   STATUS    RESTARTS   AGE
csi-fuse-ossfs-xxxx    1/1     Running   0          5s

If the pod is not in the Running state, resolve that issue before proceeding.

CSI versions earlier than v1.30.4 (namespace: kube-system):

kubectl -n kube-system get pod -l csi.alibabacloud.com/volume-id=<VOLUME_ID> -o wide | grep <NODE_NAME>

If ossfs exited unexpectedly, the output shows CrashLoopBackOff:

NAME                   READY   STATUS             RESTARTS     AGE
csi-fuse-ossfs-xxxx    0/1     CrashLoopBackOff   1 (4s ago)   5s

Retrieve pod logs

CSI v1.30.4 and later:

kubectl -n ack-csi-fuse logs csi-fuse-ossfs-xxxx

CSI versions earlier than v1.30.4:

kubectl -n kube-system logs csi-fuse-ossfs-xxxx

If the pod is in CrashLoopBackOff, retrieve logs from the previous exit:

kubectl -n kube-system logs -p csi-fuse-ossfs-xxxx

Enable debug logging (optional)

If the logs are empty or not detailed enough, raise the log level. Two methods are available:

Method 1: Add debug parameters to the PV (recommended)

Create an OSS PV for debugging. Based on the original PV configuration, add -o dbglevel=debug -o curldbg to the otherOpts field. After you mount the new OSS PV, run kubectl logs to retrieve debug output from the ossfs pod.

Important

Debug logs can be large. Use this setting only while debugging.
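Assuming a statically provisioned OSS volume, the debug PV might look like the sketch below. The PV name, bucket, and endpoint are placeholders, and the field layout follows the ACK OSS CSI static-provisioning convention; copy all other fields, including authentication, from your original PV.

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-oss-debug              # hypothetical name for the debug copy
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadOnlyMany
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: ossplugin.csi.alibabacloud.com
    volumeHandle: pv-oss-debug
    volumeAttributes:
      bucket: "<bucket-name>"
      url: "oss-cn-beijing-internal.aliyuncs.com"
      # Original options plus the two debug parameters:
      otherOpts: "-o allow_other -o dbglevel=debug -o curldbg"
      # Authentication fields (e.g. secret reference) omitted —
      # copy them from the original PV.
```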

Method 2: Use a ConfigMap (global)

Create a ConfigMap named csi-plugin in the kube-system namespace:

apiVersion: v1
kind: ConfigMap
metadata:
  name: csi-plugin
  namespace: kube-system
data:
  fuse-ossfs: |
    dbglevel=debug     # Log level
For CSI plugin v1.28.2 and earlier, the maximum log level is info.

Restart the csi-plugin pod on the affected node and all csi-provisioner pods, then restart the application pod to trigger a remount. Confirm that the csi-fuse-ossfs-xxxx pod is redeployed after the remount.

Important

The ConfigMap applies globally. After debugging, delete the ConfigMap, restart the csi-plugin pod and all csi-provisioner pods, then restart the application pod to disable debug logging.

Understand ossfs error categories

ossfs errors fall into two categories:

Scenario 1: Pod stuck in ContainerCreating (mount failure)

CSI plugin v1.26.6 and later

Symptoms

The application pod stays in the ContainerCreating state when starting.

Cause

When the CSI component starts the ossfs container, ossfs exits before completing the mount. Common causes include a failed OSS connectivity check (bucket does not exist, incorrect permissions), a non-existent mount path, or insufficient read/write permissions.

Solution

  1. Retrieve the ossfs container logs using the commands in Get ossfs logs.

  2. If the logs are empty or insufficient, enable debug logging as described in Enable debug logging.

  3. Analyze the logs.

    Example: ossfs error — mount point not empty

    ossfs: MOUNTPOINT directory /test is not empty. if you are sure this is safe, can use the 'nonempty' mount option.

    The mount point directory is not empty. Add -o nonempty to the PV mount options.

    Example: OSS server error — bucket does not exist

    [ERROR]  2023-10-16 12:38:38:/tmp/ossfs/src/curl.cpp:CheckBucket(3420): Check bucket failed, oss response: <?xml version="1.0" encoding="UTF-8"?>
    <Error>
      <Code>NoSuchBucket</Code>
      <Message>The specified bucket does not exist.</Message>
      <RequestId>652D2ECEE1159C3732F6E0EF</RequestId>
      <HostId><bucket-name>.oss-<region-id>-internal.aliyuncs.com</HostId>
      <BucketName><bucket-name></BucketName>
      <EC>0015-00000101</EC>
      <RecommendDoc>https://api.aliyun.com/troubleshoot?q=0015-00000101</RecommendDoc>
    </Error>

    The specified OSS bucket does not exist. Log in to the OSS console, create the bucket, then remount the volume.
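When reading OSS server errors, the <Code> and <EC> fields identify the failure. A small sketch for pulling them out of a captured response body (the XML below is a trimmed stand-in, not a real response):

```shell
# Extract the <Code> and <EC> fields from an OSS XML error body.
BODY='<Error><Code>NoSuchBucket</Code><Message>The specified bucket does not exist.</Message><EC>0015-00000101</EC></Error>'
CODE=$(printf '%s' "$BODY" | sed -n 's:.*<Code>\([^<]*\)</Code>.*:\1:p')
EC=$(printf '%s' "$BODY" | sed -n 's:.*<EC>\([^<]*\)</EC>.*:\1:p')
echo "$CODE $EC"
```

The EC value can be pasted directly into the troubleshooting URL shown in the RecommendDoc field.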

CSI plugin versions earlier than v1.26.6

Symptoms

The application pod stays in the ContainerCreating state when starting.

Cause

ossfs exits when the CSI component tries to start it, so no ossfs process runs on the node. The pod event shows a FailedMount warning. Common causes are the same as for v1.26.6 and later: bucket does not exist, incorrect permissions, non-existent mount path, or insufficient read/write permissions.

For a quick check, see An OSS volume fails to mount and the application pod event shows FailedMount.

Solution

Step 1: Get the original ossfs startup command

  1. Describe the pod to view the FailedMount event:

    kubectl -n <POD_NAMESPACE> describe pod <POD_NAME>
  2. In the Events section, find the FailedMount entry:

    Warning  FailedMount  3s  kubelet  MountVolume.SetUp failed for volume "<PV_NAME>" : rpc error: code = Unknown desc = Mount is failed in host, mntCmd:systemd-run --scope -- /usr/local/bin/ossfs <BUCKET>:/<PATH> /var/lib/kubelet/pods/<POD_UID>/volumes/kubernetes.io~csi/<PV_NAME>/mount -ourl=oss-cn-beijing-internal.aliyuncs.com -o allow_other , err: ..... with error: exit status 1
  3. Extract the original ossfs startup command from mntCmd — the content after systemd-run --scope -- :

    /usr/local/bin/ossfs <BUCKET>:/<PATH> /var/lib/kubelet/pods/<POD_UID>/volumes/kubernetes.io~csi/<PV_NAME>/mount -ourl=oss-cn-beijing-internal.aliyuncs.com -o allow_other
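The extraction in step 3 can also be scripted: the original invocation is the text between `systemd-run --scope -- ` and ` , err:`. A sed sketch against a captured event message (the event text, bucket, and paths below are trimmed stand-ins):

```shell
# Trimmed stand-in for a FailedMount event message.
EVENT='MountVolume.SetUp failed for volume "pv-oss" : rpc error: code = Unknown desc = Mount is failed in host, mntCmd:systemd-run --scope -- /usr/local/bin/ossfs mybucket:/data /mnt/ossfs -ourl=oss-cn-beijing-internal.aliyuncs.com -o allow_other , err: exit status 1'
# Keep only the command between "systemd-run --scope -- " and " , err:".
MOUNT_CMD=$(printf '%s' "$EVENT" | sed -n 's/.*systemd-run --scope -- \(.*\) , err:.*/\1/p')
echo "$MOUNT_CMD"
```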

Step 2: Mount ossfs in the foreground and get debug logs

By default, only the user who runs the mount command can access the ossfs-mounted directory. If the original command omits -o allow_other, permission issues may occur on the root mount path.

  1. Check whether the original command includes -o allow_other. If not, add it when creating the PV. For details, see Mount a bucket and Use a statically provisioned ossfs 1.0 volume.

  2. On the node where the application pod is located, run ossfs in the foreground with debug logging:

    mkdir /test && /usr/local/bin/ossfs <BUCKET>:/<PATH> /test -ourl=oss-cn-beijing-internal.aliyuncs.com -f -o allow_other -o dbglevel=debug -o curldbg

    Parameter          Description
    -f                 Runs ossfs in the foreground instead of as a daemon. Logs are written to the terminal.
    -o allow_other     Grants other users access to the mounted directory, preventing mount point permission issues.
    -o dbglevel=debug  Sets the log level to debug.
    -o curldbg         Enables libcurl logging to surface HTTP errors from OSS.

Step 3: Analyze the debug logs

Logs are written to the terminal while ossfs runs in the foreground.

Example: ossfs error — mount point not empty

ossfs: MOUNTPOINT directory /test is not empty. if you are sure this is safe, can use the 'nonempty' mount option.

Add -o nonempty to the PV mount options.

Example: OSS server error — bucket does not exist

[INFO]       Jul 10 2023:13:03:47:/tmp/ossfs/src/curl.cpp:RequestPerform(2382): HTTP response code 404 was returned, returning ENOENT, Body Text: <?xml version="1.0" encoding="UTF-8"?>
<Error>
  <Code>NoSuchBucket</Code>
  <Message>The specified bucket does not exist.</Message>
  <RequestId>xxxx</RequestId>
  <HostId><BUCKET>.oss-cn-beijing-internal.aliyuncs.com</HostId>
  <BucketName><BUCKET></BucketName>
  <EC>0015-00000101</EC>
</Error>

Log in to the OSS console, create the bucket, then remount the volume.

Scenario 2: POSIX command error (pod running)

CSI plugin v1.26.6 and later

Symptoms

The application pod is in the Running state, but ossfs returns an error when a POSIX command (such as read, write, or chmod) is executed.

Cause

ossfs mounts the OSS bucket correctly, but fails when processing certain POSIX operations. Check the application logs to identify the failing command and error type — for example, an I/O error on chmod -R 777 /mnt/path.

Solution

  1. Confirm the ossfs pod is running using the commands in Get ossfs logs.

  2. Retrieve the ossfs container logs.

  3. If the logs are insufficient, enable debug logging as described in Enable debug logging.

  4. Analyze the logs.

    Example: ossfs error — chmod on mount point

    Command that triggered the error:

    chmod -R 777 /test

    The chmod operation succeeds for files inside /test, but fails on the mount point itself:

    [ERROR]  2023-10-18 06:03:24:/tmp/ossfs/src/ossfs.cpp:ossfs_chmod(1745): Could not change mode for mount point.

    Changing the permissions of the mount point itself is not supported. For workarounds, see the ossfs 1.0 volume FAQ.

    Example: OSS server error — object not found

    [INFO]        2023-10-18 06:05:46:/tmp/ossfs/src/curl.cpp:HeadRequest(3014): [tpath=/xxxx]
    [INFO]        2023-10-18 06:05:46:/tmp/ossfs/src/curl.cpp:PreHeadRequest(2971): [tpath=/xxxx][bpath=][save=][sseckeypos=-1]
    [INFO]        2023-10-18 06:05:46:/tmp/ossfs/src/curl.cpp:prepare_url(4660): URL is http://oss-cn-beijing-internal.aliyuncs.com/<bucket>/<path>/xxxxx
    [INFO]        2023-10-18 06:05:46:/tmp/ossfs/src/curl.cpp:prepare_url(4693): URL changed is http://<bucket>.oss-cn-beijing-internal.aliyuncs.com/<path>/xxxxx
    [INFO]        2023-10-18 06:05:46:/tmp/ossfs/src/curl.cpp:RequestPerform(2383): HTTP response code 404 was returned, returning ENOENT, Body Text:

    The object does not exist on the OSS server. For causes and solutions, see 404 error.
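When scanning long debug logs, it can help to pull out the HTTP response codes ossfs received from OSS. A sed sketch over a captured log line (the line below is a stand-in copied from the example above):

```shell
# Stand-in for one ossfs curl.cpp log line.
LOG='[INFO] 2023-10-18 06:05:46:/tmp/ossfs/src/curl.cpp:RequestPerform(2383): HTTP response code 404 was returned, returning ENOENT, Body Text:'
# Pull out the numeric HTTP status code.
HTTP_CODE=$(printf '%s' "$LOG" | sed -n 's/.*HTTP response code \([0-9][0-9]*\).*/\1/p')
echo "$HTTP_CODE"
```

Running the same extraction over a whole log file (for example with `grep 'HTTP response code'` first) gives a quick count of 403 vs. 404 vs. 5xx responses.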

CSI plugin versions earlier than v1.26.6

Symptoms

The application pod is in the Running state, but ossfs returns an error when a POSIX command is executed. For example:

kubectl -n <POD_NAMESPACE> exec -it <POD_NAME> -- /bin/bash

bash-4.4# chmod -R 777 /mnt/path
chmod: /mnt/path: I/O error

Cause

ossfs runs correctly and mounts the OSS bucket, but returns an error when processing a POSIX command such as chmod, read, or open.

Solution

Step 1: Get the original ossfs startup command

Because ossfs is already running, retrieve the startup command directly from the node:

ps -aux | grep ossfs | grep <PV_NAME>

Expected output:

root     2097450  0.0  0.2 124268 33900 ?        Ssl  20:47   0:00 /usr/local/bin/ossfs <BUCKET> /<PATH> /var/lib/kubelet/pods/<POD_UID>/volumes/kubernetes.io~csi/<PV_NAME>/mount -ourl=oss-cn-beijing-internal.aliyuncs.com -o allow_other

Replace the space after <BUCKET> with a colon. The original startup command is:

/usr/local/bin/ossfs <BUCKET>:/<PATH> /var/lib/kubelet/pods/<POD_UID>/volumes/kubernetes.io~csi/<PV_NAME>/mount -ourl=oss-cn-beijing-internal.aliyuncs.com -o allow_other
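The space-to-colon fix can be done mechanically. A sed sketch against a captured ps line (the bucket, path, and pod UID below are stand-in values); it joins the first two ossfs arguments with a colon:

```shell
# Stand-in for the command column of the ps output.
PS_LINE='/usr/local/bin/ossfs mybucket /data /var/lib/kubelet/pods/uid/volumes/kubernetes.io~csi/pv-oss/mount -ourl=oss-cn-beijing-internal.aliyuncs.com -o allow_other'
# Replace the first "ossfs <bucket> /" with "ossfs <bucket>:/".
MOUNT_CMD=$(printf '%s' "$PS_LINE" | sed 's|ossfs \([^ ]*\) /|ossfs \1:/|')
echo "$MOUNT_CMD"
```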

Step 2: Mount ossfs in the foreground and get debug logs

  1. Check whether the original command includes -o allow_other. If not, add it when creating the PV. For details, see Mount a bucket and Use a statically provisioned ossfs 1.0 volume.

  2. Run ossfs in the foreground with debug logging:

    mkdir /test && /usr/local/bin/ossfs <BUCKET>:/<PATH> /test -ourl=oss-cn-beijing-internal.aliyuncs.com -f -o allow_other -o dbglevel=debug -o curldbg

    Parameter          Description
    -f                 Runs ossfs in the foreground instead of as a daemon. Logs are written to the terminal.
    -o allow_other     Grants other users access to the mounted directory, preventing mount point permission issues.
    -o dbglevel=debug  Sets the log level to debug.
    -o curldbg         Enables libcurl logging to surface HTTP errors from OSS.

Step 3: Analyze the debug logs

Open a second terminal and rerun the failing command to capture new log output.

Example: ossfs error — chmod on mount point

chmod -R 777 /test
[ERROR] Jul 10 2023:13:03:18:/tmp/ossfs/src/ossfs.cpp:ossfs_chmod(1742): Could not change mode for mount point.

Changing the permissions of the mount point is not supported. For workarounds, see ossfs 1.0 volume FAQ.

Example: OSS server error — object not found

[INFO]       Aug 23 2022:11:54:11:/tmp/ossfs/src/curl.cpp:RequestPerform(2377): HTTP response code 404 was returned, returning ENOENT, Body Text: <?xml version="1.0" encoding="UTF-8"?>
<Error>
  <Code>NoSuchKey</Code>
  <Message>The specified key does not exist.</Message>
  <RequestId>xxxx</RequestId>
  <HostId><BUCKET>.oss-cn-beijing-internal.aliyuncs.com</HostId>
  <Key><object-name></Key>
  <EC>0026-00000001</EC>
</Error>

The object does not exist on OSS. For causes and solutions, see NoSuchKey.

Advanced debugging

Mount ossfs in the foreground (containerized ossfs)

If neither changing the PV mount parameters nor the global ConfigMap is an option, mount ossfs in the foreground on the node to capture full debug logs without recreating the PV.

Important

After ossfs was containerized, it is no longer installed on nodes by default. The version you install manually may differ from the version running in the pod. Try the PV parameter or ConfigMap approaches first.

  1. Install the latest version of ossfs.

  2. On the node, get the SHA-256 hash of the PV name:

    echo -n "<PV_NAME>" | sha256sum

    If sha256sum is unavailable:

    echo -n "<PV_NAME>" | openssl sha256 -binary | xxd -p -c 256

    For pv-oss, the expected output is:

    8f3e75e1af90a7dcc66882ec1544cb5c7c32c82c2b56b25a821ac77cea60a928
  3. Retrieve the ossfs mount parameters:

    ps -aux | grep <sha256-value>
  4. Create the authentication file. Delete it immediately after debugging.

    mkdir -p /etc/ossfs && echo "<bucket-name>:<akId>:<akSecret>" > /etc/ossfs/passwd-ossfs && chmod 600 /etc/ossfs/passwd-ossfs
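As a sanity check on step 2, the pv-oss example can be reproduced in one line; the output should match the expected value shown in that step:

```shell
# SHA-256 of the PV name, without a trailing newline.
PV_NAME="pv-oss"
HASH=$(printf '%s' "$PV_NAME" | sha256sum | awk '{print $1}')
echo "$HASH"
```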

Troubleshoot segmentation faults

If the error log contains "ossfs exited with error" err="signal: segmentation fault (core dumped)", the ossfs process crashed due to a segmentation fault.

To help technical support diagnose the issue, collect the core dump file and submit a ticket.

  1. Log in to the node and list crash records:

    coredumpctl list

    Expected output:

    TIME                          PID       UID   GID  SIG   COREFILE   EXE
    Mon 2025-11-17 11:21:44 CST   2108767   0     0    1     present    /usr/bin/xxx
    Tue 2025-11-18 19:35:58 CST   493791    0     0    11    present    /usr/local/bin/ossfs

    Identify the ossfs crash record by checking TIME and EXE, then note its PID. In the example above, the PID is 493791.

  2. Export the core dump file:

    # Replace <PID> with the actual PID
    coredumpctl dump <PID> --output ossfs.dump
  3. Submit a ticket and attach the ossfs.dump file.
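The PID lookup in step 1 can also be scripted against the captured listing. A sketch using a stand-in copy of the example output (column positions assume the TIME field spans four words, as in the example):

```shell
# Stand-in for the output of "coredumpctl list".
LIST='TIME                          PID       UID   GID  SIG   COREFILE   EXE
Mon 2025-11-17 11:21:44 CST   2108767   0     0    1     present    /usr/bin/xxx
Tue 2025-11-18 19:35:58 CST   493791    0     0    11    present    /usr/local/bin/ossfs'
# The last field is EXE; field 5 is the PID (TIME spans fields 1-4).
PID=$(printf '%s\n' "$LIST" | awk '$NF == "/usr/local/bin/ossfs" { print $5 }')
echo "$PID"
```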