This test compares the performance of different versions of ossfs and the open-source tool goofys in various scenarios, including file read and write speeds and concurrent operations. The results provide a performance reference to help you select the right tool for your business.
Test environment
Hardware environment
Instance type: ecs.g9i.48xlarge
vCPU: 192 vCPUs
Memory: 768 GiB
Network bandwidth: 64 Gbps
Software environment
Operating system: Alibaba Cloud Linux 3.2104 LTS 64-bit
Kernel version: 5.10.134-18.al8.x86_64
Tool versions: ossfs 2.0.4, ossfs 1.91.8, and goofys 0.24.0
Mount configurations
The following examples show the mount options used for the performance test.
This test uses HTTPS endpoints. In a trusted environment, you can mount with HTTP endpoints instead, which consumes fewer CPU resources at the same throughput (for example, http://oss-cn-hangzhou-internal.aliyuncs.com instead of https://oss-cn-hangzhou-internal.aliyuncs.com).
ossfs 2.0.4
Mount configuration file (ossfs2.conf)
When the bucket is mounted, the upload part size is set to 33554432 bytes (32 MiB).
# The endpoint of the bucket's region
--oss_endpoint=https://oss-cn-hangzhou-internal.aliyuncs.com
# The bucket name
--oss_bucket=bucket-test
# The AccessKey ID and AccessKey secret
--oss_access_key_id=yourAccessKeyID
--oss_access_key_secret=yourAccessKeySecret
# The upload part size, in bytes
--upload_buffer_size=33554432
Mount command
The following command uses the ossfs2.conf configuration file to mount the bucket-test bucket to the local /mnt/ossfs2/ directory.
ossfs2 mount /mnt/ossfs2/ -c /etc/ossfs2.conf
ossfs 1.91.8
The following command mounts the bucket-test bucket to the local /mnt/ossfs directory and enables direct read mode (-odirect_read) and directory listing optimization (-oreaddir_optimize).
ossfs bucket-test /mnt/ossfs -ourl=https://oss-cn-hangzhou-internal.aliyuncs.com -odirect_read -oreaddir_optimize
goofys 0.24.0
The following command mounts the bucket-test bucket to the local /mnt/goofys directory.
goofys --endpoint https://oss-cn-hangzhou-internal.aliyuncs.com --subdomain bucket-test --stat-cache-ttl 60s --type-cache-ttl 60s /mnt/goofys
Test scenarios
After a bucket was mounted using ossfs 2.0.4, ossfs 1.91.8, and goofys 0.24.0, the FIO test tool was used to evaluate the basic read and write capabilities of each tool. The test scenarios and results are described in the following sections.
Single-threaded sequential direct write of a 100 GB file
The write performance of ossfs 1.0 is limited by local disk performance, because ossfs 1.0 stages written data in local temporary files before uploading it to OSS.
Test command
The following command uses the FIO tool to run a single-threaded direct write test named file-100G. It writes a total of 100 GB of data with a block size of 1 MB to the /mnt/oss/fio_direct_write directory and outputs the results.
fio --name=file-100G --ioengine=libaio --rw=write --bs=1M --size=100G --numjobs=1 --direct=1 --directory=/mnt/oss/fio_direct_write --group_reporting
Test results
| Tool | Bandwidth | CPU core utilization (100% for a single fully loaded core) | Peak memory |
| --- | --- | --- | --- |
| ossfs 2.0 | 2.2 GB/s | 207% | 2167 MB |
| ossfs 1.0 | 118 MB/s | 5% | 15 MB |
| goofys | 450 MB/s | 250% | 7.5 GB |
Single-threaded sequential read of a 100 GB file
Test command
The following command first clears the system page cache. Then, it uses the FIO tool to run a single-threaded sequential read test on the 100 GB file in the /mnt/oss/fio_direct_write directory. The test uses a block size of 1 MB and outputs the results.
echo 1 > /proc/sys/vm/drop_caches
fio --name=file-100G --ioengine=libaio --direct=1 --rw=read --bs=1M --directory=/mnt/oss/fio_direct_write --group_reporting --numjobs=1
Test results
| Tool | Bandwidth | CPU core utilization (100% for a single fully loaded core) | Peak memory |
| --- | --- | --- | --- |
| ossfs 2.0 | 4.3 GB/s | 610% | 1629 MB |
| ossfs 1.0 | 1.0 GB/s | 530% | 260 MB |
| goofys | 1.3 GB/s | 270% | 976 MB |
Multi-threaded sequential read of 100 GB files
Generate test files
The following command creates four 100 GB files in the /mnt/oss/fio mount directory for the multi-threaded concurrency test.
fio --name=file-100g --ioengine=libaio --direct=1 --iodepth=1 --numjobs=4 --nrfiles=1 --rw=write --bs=1M --size=100G --group_reporting --thread --directory=/mnt/oss/fio
Test command
The following command first clears the system page cache. Then, it uses the FIO tool to run a 30-second read test with four concurrent threads on the four 100 GB files in the /mnt/oss/fio directory. The test uses a block size of 1 MB and outputs the results.
echo 1 > /proc/sys/vm/drop_caches
fio --name=file-100g --ioengine=libaio --direct=1 --iodepth=1 --numjobs=4 --nrfiles=1 --rw=read --bs=1M --size=100G --group_reporting --thread --directory=/mnt/oss/fio --time_based --runtime=30
Test results
| Tool | Bandwidth | CPU core utilization (100% for a single fully loaded core) | Peak memory |
| --- | --- | --- | --- |
| ossfs 2.0 | 7.4 GB/s | 890% | 6.2 GB |
| ossfs 1.0 | 1.8 GB/s | 739% | 735 MB |
| goofys | 2.8 GB/s | 7800% | 2.7 GB |
Concurrent read of 100,000 128 KB files with 128 threads
By default, OSS has a 10,000 queries per second (QPS) limit. To achieve the performance metrics shown in the test results, ensure that no other services consume the QPS quota of the test account.
Steps
Create a Go program named rw-bench.go. This program has two main functions: concurrently creating multiple files of the same size in a target directory, and concurrently reading all files in a target directory. During the read operation, the program assigns the files to a specified number of threads and records the final bandwidth. A sketch of such a program is shown after these steps.
Compile the rw-bench.go program file.
go build rw-bench.go
Use the following command to create 100,000 files, each 128 KB in size, in the local directory where the OSS bucket is mounted.
mkdir -p <path_to_mounted_test_directory> && ./rw-bench --dir <path_to_mounted_test_directory> --file-size-KB 128 --file-count 100000 --write
Clear the system page cache and run the program. The test is run five consecutive times. After the server-side latency stabilizes, the steady-state test data is recorded.
echo 1 > /proc/sys/vm/drop_caches
./rw-bench --dir <path_to_mounted_test_directory> --threads 128
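The source code of rw-bench.go is not included in this topic. The following is a minimal sketch of what such a program could look like, based only on the behavior and flags shown above (--dir, --file-size-KB, --file-count, --write, and --threads). The file naming scheme, error handling, and output format are assumptions and may differ from the program actually used in this test.

```go
// rw-bench.go: concurrently writes or reads many files in a directory and reports bandwidth.
package main

import (
	"flag"
	"fmt"
	"io"
	"log"
	"os"
	"path/filepath"
	"sync"
	"time"
)

func main() {
	dir := flag.String("dir", "", "target directory (a mounted OSS path)")
	sizeKB := flag.Int("file-size-KB", 128, "size of each file in KB")
	count := flag.Int("file-count", 100000, "number of files to create in write mode")
	write := flag.Bool("write", false, "write mode: create files instead of reading them")
	threads := flag.Int("threads", 128, "number of concurrent workers")
	flag.Parse()

	if *write {
		writeFiles(*dir, *sizeKB, *count, *threads)
	} else {
		readFiles(*dir, *threads)
	}
}

// writeFiles concurrently creates count files of sizeKB KB each in dir.
func writeFiles(dir string, sizeKB, count, threads int) {
	buf := make([]byte, sizeKB*1024)
	jobs := make(chan int)
	var wg sync.WaitGroup
	for w := 0; w < threads; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for i := range jobs {
				// The file naming scheme is an assumption.
				name := filepath.Join(dir, fmt.Sprintf("file-%06d", i))
				if err := os.WriteFile(name, buf, 0o644); err != nil {
					log.Fatalf("write %s: %v", name, err)
				}
			}
		}()
	}
	for i := 0; i < count; i++ {
		jobs <- i
	}
	close(jobs)
	wg.Wait()
}

// readFiles distributes all regular files in dir across the workers,
// reads them fully, and prints the aggregate bandwidth.
func readFiles(dir string, threads int) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		log.Fatal(err)
	}
	jobs := make(chan string)
	var (
		wg    sync.WaitGroup
		mu    sync.Mutex
		total int64
	)
	start := time.Now()
	for w := 0; w < threads; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for name := range jobs {
				f, err := os.Open(name)
				if err != nil {
					log.Fatalf("open %s: %v", name, err)
				}
				n, err := io.Copy(io.Discard, f)
				f.Close()
				if err != nil {
					log.Fatalf("read %s: %v", name, err)
				}
				mu.Lock()
				total += n
				mu.Unlock()
			}
		}()
	}
	for _, e := range entries {
		if !e.IsDir() {
			jobs <- filepath.Join(dir, e.Name())
		}
	}
	close(jobs)
	wg.Wait()
	elapsed := time.Since(start).Seconds()
	fmt.Printf("read %.1f MB in %.2f s: %.2f MB/s\n", float64(total)/(1<<20), elapsed, float64(total)/(1<<20)/elapsed)
}
```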
Test results
| Tool | Bandwidth | CPU core utilization (100% for a single fully loaded core) | Peak memory |
| --- | --- | --- | --- |
| ossfs 2.0 | 1 GB/s | 247% | 176 MB |
| ossfs 1.0 | 45 MB/s | 25% | 412 MB |
| goofys | 1 GB/s | 750% | 1.3 GB |