This topic describes the performance of ossfs 2.0 in different scenarios, including file read and write speeds and performance under concurrency. The information provides performance references to help you select and use ossfs 2.0 for your business.
Test environment
Hardware
Instance type: ecs.g7.32xlarge
vCPU: 128 vCPUs
Memory: 512 GiB
Software
Operating system: Alibaba Cloud Linux 3.2104 LTS 64-bit
Kernel version: 5.10.134-18.al8.x86_64
ossfs versions: ossfs 2.0.0beta, ossfs 1.91.4
Mount configuration
Assume that the mount path of the OSS volume in the container is /mnt/oss.
ossfs 2.0
In this test, add the otherOpts parameter to set the multipart upload buffer size to 33554432 bytes (32 MB):
upload_buffer_size=33554432
ossfs 1.0
In this test, add the otherOpts parameter to enable direct read mode and cache optimization:
-o direct_read -o readdir_optimize
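For reference, the otherOpts parameter is typically set in the persistent volume (PV) definition when the OSS volume is mounted through the CSI plugin. The following is a minimal sketch; the PV name, bucket name, endpoint URL, and capacity are placeholders, and the exact field layout should be verified against your CSI plugin version:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: oss-pv                # placeholder name
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  csi:
    driver: ossplugin.csi.alibabacloud.com
    volumeHandle: oss-pv
    volumeAttributes:
      bucket: "example-bucket"                      # placeholder
      url: "oss-cn-hangzhou-internal.aliyuncs.com"  # placeholder endpoint
      # ossfs 2.0: set the multipart upload buffer size.
      otherOpts: "upload_buffer_size=33554432"
      # ossfs 1.0 alternative:
      # otherOpts: "-o direct_read -o readdir_optimize"
```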
Test scenarios
After buckets are mounted by using ossfs 2.0.0beta and ossfs 1.91.4, Flexible I/O Tester (FIO) is used to test the basic read and write capabilities of ossfs 2.0 and ossfs 1.0. The test scenarios and results are as follows.
Directly write 100 GB of data in sequence using a single thread
The write performance of ossfs 1.0 is limited by the disk performance.
Test command
Use FIO to run a single-thread test task named file-100G that directly writes 100 GB of data with a block size of 1 MB to the /mnt/oss/fio_direct_write directory and outputs the test results.
fio --name=file-100G --ioengine=libaio --rw=write --bs=1M --size=100G --numjobs=1 --direct=1 --directory=/mnt/oss/fio_direct_write --group_reporting
Test results
| ossfs version | Bandwidth | CPU core usage (the full capacity of a core is 100%) | Peak memory |
| --- | --- | --- | --- |
| ossfs 2.0 | 2.2 GB/s | 207% | 2,167 MB |
| ossfs 1.0 | 118 MB/s | 5% | 15 MB |
Read 100 GB of data in sequence using a single thread
Test command
After clearing the page cache, use FIO to sequentially read 100 GB of data with a block size of 1 MB from the /mnt/oss/fio_direct_write directory using a single thread and output the test results.
echo 1 > /proc/sys/vm/drop_caches
fio --name=file-100G --ioengine=libaio --direct=1 --rw=read --bs=1M --directory=/mnt/oss/fio_direct_write --group_reporting --numjobs=1
Test results
| ossfs version | Bandwidth | CPU core usage (the full capacity of a core is 100%) | Peak memory |
| --- | --- | --- | --- |
| ossfs 2.0 | 3.0 GB/s | 378% | 1,617 MB |
| ossfs 1.0 | 355 MB/s | 50% | 400 MB |
Read 100 GB of data in sequence using multiple threads
Generate test files
Create 4 files, each 100 GB in size, in the /mnt/oss/fio directory.
fio --name=file-100g --ioengine=libaio --direct=1 --iodepth=1 --numjobs=4 --nrfiles=1 --rw=write --bs=1M --size=100G --group_reporting --thread --directory=/mnt/oss/fio
Test command
After clearing the page cache, use FIO to concurrently read the 4 created files in the /mnt/oss/fio directory with a block size of 1 MB for 30 seconds using 4 threads and output the results.
echo 1 > /proc/sys/vm/drop_caches
fio --name=file-100g --ioengine=libaio --direct=1 --iodepth=1 --numjobs=4 --nrfiles=1 --rw=read --bs=1M --size=100G --group_reporting --thread --directory=/mnt/oss/fio --time_based --runtime=30
Test results
| ossfs version | Bandwidth | CPU core usage (the full capacity of a core is 100%) | Peak memory |
| --- | --- | --- | --- |
| ossfs 2.0 | 7.1 GB/s | 1,187% | 6.2 GB |
| ossfs 1.0 | 1.4 GB/s | 210% | 1.6 GB |
Concurrently read 100,000 files with 128 KB each using 128 threads
Object Storage Service (OSS) provides up to 10,000 queries per second (QPS) for each Alibaba Cloud account. For more information, see QPS. To achieve the expected performance in this test, make sure that the QPS quota of your Alibaba Cloud account is not being consumed by other workloads.
Test procedure
Create a Go program named rw-bench.go. The program has the following core features:
1. It concurrently creates multiple files of the same size in the destination directory.
2. It concurrently reads all files in the destination directory, distributing them across n threads, and records the bandwidth data.
Compile the rw-bench.go program file.
go build rw-bench.go
Create 100,000 files, each 128 KB in size, in the OSS bucket directory mounted to the local file system.
mkdir -p <path of the mounted test directory> && ./rw-bench --dir <path of the mounted test directory> --file-size-KB 128 --file-count 100000 --write
Clear the page cache and run the program. Repeat the test 5 times in a row and use the data from the run with stable latency on the server.
echo 1 > /proc/sys/vm/drop_caches
./rw-bench --dir <path of the mounted test directory> --threads 128
Test results
| ossfs version | Bandwidth | CPU core usage (the full capacity of a core is 100%) | Peak memory |
| --- | --- | --- | --- |
| ossfs 2.0 | 1 GB/s | 247% | 212 MB |
| ossfs 1.0 | 3.5 MB/s | 3% | 200 MB |