This topic describes how to use the fio tool on a Linux Elastic Compute Service (ECS) instance to test the performance of block storage devices. Block storage devices include cloud disks and local disks, and the performance metrics of these disks include IOPS, throughput, and latency.
Prerequisites
A block storage device is created and attached to a Linux ECS instance.
Background information
You can use other tools to test the performance of block storage devices, but the baseline performance that you obtain may differ. For example, tools such as dd, sysbench, and IOMeter can be affected by test parameters and file system overhead and may return inaccurate results. The performance results in this topic are obtained from tests performed with fio on Linux ECS instances and are used as performance references for block storage devices. We recommend that you use fio to test the performance of block storage devices on both Linux and Windows instances.
Usage notes
- You can obtain accurate test results by testing raw disk partitions. However, you may destroy the file system structure in a raw disk partition if you directly test the partition. Before you test a raw disk, we recommend that you back up your data by taking a snapshot of the disk. For more information, see Create a snapshot of a disk.
- Do not test a disk on which the operating system resides or a disk that stores important data. To prevent data loss, we recommend that you run the test on a new ECS instance that contains no data.
Procedure
- Connect to an ECS instance. For more information, see Connect to a Linux instance by using a password.
- Before you test a block storage device, run the following command to make sure that the device is 4 KiB aligned:
sudo fdisk -lu
If the value of Start in the command output is divisible by 8, the device is 4 KiB aligned. Otherwise, perform 4 KiB alignment before you proceed with the test. Sample output:
Device     Boot Start      End  Sectors Size Id Type
/dev/vda1  *     2048 83886046 83883999  40G 83 Linux
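The divisibility check above can be scripted. A minimal sketch, assuming a 512-byte logical sector size (which is what fdisk reports sectors in on most instances), so that a start sector divisible by 8 means the partition begins on a 4 KiB boundary:

```shell
# Check 4 KiB alignment from a partition's starting sector.
# 8 sectors x 512 bytes = 4096 bytes, so Start must be divisible by 8.
is_aligned() {
    [ $(( $1 % 8 )) -eq 0 ]
}

# Example: /dev/vda1 in the sample output starts at sector 2048.
if is_aligned 2048; then
    echo "4 KiB aligned"
else
    echo "not aligned"
fi
```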
- Run the following commands in sequence to install libaio and FIO:
sudo yum install libaio -y
sudo yum install libaio-devel -y
sudo yum install fio -y
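If your instance runs a Debian- or Ubuntu-based distribution instead of a yum-based one, the equivalent packages can typically be installed with apt. This is a sketch; package names may vary by release:

```shell
# Debian/Ubuntu equivalent of the yum commands above (package names assumed).
sudo apt-get update
sudo apt-get install -y fio libaio-dev
```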
- Run the following command to switch the path:
cd /tmp
- Run the test commands. For more information about the commands, see the following sections:
- For information about commands used to test the performance of cloud disks, see Commands used to test the performance of cloud disks.
- For information about commands used to test the performance of local disks, see Commands used to test the performance of local disks.
Commands used to test the performance of cloud disks
For information about how to test the IOPS of an enhanced SSD (ESSD), see Test the IOPS performance of an ESSD.
- In this example, the device name of the disk is /dev/your_device. Replace it with your actual device name. For example, if the device name of the cloud disk that you want to test is /dev/vdb, replace /dev/your_device with /dev/vdb in the following commands.
- The values of other parameters in the following sample commands are for reference only. Replace them based on your actual requirements.
- Run the following command to test the random write IOPS of a cloud disk:
fio -direct=1 -iodepth=128 -rw=randwrite -ioengine=libaio -bs=4k -size=1G -numjobs=1 -runtime=1000 -group_reporting -filename=/dev/your_device -name=Rand_Write_Testing
- Run the following command to test the random read IOPS of a cloud disk:
fio -direct=1 -iodepth=128 -rw=randread -ioengine=libaio -bs=4k -size=1G -numjobs=1 -runtime=1000 -group_reporting -filename=/dev/your_device -name=Rand_Read_Testing
- Run the following command to test the sequential write throughput of a cloud disk:
fio -direct=1 -iodepth=64 -rw=write -ioengine=libaio -bs=1024k -size=1G -numjobs=1 -runtime=1000 -group_reporting -filename=/dev/your_device -name=Write_PPS_Testing
- Run the following command to test the sequential read throughput of a cloud disk:
fio -direct=1 -iodepth=64 -rw=read -ioengine=libaio -bs=1024k -size=1G -numjobs=1 -runtime=1000 -group_reporting -filename=/dev/your_device -name=Read_PPS_Testing
- Run the following command to test the random write latency of a cloud disk:
fio -direct=1 -iodepth=1 -rw=randwrite -ioengine=libaio -bs=4k -size=1G -numjobs=1 -group_reporting -filename=/dev/your_device -name=Rand_Write_Latency_Testing
- Run the following command to test the random read latency of a cloud disk:
fio -direct=1 -iodepth=1 -rw=randread -ioengine=libaio -bs=4k -size=1G -numjobs=1 -group_reporting -filename=/dev/your_device -name=Rand_Read_Latency_Testing
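The six cloud disk tests above can be wrapped in a single script that runs them in sequence and saves each result to a log file. This is a minimal sketch: DEVICE defaults to the /dev/your_device placeholder, and DRY_RUN=1 only prints the commands so you can review them before running a destructive test.

```shell
#!/bin/sh
# Run the cloud disk fio tests above in sequence, saving one log per test.
# WARNING: writing to a raw device destroys its data; review with DRY_RUN=1 first.
DEVICE=${DEVICE:-/dev/your_device}
DRY_RUN=${DRY_RUN:-1}    # set DRY_RUN=0 to actually run fio

run_test() {
    name=$1; shift
    cmd="fio -direct=1 -ioengine=libaio -size=1G -numjobs=1 -group_reporting -filename=$DEVICE -name=$name $*"
    if [ "$DRY_RUN" = "1" ]; then
        echo "$cmd"                      # preview only
    else
        $cmd > "${name}.log" 2>&1        # run and capture the result
    fi
}

run_test Rand_Write_Testing         -rw=randwrite -bs=4k    -iodepth=128 -runtime=1000
run_test Rand_Read_Testing          -rw=randread  -bs=4k    -iodepth=128 -runtime=1000
run_test Write_PPS_Testing          -rw=write     -bs=1024k -iodepth=64  -runtime=1000
run_test Read_PPS_Testing           -rw=read      -bs=1024k -iodepth=64  -runtime=1000
run_test Rand_Write_Latency_Testing -rw=randwrite -bs=4k    -iodepth=1
run_test Rand_Read_Latency_Testing  -rw=randread  -bs=4k    -iodepth=1
```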
Commands used to test the performance of local disks
The following test commands are applicable to local Non-Volatile Memory Express (NVMe) SSDs and local Serial Advanced Technology Attachment (SATA) HDDs.
- In this example, the device name of the disk is /dev/your_device. Replace it with your actual device name. For example, if the device name of the local disk that you want to test is /dev/vdb, replace /dev/your_device with /dev/vdb in the following commands.
- The values of other parameters in the following sample commands are for reference only. Replace them based on your actual requirements.
- Run the following command to test the random write IOPS of a local disk:
fio -direct=1 -iodepth=32 -rw=randwrite -ioengine=libaio -bs=4k -numjobs=4 -time_based=1 -runtime=1000 -group_reporting -filename=/dev/your_device -name=test
- Run the following command to test the random read IOPS of a local disk:
fio -direct=1 -iodepth=32 -rw=randread -ioengine=libaio -bs=4k -numjobs=4 -time_based=1 -runtime=1000 -group_reporting -filename=/dev/your_device -name=test
- Run the following command to test the sequential write throughput of a local disk:
fio -direct=1 -iodepth=128 -rw=write -ioengine=libaio -bs=128k -numjobs=1 -time_based=1 -runtime=1000 -group_reporting -filename=/dev/your_device -name=test
- Run the following command to test the sequential read throughput of a local disk:
fio -direct=1 -iodepth=128 -rw=read -ioengine=libaio -bs=128k -numjobs=1 -time_based=1 -runtime=1000 -group_reporting -filename=/dev/your_device -name=test
- Run the following command to test the random write latency of a local disk:
fio -direct=1 -iodepth=1 -rw=randwrite -ioengine=libaio -bs=4k -numjobs=1 -time_based=1 -runtime=1000 -group_reporting -filename=/dev/your_device -name=test
- Run the following command to test the random read latency of a local disk:
fio -direct=1 -iodepth=1 -rw=randread -ioengine=libaio -bs=4k -numjobs=1 -time_based=1 -runtime=1000 -group_reporting -filename=/dev/your_device -name=test
- Run the following command to test the sequential write latency of a local disk:
fio -direct=1 -iodepth=1 -rw=write -ioengine=libaio -bs=4k -numjobs=1 -time_based=1 -runtime=1000 -group_reporting -filename=/dev/your_device -name=test
- Run the following command to test the sequential read latency of a local disk:
fio -direct=1 -iodepth=1 -rw=read -ioengine=libaio -bs=4k -numjobs=1 -time_based=1 -runtime=1000 -group_reporting -filename=/dev/your_device -name=test
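After a test finishes, fio prints a summary that includes a line such as `read: IOPS=..., BW=...`. A small helper like the following can pull the IOPS figure out of a saved log. The sample log below mimics fio's summary format; the exact layout can vary between fio versions, so treat this as a sketch:

```shell
# Hypothetical helper: extract the first IOPS figure from a saved fio log.
extract_iops() {
    grep -o 'IOPS=[^,]*' "$1" | head -n 1 | cut -d= -f2
}

# Example with a mock log file (format assumed from typical fio output):
printf 'test: (groupid=0, jobs=4)\n  write: IOPS=10.5k, BW=41.0MiB/s\n' > sample.log
extract_iops sample.log
```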
fio parameter settings
| Parameter | Description |
| --- | --- |
| direct | Specifies whether to use direct I/O, which bypasses the buffer cache. Valid values: 0 (buffered I/O is used) and 1 (direct I/O is used). The commands in this topic set the value to 1. |
| iodepth | The I/O queue depth during the test. For example, if iodepth is set to 128, fio keeps at most 128 I/O requests in flight for each job. |
| rw | The read/write policy that is used during the test. Valid values include read (sequential reads), write (sequential writes), randread (random reads), randwrite (random writes), rw or readwrite (mixed sequential reads and writes), and randrw (mixed random reads and writes). |
| ioengine | The I/O engine that fio uses to test disk performance. In most cases, libaio is used. For information about other available I/O engines, see the fio documentation. |
| bs | The block size of I/O units. Default value: 4k, which indicates 4 KiB. Values for reads and writes can be specified separately in the read,write format. If you do not specify a value, the default value is used. |
| size | The amount of data to read or write. fio ends the test only after the specified amount of data is transferred, unless limited by other factors such as runtime. If this parameter is not specified, fio uses the full size of the given files or devices. The value can also be a percentage from 1% to 100%. For example, if size is set to 20%, fio uses 20% of the size of the given files or devices. |
| numjobs | The number of concurrent jobs that are used during the test. Default value: 1. |
| runtime | The duration of the test in seconds. If this parameter is not specified, the test does not end until the amount of data specified by the size parameter is read or written in blocks of the size specified by the bs parameter. |
| group_reporting | The display mode of the test results. If this parameter is specified, statistics are aggregated and reported for the group of jobs as a whole instead of for each job. |
| filename | The path of the object that you want to test. The path can be the device name of a disk or a file path. In this topic, the test object of fio is an entire disk that does not have a file system (a raw disk). To prevent the data of other disks from being damaged, replace /dev/your_device in the preceding commands with your actual device path. |
| name | The name of the test job. You can specify the parameter based on your needs. In the preceding examples, Rand_Write_Testing is one such name. |
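fio can also read these parameters from a job file instead of the command line, which is convenient for repeating a test. A sketch of a job file equivalent to the random write IOPS test above (save it as, for example, randwrite.fio and run `fio randwrite.fio`; /dev/your_device remains a placeholder for your actual device):

```ini
[Rand_Write_Testing]
direct=1
iodepth=128
rw=randwrite
ioengine=libaio
bs=4k
size=1G
numjobs=1
runtime=1000
group_reporting
filename=/dev/your_device
```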