
Elastic Compute Service:Test the performance of block storage devices

Last Updated:Oct 25, 2023

This topic describes the common commands that are used by the fio tool on a Linux Elastic Compute Service (ECS) instance to test the performance of block storage devices. Block storage devices include cloud disks and local disks, and the performance metrics of these disks include IOPS, throughput, and latency.

Prerequisites

Block storage devices are created and attached to a Linux ECS instance.

Note

If you want to test the performance of block storage devices that belong only to a specific disk category, we recommend that you use new pay-as-you-go data disks. You can release the disks after the test is complete.

Background information

You can use other tools to test the performance of block storage devices, but you may obtain different baseline performance. For example, tools such as dd, sysbench, and iometer may be affected by test parameters and file systems and return inaccurate results. The performance results in this topic are obtained from a test that is performed on a Linux ECS instance by using the fio tool. These results are used as performance references for block storage devices. We recommend that you use the fio tool to test the performance of block storage devices for both Linux and Windows instances.

Usage notes

Warning
  • You can obtain accurate test results by testing raw disks. However, testing a raw disk directly destroys the file system structure and any data on the disk. Before you test a raw disk, we recommend that you back up your data by taking a snapshot of the disk. For more information, see Create a snapshot for a disk.
  • Do not test a disk on which the operating system is located or a disk that stores important data. To prevent data loss, we recommend that you use a new ECS instance that does not contain data for the test.

Procedure

  1. Connect to an ECS instance.
    For more information, see Connect to an instance by using VNC.
  2. Before you test a block storage device, make sure that the device is 4 KiB aligned.

    sudo fdisk -lu

    If the value of Start in the command output is divisible by 8, the device is 4 KiB aligned. Otherwise, perform 4 KiB alignment before you proceed with the test.

    Device     Boot Start      End  Sectors Size Id Type
    /dev/vda1  *     2048 83886046 83883999  40G 83 Linux
  3. Run the following commands in sequence to install the libaio library and fio:
    sudo yum install libaio -y
    sudo yum install libaio-devel -y
    sudo yum install fio -y
  4. Run the following command to change to the test directory:
    cd /tmp
  5. Run the test commands. For information about the commands, see the following sections:
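
The 4 KiB alignment check in step 2 can also be scripted. The following is a minimal sketch, assuming 512-byte sectors; START (2048 here) stands in for the Start value that you read from the fdisk output:

```shell
# Check whether a partition start sector is 4 KiB aligned.
# With 512-byte sectors, a start sector that is divisible by 8
# lands on a 4096-byte boundary.
# START=2048 is an example value; replace it with the Start column
# from the output of `sudo fdisk -lu`.
START=2048
if [ $((START % 8)) -eq 0 ]; then
  echo "4 KiB aligned"
else
  echo "not aligned"
fi
```

For the sample output in step 2, where Start is 2048, the script prints "4 KiB aligned".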

Commands used to test the performance of cloud disks

Note
  • In this example, /dev/your_device is used as the device name of a cloud disk. Replace it with the actual device name. For example, if the device name of the cloud disk that you want to test is /dev/vdb, replace /dev/your_device with /dev/vdb in the following sample commands.

  • The values of other parameters in the sample commands are for reference only. Replace them with actual values.

  • Run the following command to test the random write IOPS of a cloud disk:

    fio -direct=1 -iodepth=128 -rw=randwrite -ioengine=libaio -bs=4k -size=1G -numjobs=1 -runtime=1000 -group_reporting -filename=/dev/your_device -name=Rand_Write_Testing
  • Run the following command to test the random read IOPS of a cloud disk:

    fio -direct=1 -iodepth=128 -rw=randread -ioengine=libaio -bs=4k -size=1G -numjobs=1 -runtime=1000 -group_reporting -filename=/dev/your_device -name=Rand_Read_Testing
  • Run the following command to test the sequential write throughput of a cloud disk:

    fio -direct=1 -iodepth=64 -rw=write -ioengine=libaio -bs=1024k -size=1G -numjobs=1 -runtime=1000 -group_reporting -filename=/dev/your_device -name=Write_PPS_Testing
  • Run the following command to test the sequential read throughput of a cloud disk:

    fio -direct=1 -iodepth=64 -rw=read -ioengine=libaio -bs=1024k -size=1G -numjobs=1 -runtime=1000 -group_reporting -filename=/dev/your_device -name=Read_PPS_Testing
  • Run the following command to test the random write latency of a cloud disk:

    fio -direct=1 -iodepth=1 -rw=randwrite -ioengine=libaio -bs=4k -size=1G -numjobs=1 -group_reporting -filename=/dev/your_device -name=Rand_Write_Latency_Testing
  • Run the following command to test the random read latency of a cloud disk:

    fio -direct=1 -iodepth=1 -rw=randread -ioengine=libaio -bs=4k -size=1G -numjobs=1 -group_reporting -filename=/dev/your_device -name=Rand_Read_Latency_Testing

For more information, see Test the IOPS performance of an ESSD.
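
The cloud disk tests above share most of their parameters, so they can be driven from one small script instead of being pasted one at a time. The following sketch only prints the command lines it builds, as a safety measure; /dev/your_device is the same placeholder used in the commands above:

```shell
# Build the fio command line for each cloud disk test from one
# function, so the IOPS and throughput runs share the same fixed
# parameters. /dev/your_device is a placeholder; replace it with
# the actual device name before you run the commands.
DEVICE=/dev/your_device

build_test() {
  # $1 test name, $2 read/write policy, $3 block size, $4 queue depth
  echo "fio -direct=1 -iodepth=$4 -rw=$2 -ioengine=libaio -bs=$3 -size=1G -numjobs=1 -runtime=1000 -group_reporting -filename=$DEVICE -name=$1"
}

# Print the four IOPS and throughput commands; pipe the output to
# `sh` on a disposable test disk to run them in sequence.
build_test Rand_Write_Testing randwrite 4k    128
build_test Rand_Read_Testing  randread  4k    128
build_test Write_PPS_Testing  write     1024k 64
build_test Read_PPS_Testing   read      1024k 64
```

The latency tests are not included because they use -iodepth=1 and no -runtime; add them the same way if you need them.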

Commands used to test the performance of local disks

The following sample commands are applicable to local Non-Volatile Memory Express (NVMe) SSDs and local Serial Advanced Technology Attachment (SATA) HDDs.

Note
  • In this example, /dev/your_device is used as the device name of a local disk. Replace it with the actual device name. For example, if the device name of the local disk that you want to test is /dev/vdb, replace /dev/your_device with /dev/vdb in the following sample commands.

  • The values of other parameters in the sample commands are for reference only. Replace them with actual values.

  • Run the following command to test the random write IOPS of a local disk:

    fio -direct=1 -iodepth=32 -rw=randwrite -ioengine=libaio -bs=4k -numjobs=4 -time_based=1 -runtime=1000 -group_reporting -filename=/dev/your_device -name=test
  • Run the following command to test the random read IOPS of a local disk:

    fio -direct=1 -iodepth=32 -rw=randread -ioengine=libaio -bs=4k -numjobs=4 -time_based=1 -runtime=1000 -group_reporting -filename=/dev/your_device -name=test
  • Run the following command to test the sequential write throughput of a local disk:

    fio -direct=1 -iodepth=128 -rw=write -ioengine=libaio -bs=128k -numjobs=1 -time_based=1 -runtime=1000 -group_reporting -filename=/dev/your_device -name=test
  • Run the following command to test the sequential read throughput of a local disk:

    fio -direct=1 -iodepth=128 -rw=read -ioengine=libaio -bs=128k -numjobs=1 -time_based=1 -runtime=1000 -group_reporting -filename=/dev/your_device -name=test
  • Run the following command to test the random write latency of a local disk:

    fio -direct=1 -iodepth=1 -rw=randwrite -ioengine=libaio -bs=4k -numjobs=1 -time_based=1 -runtime=1000 -group_reporting -filename=/dev/your_device -name=test
  • Run the following command to test the random read latency of a local disk:

    fio -direct=1 -iodepth=1 -rw=randread -ioengine=libaio -bs=4k -numjobs=1 -time_based=1 -runtime=1000 -group_reporting -filename=/dev/your_device -name=test
  • Run the following command to test the sequential write latency of a local disk:

    fio -direct=1 -iodepth=1 -rw=write -ioengine=libaio -bs=4k -numjobs=1 -time_based=1 -runtime=1000 -group_reporting -filename=/dev/your_device -name=test
  • Run the following command to test the sequential read latency of a local disk:

    fio -direct=1 -iodepth=1 -rw=read -ioengine=libaio -bs=4k -numjobs=1 -time_based=1 -runtime=1000 -group_reporting -filename=/dev/your_device -name=test
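
After a run completes, fio prints one summary line per group, and the IOPS figure can be pulled out of a saved result with awk. The following is a minimal sketch; the sample line is hypothetical output, not a measured result:

```shell
# Extract the IOPS value from a fio summary line with awk.
# The line below is hypothetical sample output, not a real measurement.
line='write: IOPS=25.1k, BW=98.2MiB/s (103MB/s)(95.8GiB/16000msec)'
iops=$(printf '%s\n' "$line" | awk -F'IOPS=' '{split($2, a, ","); print a[1]}')
echo "IOPS: $iops"
```

For the sample line above, the script prints "IOPS: 25.1k". The same pattern works on a full fio log saved with shell redirection.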

fio parameters

The following list describes the parameters in the preceding fio commands that are used to test disk performance.

direct

Specifies whether to use direct I/O. Default value: 1. Valid values:

  • 1: uses direct I/O. fio bypasses the I/O buffer and writes data directly to the disk.

  • 0: uses buffered I/O. Data is staged in the I/O buffer before it is written to the disk.

iodepth

The I/O queue depth during the test. For example, if you set the -iodepth parameter to 128, a single fio job keeps at most 128 I/O requests in flight.

rw

The read/write policy that is used during the test. Valid values:

  • randwrite: random writes

  • randread: random reads

  • read: sequential reads

  • write: sequential writes

  • randrw: random reads and writes

ioengine

The I/O engine that fio uses to test disk performance. In most cases, libaio is used. For information about other available I/O engines, see the fio documentation.

bs

The block size of I/O units. Default value: 4k, which indicates 4 KiB. Separate values for reads and writes can be specified in the <value for reads>,<value for writes> format. If you do not specify a value, the default value is used.

size

The size of the test files.

fio ends the test only after the specified amount of data is read or written, unless the test is limited by other factors, such as runtime. If this parameter is not specified, fio uses the full size of the given files or devices. The value can also be a percentage from 1% to 100%. For example, if you set the size parameter to 20%, fio reads or writes 20% of the size of the given files or devices.

numjobs

The number of concurrent threads that are used during the test. Default value: 1.

runtime

The duration of the test, which indicates the period of time for which fio runs.

If this parameter is not specified, the test does not end until the files whose size is specified by the size parameter are read or written in the block size specified by the bs parameter.

group_reporting

The display mode of the test results.

If this parameter is specified, the test results are aggregated per group of jobs instead of being reported for each job separately.

filename

The path of the object that you want to test. The path can be the device name of the disk or a file address. In this topic, the test object of fio is an entire disk that does not have file systems (a raw disk). To prevent the data of other disks from being damaged, replace /dev/your_device in the preceding commands with your actual path.

name

The name of the test. You can specify the parameter based on your needs. In the preceding examples, Rand_Write_Testing is used.

For more information about the parameters, see fio(1) - Linux man page.
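
The relationship among the size, bs, and runtime parameters can be checked with simple arithmetic: a run that is bounded by size rather than runtime lasts roughly size / (bs × IOPS) seconds. The following sketch assumes a hypothetical disk that sustains 20,000 IOPS:

```shell
# Estimate how long a size-bounded fio run takes:
# duration (seconds) ~= size / (bs * IOPS).
SIZE_BYTES=$((1024 * 1024 * 1024))   # -size=1G
BS_BYTES=4096                        # -bs=4k
IOPS=20000                           # assumed disk performance, not a measured value
echo "about $((SIZE_BYTES / (BS_BYTES * IOPS))) seconds"
```

Under these assumptions the run finishes in about 13 seconds, well inside the -runtime=1000 limit used in the sample commands.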