
How to test the performance of ESSD disks

Last Updated: Apr 02, 2019

An ESSD disk is an ultra-high performance cloud disk product recently launched by Alibaba Cloud. Both the cloud disk itself and the test environment influence the test results. Therefore, this tutorial describes how to configure an appropriate environment to test the performance of an ESSD disk and how an IOPS of 1 million is achieved in a test.

Note:
You can obtain accurate test results by testing a raw disk directly, but doing so may destroy the file system structure on the disk. Back up your data before the test, for example, by creating snapshots. We recommend that you test storage performance only on newly purchased ECS instances that contain no data. Otherwise, you may lose data.

Preparations

Follow the instructions in this tutorial to fully use the performance of multi-core processors and high concurrency, and to see how an IOPS of 1 million is achieved in a test.

Use the latest versions of the official Alibaba Cloud Linux images, such as CentOS 7.4/7.3/7.2 (64-bit) and AliyunLinux 17.1 (64-bit). We do not recommend earlier Linux images or Windows images because certain required drivers may no longer be available in them.

You can use FIO to test the performance of the disks.
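
As a quick sanity check before you touch the raw device, you can run FIO against a temporary file to confirm that FIO and libaio work. This is only a sketch: the path /tmp/fio-sanity.img and the 1 GiB size are assumed placeholder values, and direct I/O requires that /tmp is not on tmpfs.

    # Minimal FIO sanity check against a temporary file (placeholder path and size; adjust as needed).
    fio --name=sanity --ioengine=libaio --direct=1 --rw=randwrite --bs=4k --size=1G --numjobs=1 --iodepth=16 --runtime=10s --time_based=1 --filename=/tmp/fio-sanity.img --group_reporting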

Examples

Assume that the instance type is ecs.g5se.18xlarge and the device name of the ESSD disk is /dev/vdb. This example describes how to test the random write (randwrite) performance of the ESSD disk.
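
Before you run the test, you can confirm that the ESSD disk is attached with the expected device name. The check below assumes the device name /dev/vdb used in this example.

    # Confirm that the ESSD disk appears as /dev/vdb.
    lsblk /dev/vdb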

  1. Connect to and log on to the Linux ECS instance.

  2. Run the following commands to install libaio and FIO.

    sudo yum install libaio -y
    sudo yum install libaio-devel -y
    sudo yum install fio -y
  3. Run the command cd /tmp to change the directory.

  4. Run the command vim test100w.sh to create a script, and copy and paste the following code into it. This script contains the sample code to test the random write (randwrite) IOPS.

    function RunFio
    {
        numjobs=$1   # The number of test threads, for example, 8 in this example.
        iodepth=$2   # The maximum number of concurrent I/O requests, for example, 64 in this example.
        bs=$3        # The data block size of a single I/O, for example, 4k in this example.
        rw=$4        # The read and write policy, for example, randwrite in this example.
        filename=$5  # The name of the tested file, for example, /dev/vdb in this example.
        nr_cpus=`cat /proc/cpuinfo | grep "processor" | wc -l`
        if [ $nr_cpus -lt $numjobs ]; then
            echo "Numjobs is more than cpu cores, exit!"
            exit -1
        fi
        let nu=$numjobs+1
        cpulist=""
        for ((i=1;i<10;i++))
        do
            list=`cat /sys/block/vdb/mq/*/cpu_list | awk '{if(i<=NF) print $i;}' i="$i" | tr -d ',' | tr '\n' ','`
            if [ -z "$list" ]; then
                break
            fi
            cpulist=${cpulist}${list}
        done
        spincpu=`echo $cpulist | cut -d ',' -f 2-${nu}`
        echo $spincpu
        fio --ioengine=libaio --runtime=30s --numjobs=${numjobs} --iodepth=${iodepth} --bs=${bs} --rw=${rw} --filename=${filename} --time_based=1 --direct=1 --name=test --group_reporting --cpus_allowed=$spincpu --cpus_allowed_policy=split
    }
    echo 2 > /sys/block/vdb/queue/rq_affinity
    sleep 5
    RunFio 8 64 4k randwrite /dev/vdb

    Note:

    • You must modify the following commands according to your test environment.
      • vdb in the following command line:
        list=`cat /sys/block/vdb/mq/*/cpu_list | awk '{if(i<=NF) print $i;}' i="$i" | tr -d ',' | tr '\n' ','`
      • 8, 64, 4k, randwrite, and /dev/vdb in the following command line:
        RunFio 8 64 4k randwrite /dev/vdb
    • You may destroy the file system structure by testing the raw disk directly. If you can accept losing data, set filename=[device name, such as /dev/vdb]. Otherwise, set filename=[file path, such as /mnt/test.image]. A sketch of the file-based option follows this procedure.
  5. Run sh test100w.sh to start testing the performance of the ESSD disk.
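
If you choose to test a file on a mounted file system instead of the raw device, as described in the note above, the following is a minimal sketch. The file /mnt/test.image comes from the note; the 10 GiB size is an assumed placeholder, and fallocate requires a file system that supports it, such as ext4 or xfs.

    # Create a test file of an assumed size of 10 GiB on a mounted file system.
    sudo fallocate -l 10G /mnt/test.image
    # Then change the last line of test100w.sh so that the script tests the file instead of the raw device:
    # RunFio 8 64 4k randwrite /mnt/test.image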


Script explanation

Block device parameter

The command echo 2 > /sys/block/vdb/queue/rq_affinity in the test100w.sh script sets the rq_affinity parameter of the block device to 2:

  • If rq_affinity is set to 1, the block device migrates I/O completions to the vCPU group that originally submitted the requests. When multiple threads process I/O requests concurrently, the I/O completions may all run on the same vCPU, which can become a performance bottleneck.

  • If rq_affinity is set to 2, each I/O completion is forced to run on the vCPU that submitted the request. When multiple threads process I/O requests concurrently, the performance of every vCPU is fully used. You can check and change this setting as shown below.
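
For reference, the following commands show how to inspect the current rq_affinity value and set it to 2 for the vdb device used in this example (root privileges are required to change it):

    # Check the current rq_affinity value of the vdb device.
    cat /sys/block/vdb/queue/rq_affinity
    # Force I/O completions to run on the requesting vCPU.
    echo 2 > /sys/block/vdb/queue/rq_affinity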

Bind the threads to the cores of the vCPUs

  • Generally, a device has only one Request-Queue. This single Request-Queue becomes a performance bottleneck when multiple threads process I/O requests concurrently.

  • In the latest Multi-Queue mode, a device can have multiple Request-Queues that process I/O requests, which fully uses the performance of the back-end storage. For example, if you have four I/O threads, you need to bind them to the CPU cores that correspond to the Request-Queues so that the Multi-Queue mode can fully improve the performance.

To fully use the performance of the device, you need to distribute the I/O requests across the Request-Queues. The command fio --ioengine=libaio --runtime=30s --numjobs=${numjobs} --iodepth=${iodepth} --bs=${bs} --rw=${rw} --filename=${filename} --time_based=1 --direct=1 --name=test --group_reporting --cpus_allowed=$spincpu --cpus_allowed_policy=split in test100w.sh binds the fio jobs to different CPU cores. In the sysfs paths used by the script, vd* stands for the device name of your ESSD disk, for example, vdb for /dev/vdb.

FIO provides the cpus_allowed and cpus_allowed_policy parameters to bind jobs to specific vCPUs. The command above runs multiple jobs that are bound to different CPU cores, each of which corresponds to a different Queue_Id. A minimal standalone example of this binding is shown below.
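
The following sketch shows this binding outside the script. The CPU list 0-3 and the target file /mnt/test.image are assumed placeholder values; replace them with the cpu_core_ids of your Request-Queues and with your own device or an existing test file.

    # Run 4 jobs and spread them across CPU cores 0-3, one core per job (cpus_allowed_policy=split).
    fio --ioengine=libaio --runtime=30s --numjobs=4 --iodepth=64 --bs=4k --rw=randwrite --filename=/mnt/test.image --time_based=1 --direct=1 --name=bindtest --group_reporting --cpus_allowed=0-3 --cpus_allowed_policy=split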

To check which cpu_core_id corresponds to each Queue_Id, run the following commands:

  • Run the command ls /sys/block/vd*/mq/ to check the Queue_Ids of the ESSD disk whose device name starts with vd, for example, vdb.
  • Run the command cat /sys/block/vd*/mq/*/cpu_list to check the cpu_core_ids that correspond to each Queue_Id of the ESSD disk.
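
As an illustration, the following loop prints the same information for the vdb device in this example in a single pass; the exact output format of cpu_list depends on your kernel version.

    # Print each Queue_Id of /dev/vdb together with the CPU cores it is bound to.
    for q in /sys/block/vdb/mq/*; do
        echo "Queue_Id $(basename $q): cpu_list $(cat $q/cpu_list)"
    done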
