
Benchmarking Alibaba Cloud Object Storage Service (OSS) with ab and wrk Tools

This article shows how you can use the ab and wrk tools to measure and analyze OSS performance in a single-server environment, and describes how to send benchmark requests to OSS endpoints.

As more and more people use Alibaba Cloud Object Storage Service (OSS), questions about OSS performance have become increasingly common. In this discussion, performance refers to Queries Per Second (QPS) and the latency of processing each request. Testing the performance of OSS is quite a broad topic.

From the user's perspective, OSS performance is related to the type of stress (synchronous or asynchronous), the size of the requested object, and the request method (read or write). From the perspective of the OSS service, performance is related to the server model (disks, NICs, memory, and CPUs), the number of servers, and the network and load of the entire cluster.

In this blog, we will discuss how you can use the ab and wrk tools in a single-server environment to send requests to OSS endpoints, and how to calculate the request QPS and latency on the client side.
Let us begin by understanding the tools we will use to measure OSS performance.

How to Use the ab Tool

ApacheBench (ab) is a tool released by the Apache project to test the performance of HTTP servers, originally Apache servers. It reports the service capacity of the server under test, in particular the number of requests processed per second.

Refer to the following links to download and install ab:

httpd.apache.org/docs/2.4/install.html
httpd.apache.org/docs/2.4/programs/ab.html

Usage

Because OSS buckets require access permissions and ab does not support OSS request signing, set the bucket ACL to public-read-write for testing.
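
For reference, if you manage your buckets with the ossutil command-line tool, the ACL can be set along the following lines. This is only a sketch: it assumes ossutil is already installed and configured with valid credentials, and that the bucket is named public as in this article; check the ossutil documentation for the exact syntax of your version.

    # Set the bucket ACL to public-read-write for the test (remember to revert it afterwards).
    ./ossutil set-acl oss://public public-read-write -b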

Assuming that we simulate 10 concurrent requests for a 100 KB object, the commonly used parameters are as follows (a combined command sketch follows the list):

  1. c: The number of requests to perform concurrently (the concurrency level). The default is one request at a time.
  2. k: Enables HTTP keep-alive, so that multiple requests are sent over one connection. Disabled by default.
  3. n: The total number of requests to issue during the benchmark. The default is a single request; note that results obtained with the default are not representative.
  4. t: The maximum time, in seconds, for the benchmark. There is no limit by default.
  5. u: The file containing the data to PUT. Remember to also set -T.
  6. T: The content type of the uploaded data, for example application/x-www-form-urlencoded. The default value is text/plain.
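
Putting these options together, the scenario above (10 concurrent requests uploading a 100 KB object) might look roughly like the command below. The endpoint, bucket, and object name are placeholders to be replaced with your own values.

    # 10 concurrent connections, keep-alive enabled, 10,000 requests in total,
    # uploading 100KB.txt as text/plain to a public-read-write bucket.
    ./ab -c 10 -k -n 10000 -T 'text/plain' -u 100KB.txt \
        http://<your-endpoint>/<your-bucket>/100KB.txt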

Below is an example that should make things more apparent.

Example

Test the performance of OSS in high-concurrency reads/writes to small files.

Requirements: You will need a bucket with the public-read-write permission; you can name it public. The ab tool (open source) should run in a Linux environment. Alibaba Cloud OSS provides the service endpoint, for example oss-cn-hangzhou-test.aliyuncs.com. Prepare a 5 KB file, for example 5KB.txt.
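
If you do not already have a suitable test file, any 5,120-byte file will do; for example, it can be generated as follows:

    # Create a 5 KB text file filled with the letter "a".
    dd if=/dev/zero bs=1024 count=5 | tr '\0' 'a' > 5KB.txt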

Testing Process:

  1. Simulate writing a small file (5 KB) with high concurrency (50 concurrent requests) for 5 minutes:

    ./ab -c 50 -t 300 -T 'text/plain' -u 5KB.txt http://oss-cn-hangzhou-test.aliyuncs.com/public/5KB.txt

  2. Simulate reading a small file (5 KB) with high concurrency (50 concurrent requests) for 5 minutes:

    ./ab -c 50 -t 300 http://oss-cn-hangzhou-test.aliyuncs.com/public/5KB.txt

Expected Results:

The test should complete normally with 0 failed requests. The Requests per second value reflects the per-second processing capability observed on the client; it does not represent the total processing capacity of the OSS server.

Important Notes

  1. Keep a close watch on the CPU, memory, and network usage of the machine running the ab tool and of any front-end server; make sure they do not exceed 75% of their maximum.
  2. The test may fail because of insufficient ports.
  3. You may need to adjust kernel parameters to allow ports to be reused. For example, on Linux, run sudo vim /etc/sysctl.conf and add the following content:

        net.ipv4.tcp_syncookies = 1
        net.ipv4.tcp_tw_reuse = 1
        net.ipv4.tcp_tw_recycle = 1
        net.ipv4.tcp_fin_timeout = 30
        kernel.printk = 7 4 1 7

     Then run sudo sysctl -p to activate the changes (see the check commands after this list).
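
To see whether the client is approaching port exhaustion while a test runs, you can check the ephemeral port range and the number of sockets stuck in TIME_WAIT. This is a rough sketch; the exact commands and output vary by distribution.

    # Range of local ports available for outgoing connections.
    sysctl net.ipv4.ip_local_port_range
    # Approximate number of sockets currently in TIME_WAIT (the count includes a header line).
    ss -tan state time-wait | wc -l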

Results Analysis

$./ab -c 50 -t 60 -n 300000 -k http://oss-cn-hangzhou-test.aliyuncs.com/public/5KB.txt
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking oss-cn-hangzhou-test.aliyuncs.com (be patient)
Completed 30000 requests
Completed 60000 requests
Completed 90000 requests
Completed 120000 requests
Completed 150000 requests
Completed 180000 requests
Completed 210000 requests
Completed 240000 requests
Finished 250137 requests

Server Software:        AliyunOSS
Server Hostname:        oss-cn-hangzhou-test.aliyuncs.com
Server Port:            80

Document Path:          /public/5KB.txt
Document Length:        5120 bytes

Concurrency Level:      50             Concurrency level
Time taken for tests:   60.000 seconds Test time
Complete requests:      250137         Total number of complete requests during the test
Failed requests:        0
Write errors:           0
Keep-Alive requests:    248492         Number of keep-alive requests
Total transferred:      1382504896 bytes
HTML transferred:       1280703929 bytes
Requests per second:    4168.94 [#/sec](mean)   Number of requests per second
Time per request:       11.993 [ms](mean)       Average latency for each request
Time per request:       0.240 [ms](mean, across all concurrent requests)
Transfer rate:          22501.67 [Kbytes/sec] received

Connection Times (ms)    Connection request time
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       1
Processing:     1   12   7.6     12      87
Waiting:        1   12   7.6     12      87
Total:          1   12   7.6     12      87

Percentage of the requests served within a certain time (ms)
  50%     12
  66%     15
  75%     16
  80%     17
  90%     20
  95%     23
  98%     28
  99%     37
 100%     87 (longest request)

From the test results, we can see that:

  1. With 50 concurrent requests running for 60 seconds, ab issued 4,168 requests per second (that is to say, the QPS observed on the client side was 4,168 at this stress level). A quick cross-check of these figures follows the list.
  2. It processed each request with an average latency of about 12ms.
  3. Since we had enabled keep-alive, connection setup took virtually no time.
  4. ab handled 99% of requests within 37ms, and the longest request lasted for 87ms.
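
As a quick sanity check, the headline figures can be reproduced from the raw output above. The small calculation below simply plugs in the numbers from this particular run.

    # Derive QPS and mean latency from the ab summary above.
    awk 'BEGIN {
      reqs = 250137; secs = 60; conc = 50
      qps = reqs / secs                  # ~4169, matches "Requests per second"
      lat_ms = conc / qps * 1000         # ~12 ms, matches "Time per request (mean)"
      printf "QPS ~= %.0f, mean latency ~= %.1f ms\n", qps, lat_ms
    }'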

How to Use the wrk Tool

The wrk tool is an HTTP benchmarking tool that can generate significant load.

Refer to the link below to download and install wrk:

https://github.com/wg/wrk
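
On most Linux systems, wrk can be built from source roughly as follows; this assumes git, make, and a C toolchain (with OpenSSL development headers) are available.

    git clone https://github.com/wg/wrk.git
    cd wrk
    make
    # Copy the resulting ./wrk binary somewhere on your PATH if desired.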

Usage

You can use the wrk tool in combination with a Lua script for PUT operations.
Preconditions: To use wrk, you need a bucket with the public-read-write permission; you can name it public. The wrk tool runs in a Linux environment. OSS provides the service endpoint, for example oss-cn-hangzhou-test.aliyuncs.com. Prepare a 5 KB file, for example 5KB.txt.

Upload

Here, wrk uses a Lua script for the upload operation. The content of the Lua script put.lua is as follows:

    -- put.lua: build a PUT request for each call, uploading 5KB.txt under a unique object name
    counter = 0
    request = function()
       mypath = "5KB.txt";
       local file = io.open(mypath, "r");
       assert(file);
       local body = file:read("*a");      -- read all content as the request body
       file:close();
       wrk.method = "PUT"
       wrk.body = body
       -- each request writes a distinct object: /public/test-5KB.txt-0, -1, -2, ...
       path = "/public/test-" .. mypath .. "-" .. counter
       wrk.headers["X-Counter"] = counter
       counter = counter + 1
       return wrk.format(nil, path)
    end
    -- done() runs once after the test; latency percentiles are reported in microseconds,
    -- so divide by 1000 to print milliseconds
    done = function(summary, latency, requests)
       io.write("------------------------------\n")
       for _, p in pairs({ 50, 60, 90, 95, 99, 99.999 }) do
          n = latency:percentile(p)
          io.write(string.format("%g%%, %d ms\n", p, n/1000.0))
       end
    end
Execute the Command:

$./wrk -c 50 -d 60 -t 5 -s put.lua http://oss-cn-hangzhou-test.aliyuncs.com
This initiates PUT requests to the endpoint with the request bodies generated by put.lua, using 5 threads and 50 connections, and runs for 60 seconds.

Test Results:

Running 1m test @ http://oss-cn-hangzhou-test.aliyuncs.com
  5 threads and 50 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    16.23ms    9.49ms 159.48ms   96.45%
    Req/Sec   635.38     98.38     0.91k    72.63%
  189072 requests in 1.00m, 48.73MB read
Requests/sec:   3151.10
Transfer/sec:    831.58KB
------------------------------
50%, 14 ms
60%, 15 ms
90%, 20 ms
95%, 23 ms
99%, 64 ms
99.999%, 159 ms

Result Analysis

From the test results, we can conclude the following:

  1. With 5 threads and 50 connections running for 60 seconds, wrk issued 3,151 requests per second on average (that is to say, under this stress, the QPS observed on the client was 3,151).
  2. wrk processed each request with an average latency of about 16ms.
  3. It completed 99% within 64ms, with the longest request lasting for 159ms.

Download

Execute the Command:

$./wrk -c 50 -d 60 -t 5 http://oss-cn-hangzhou-test.aliyuncs.com/public/5KB.txt
This initiates GET requests to the endpoint, using 5 threads and 50 connections, and runs for 60 seconds.
Note: The object public/5KB.txt must already exist; a minimal upload sketch follows.
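
If the object does not exist yet, one simple way to create it is an unsigned PUT with curl, which works here only because the bucket ACL is public-read-write. The endpoint and bucket below are the ones used throughout this article.

    # Upload 5KB.txt so that the GET benchmark has an object to read.
    curl -T 5KB.txt http://oss-cn-hangzhou-test.aliyuncs.com/public/5KB.txt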

Test Results:

Running 1m test @ http://oss-cn-hangzhou-test.aliyuncs.com/public/5KB.txt
  5 threads and 50 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    12.72ms    5.14ms  62.68ms   80.14%
    Req/Sec   814.86    145.65     1.36k    69.43%
  241990 requests in 1.00m, 1.25GB read
Requests/sec:   4033.14
Transfer/sec:     21.26MB

Result Analysis

The test results revealed the following:

  1. With 5 threads and 50 connections running for 60 seconds, wrk processed 4,033 requests per second on average (that is to say, under this stress, the QPS observed on the client was 4,033). A rough throughput cross-check follows the list.
  2. It processed a request with an average latency of about 12ms.
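
As a rough cross-check of the transfer rate, multiplying the observed QPS by the object size gives a figure close to the reported Transfer/sec; the difference is accounted for by response headers and other protocol overhead.

    # ~4033 requests/s * ~5 KB per object ~= 19.7 MB/s, close to the reported 21.26 MB/s.
    awk 'BEGIN { printf "%.1f MB/s\n", 4033.14 * 5 / 1024 }'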

Conclusion

The results in this blog show the QPS and latency of Alibaba Cloud Object Storage Service (OSS) as observed on the client side, measured with open-source benchmark tools. However, the performance observed on the client side is subject to various influences, such as the request method, local resources (CPU, memory, and network), and the network conditions between the client and OSS.
