This document covers the test environment, tooling, methodology, and results for a performance benchmark of Tair (Enterprise Edition) SSD-based instances.
## Test environment
| Item | Description |
|---|---|
| Region and zone | Hangzhou Zone I |
| Instance architecture | Standard master-replica architecture. For details, see Standard architecture. |
| Stress testing host | Elastic Compute Service (ECS) instance of the ecs.g6e.13xlarge type. For details, see Overview of instance families. |
| Instance types tested | tair.localssd.c1m4.2xlarge, tair.localssd.c1m4.4xlarge, tair.localssd.c1m4.8xlarge |
The benchmark covers two scenarios that represent opposite ends of the memory utilization spectrum:
- **Memory larger than data volume:** Most data fits in memory; the memory-to-data ratio is approximately 7:1. This represents an in-memory-dominant access pattern.
- **Data volume larger than memory:** Only a portion of the data is cached in memory, and most read/write requests require disk I/O; the memory-to-data ratio is approximately 1:4. This represents an SSD-dominant access pattern.
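As a rough illustration of how these ratios translate into bytes, the raw value payload can be estimated as recordcount × fieldlength. This is a sizing sketch only: it ignores key and per-record metadata overhead, and the variable values are taken from the sample test script in this document.

```shell
# Rough sizing sketch (assumption: ignores key and per-record metadata overhead).
# recordcount and fieldlength match the sample test script in this document.
recordcount=640000000
fieldlength=100
raw_bytes=$((recordcount * fieldlength))
echo "Raw value payload: $((raw_bytes / 1024 / 1024 / 1024)) GiB"   # roughly 60 GiB before overhead
```

Comparing this estimate against the instance's memory size indicates which of the two scenarios a given recordcount produces.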
## Test tool
The benchmark uses YCSB (Yahoo! Cloud Serving Benchmark), an open-source, Java-based database benchmarking framework.
The YCSB source code is modified to support a LONG-typed recordcount parameter and to test Redis string commands. Download the modified YCSB source package.
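The LONG-typed recordcount patch matters because the largest dataset in this benchmark (5,120,000,000 records, per the results tables below) does not fit in a 32-bit signed integer. A quick sanity check:

```shell
# Why the LONG-typed recordcount patch is needed: the largest run loads
# 5,120,000,000 records, which exceeds the 32-bit signed int maximum.
int32_max=$((2**31 - 1))      # 2147483647
largest_recordcount=5120000000
if [ "$largest_recordcount" -gt "$int32_max" ]; then
  echo "recordcount overflows a 32-bit int; a LONG type is required"
fi
```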
## Test script
The following script runs performance tests for the data-volume-larger-than-memory scenario:

```shell
#!/bin/bash
ip=192.168.0.23
port=3100
timeout=30000
command_group=string
recordcount=640000000
run_operationcount=20000000
fieldcount=1
fieldlength=100
threads=32
load_sleep_time=600
run_sleep_time=60

echo "##################################### $command_group ############################################"

# Load
./bin/ycsb load redis -s -P workloads/workloada \
    -p "redis.host=${ip}" -p "redis.port=${port}" \
    -p "recordcount=${recordcount}" -p "operationcount=${recordcount}" \
    -p "redis.timeout=${timeout}" -p "redis.command_group=${command_group}" \
    -p "fieldcount=${fieldcount}" -p "fieldlength=${fieldlength}" \
    -threads ${threads}
sleep ${load_sleep_time}

# Uniform-Read
./bin/ycsb run redis -s -P workloads/workloadc \
    -p "redis.host=${ip}" -p "redis.port=${port}" \
    -p "recordcount=${recordcount}" -p "operationcount=${run_operationcount}" \
    -p "redis.timeout=${timeout}" -p "redis.command_group=${command_group}" \
    -p "fieldcount=${fieldcount}" -p "fieldlength=${fieldlength}" \
    -p "requestdistribution=uniform" -threads ${threads}
sleep ${run_sleep_time}

# Zipfian-Read
./bin/ycsb run redis -s -P workloads/workloadc \
    -p "redis.host=${ip}" -p "redis.port=${port}" \
    -p "recordcount=${recordcount}" -p "operationcount=${run_operationcount}" \
    -p "redis.timeout=${timeout}" -p "redis.command_group=${command_group}" \
    -p "fieldcount=${fieldcount}" -p "fieldlength=${fieldlength}" \
    -p "requestdistribution=zipfian" -threads ${threads}
sleep ${run_sleep_time}

# Uniform-50%Read-50%Update
./bin/ycsb run redis -s -P workloads/workloada \
    -p "redis.host=${ip}" -p "redis.port=${port}" \
    -p "recordcount=${recordcount}" -p "operationcount=${run_operationcount}" \
    -p "redis.timeout=${timeout}" -p "redis.command_group=${command_group}" \
    -p "fieldcount=${fieldcount}" -p "fieldlength=${fieldlength}" \
    -p "requestdistribution=uniform" -threads ${threads}
```

## Parameters
| Parameter | Description |
|---|---|
| ip | IP address of the Tair instance. |
| port | Service port of the Tair instance. |
| timeout | Command timeout. Unit: ms. |
| command_group | Data type to test. Set to string. |
| recordcount | Number of records loaded during the data loading phase. |
| run_operationcount | Number of operations during the run phase. For the memory-larger-than-data scenario, set this to the value of recordcount. For the data-larger-than-memory scenario, set this to recordcount ÷ 32. |
| fieldcount | Number of fields per record. Set to 1. |
| fieldlength | Length of each field in bytes. Set to 100. |
| threads | Number of YCSB client threads. Varies by instance type. |
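The run_operationcount rule above can be sketched as follows. This is an illustrative helper, not part of the test script; the scenario variable is a made-up name for this sketch, not a YCSB parameter.

```shell
# Sketch of the run_operationcount rule: equal to recordcount when memory is
# larger than the data volume, recordcount / 32 when data exceeds memory.
# "scenario" is an illustrative variable, not a YCSB parameter.
recordcount=640000000
scenario="data_larger_than_memory"
if [ "$scenario" = "data_larger_than_memory" ]; then
  run_operationcount=$((recordcount / 32))   # 640,000,000 / 32 = 20,000,000
else
  run_operationcount=$recordcount            # memory-larger-than-data: run = load
fi
echo "run_operationcount=${run_operationcount}"
```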
## Test metrics
| Metric | Description |
|---|---|
| QPS | Number of read and write operations processed per second |
| Average latency | Average latency of read or write operations. Unit: µs |
| 99th percentile latency | Highest latency for the fastest 99% of operations. Unit: µs. For example, a value of 500 µs means 99% of operations complete within 500 µs. |
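To make the percentile metric concrete, here is an illustrative way to compute a p99 from raw latency samples (one value per line, in µs). This is not the YCSB implementation; the synthetic `seq` input stands in for a real sample file.

```shell
# Illustrative p99 computation (not the YCSB implementation): given one latency
# sample per line in microseconds, p99 is the value below which 99% of samples
# fall. A synthetic uniform sample (1..100) stands in for real measurements.
seq 1 100 \
  | sort -n \
  | awk '{a[NR]=$1} END {idx=int(NR*0.99); if (idx<1) idx=1; print "p99:", a[idx], "us"}'
# prints "p99: 99 us"
```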
## Test results
### Memory size larger than data volume
Memory-to-data ratio is approximately 7:1. Most requests are served from memory.
| Instance type | YCSB configuration | Workload | QPS | Average latency (µs) | 99th percentile latency (µs) |
|---|---|---|---|---|---|
| tair.localssd.c1m4.2xlarge | recordcount=40,000,000 run_operationcount=40,000,000 threads=64 | Load | 59,830 | 1,066 | 2,761 |
| | | Uniform-Read | 158,221 | 389 | 891 |
| | | Zipfian-Read | 164,233 | 379 | 873 |
| | | Uniform-50%Read-50%Update | 78,099 | Read: 651 / Update: 974 | Read: 2,012 / Update: 2,731 |
| tair.localssd.c1m4.4xlarge | recordcount=80,000,000 run_operationcount=80,000,000 threads=128 | Load | 91,991 | 1,388 | 3,077 |
| | | Uniform-Read | 302,940 | 414 | 921 |
| | | Zipfian-Read | 305,639 | 410 | 899 |
| | | Uniform-50%Read-50%Update | 124,929 | Read: 798 / Update: 1,234 | Read: 2,231 / Update: 3,013 |
| tair.localssd.c1m4.8xlarge | recordcount=160,000,000 run_operationcount=160,000,000 threads=256 | Load | 132,865 | 1,924 | 3,323 |
| | | Uniform-Read | 489,287 | 513 | 1,313 |
| | | Zipfian-Read | 501,847 | 499 | 1,272 |
| | | Uniform-50%Read-50%Update | 187,390 | Read: 1,069 / Update: 1,644 | Read: 2,749 / Update: 3,613 |
### Data volume larger than memory size
Memory-to-data ratio is approximately 1:4. Most requests require disk I/O, so latency is higher than in the memory-dominant scenario above.
| Instance type | YCSB configuration | Workload | QPS | Average latency (µs) | 99th percentile latency (µs) |
|---|---|---|---|---|---|
| tair.localssd.c1m4.2xlarge | recordcount=1,280,000,000 run_operationcount=1,280,000,000 threads=64 | Load | 50,396 | 1,258 | 4,463 |
| | | Uniform-Read | 74,611 | 842 | 1,745 |
| | | Zipfian-Read | 106,366 | 588 | 1,406 |
| | | Uniform-50%Read-50%Update | 47,833 | Read: 1,232 / Update: 1,402 | Read: 4,049 / Update: 4,583 |
| tair.localssd.c1m4.4xlarge | recordcount=2,560,000,000 run_operationcount=2,560,000,000 threads=128 | Load | 81,097 | 1,573 | 4,119 |
| | | Uniform-Read | 118,141 | 1,071 | 3,085 |
| | | Zipfian-Read | 194,704 | 634 | 1,595 |
| | | Uniform-50%Read-50%Update | 75,625 | Read: 1,562 / Update: 1,795 | Read: 4,999 / Update: 5,419 |
| tair.localssd.c1m4.8xlarge | recordcount=5,120,000,000 run_operationcount=5,120,000,000 threads=256 | Load | 115,660 | 2,210 | 5,235 |
| | | Uniform-Read | 202,365 | 1,252 | 3,985 |
| | | Zipfian-Read | 309,019 | 804 | 2,551 |
| | | Uniform-50%Read-50%Update | 122,318 | Read: 1,861 / Update: 2,307 | Read: 5,603 / Update: 6,415 |