
Tair (Redis® OSS-Compatible): Performance whitepaper of ESSD-based instances

Last Updated: Mar 28, 2026

This whitepaper covers the test environment, tool, methodology, and results for performance testing of Tair (Enterprise Edition) ESSD-based instances.

These results reflect a specific test environment and are not performance guarantees or SLA commitments. Run your own benchmarks against your actual workload to determine the right instance type for your application.

Test environment

| Item | Description |
| --- | --- |
| Region and zone | Hangzhou Zone I |
| Instance architecture | Standard master-replica architecture. See Standard architecture. |
| Stress testing machine | Elastic Compute Service (ECS) instance of the ecs.g6e.13xlarge type. See Overview of instance families. |
| ESSD instance types tested | tair.essd.standard.xlarge, tair.essd.standard.2xlarge, tair.essd.standard.4xlarge, tair.essd.standard.8xlarge, tair.essd.standard.13xlarge |

Test scenarios

The tests cover two scenarios that produce significantly different performance characteristics:

  • Memory size larger than data volume — Most data fits in memory (memory:data ratio ≈ 7:1). Read requests are served from memory with minimal disk access.

  • Data volume larger than memory size — Only part of the dataset is cached. Most requests require disk reads or writes (memory:data ratio ≈ 1:4).

For the same instance type, expect significantly lower QPS in Scenario 2 than in Scenario 1. This is normal: once the dataset exceeds available memory, most requests incur disk access.
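
To get a feel for the data volumes involved, note that each YCSB record in these tests carries fieldcount × fieldlength = 100 bytes of value data. The rough sketch below estimates the raw value payload for the largest xlarge run; it deliberately ignores key and metadata overhead, so the real on-disk footprint is somewhat larger:

```shell
# Rough estimate of the value payload YCSB generates for one test run.
# Key bytes and per-record metadata are ignored (an intentional
# simplification), so treat the result as a lower bound.
recordcount=640000000   # tair.essd.standard.xlarge, data > memory scenario
fieldcount=1
fieldlength=100
bytes=$((recordcount * fieldcount * fieldlength))
gib=$((bytes / 1024 / 1024 / 1024))
echo "approx. value payload: ${gib} GiB"   # prints: approx. value payload: 59 GiB
```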

Test tool

The tests use YCSB (Yahoo Cloud Serving Benchmark), an open-source Java tool for benchmarking database performance.

The YCSB source code is modified to accept a LONG-typed recordcount value and to test string commands in Redis. Download the modified YCSB source code to reproduce the tests.
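
The LONG-typed recordcount patch matters because the largest run in this whitepaper loads 7,680,000,000 keys, which does not fit in a 32-bit signed integer. A quick check:

```shell
# The largest recordcount used below (tair.essd.standard.13xlarge,
# data > memory scenario) overflows a 32-bit signed int (max 2^31 - 1),
# which is why recordcount must be parsed as a Java long.
recordcount=7680000000
int_max=2147483647
if [ "$recordcount" -gt "$int_max" ]; then
  echo "recordcount overflows a 32-bit int; LONG parsing is required"
fi
```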

Workloads

Each test run uses one of the following workloads. See Core workloads for full YCSB workload definitions.

| Workload | YCSB config | Operations | Purpose |
| --- | --- | --- | --- |
| Data loading | workloada | 100% SET (strings) | Populate the dataset before read/update tests |
| Uniform-Read | workloadc + requestdistribution=uniform | 100% GET (random keys) | Measure read throughput under worst-case access distribution |
| Zipfian-Read | workloadc + requestdistribution=zipfian | 100% GET (Zipfian distribution) | Measure read throughput when a small portion of keys receives most traffic (the typical production pattern) |
| Uniform-50%Read-50%Update | workloada + requestdistribution=uniform | 50% GET + 50% SET (random keys) | Measure mixed read/update performance |
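
For reference, a YCSB workload file is a plain Java properties file, and any property in it can be overridden on the command line with `-p`. A minimal read-only workload roughly equivalent to workloadc with a uniform distribution might look like the fragment below; the property names follow the standard YCSB core workload properties, but verify them against your YCSB build (older releases use the `com.yahoo.ycsb` package prefix instead of `site.ycsb`):

```properties
# Sketch of a workloadc-style, read-only workload (verify against your YCSB build)
workload=site.ycsb.workloads.CoreWorkload
recordcount=20000000
operationcount=20000000
readproportion=1
updateproportion=0
scanproportion=0
insertproportion=0
requestdistribution=uniform
fieldcount=1
fieldlength=100
```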

Run the tests

The commands below run all four workloads sequentially. The example values (recordcount, threads) shown in the script correspond to the tair.essd.standard.xlarge instance in the data-volume-larger-than-memory scenario. Adjust these values to match the instance type you are testing — refer to the per-instance values in the results tables.
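
For example, the script variables for the tair.essd.standard.xlarge run described above could be initialized as follows. The ip and port values are placeholders for your instance's connection details, and the timeout value is an assumption rather than a value taken from this whitepaper:

```shell
# Example parameter values for tair.essd.standard.xlarge in the
# data-volume-larger-than-memory scenario (see the results tables).
ip="<your-tair-instance-host>"   # placeholder: instance endpoint
port=6379                        # placeholder: instance service port
timeout=5000                     # assumption: 5-second command timeout
command_group=string
recordcount=640000000
run_operationcount=$((recordcount / 32))   # data > memory rule: recordcount / 32
fieldcount=1
fieldlength=100
threads=32
load_sleep_time=600
run_sleep_time=60
echo "run_operationcount=${run_operationcount}"   # prints: run_operationcount=20000000
```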

Step 1: Load data

./bin/ycsb load redis -s -P workloads/workloada \
  -p "redis.host=${ip}" \
  -p "redis.port=${port}" \
  -p "recordcount=${recordcount}" \
  -p "operationcount=${recordcount}" \
  -p "redis.timeout=${timeout}" \
  -p "redis.command_group=${command_group}" \
  -p "fieldcount=${fieldcount}" \
  -p "fieldlength=${fieldlength}" \
  -threads ${threads}

sleep ${load_sleep_time}   # Allow data to flush to disk before reading

Step 2: Test read throughput (uniform distribution)

./bin/ycsb run redis -s -P workloads/workloadc \
  -p "redis.host=${ip}" \
  -p "redis.port=${port}" \
  -p "recordcount=${recordcount}" \
  -p "operationcount=${run_operationcount}" \
  -p "redis.timeout=${timeout}" \
  -p "redis.command_group=${command_group}" \
  -p "fieldcount=${fieldcount}" \
  -p "fieldlength=${fieldlength}" \
  -p "requestdistribution=uniform" \
  -threads ${threads}

sleep ${run_sleep_time}

Step 3: Test read throughput (Zipfian distribution)

./bin/ycsb run redis -s -P workloads/workloadc \
  -p "redis.host=${ip}" \
  -p "redis.port=${port}" \
  -p "recordcount=${recordcount}" \
  -p "operationcount=${run_operationcount}" \
  -p "redis.timeout=${timeout}" \
  -p "redis.command_group=${command_group}" \
  -p "fieldcount=${fieldcount}" \
  -p "fieldlength=${fieldlength}" \
  -p "requestdistribution=zipfian" \
  -threads ${threads}

sleep ${run_sleep_time}

Step 4: Test mixed read/update throughput

./bin/ycsb run redis -s -P workloads/workloada \
  -p "redis.host=${ip}" \
  -p "redis.port=${port}" \
  -p "recordcount=${recordcount}" \
  -p "operationcount=${run_operationcount}" \
  -p "redis.timeout=${timeout}" \
  -p "redis.command_group=${command_group}" \
  -p "fieldcount=${fieldcount}" \
  -p "fieldlength=${fieldlength}" \
  -p "requestdistribution=uniform" \
  -threads ${threads}

Script parameters

| Parameter | Description |
| --- | --- |
| ip | IP address of the Tair instance |
| port | Service port of the Tair instance |
| timeout | Command timeout in milliseconds |
| command_group | Data type to test. Set to string |
| recordcount | Number of keys loaded during the data loading phase. See per-instance values in the results tables |
| run_operationcount | Number of operations per test run. For memory > data: set equal to recordcount. For data > memory: set to recordcount ÷ 32 |
| fieldcount | Number of fields per key. Set to 1 |
| fieldlength | Length of each field in bytes. Set to 100 |
| threads | Number of YCSB client threads. The thread counts used in this whitepaper are selected to saturate each instance type. See per-instance values in the results tables |
| load_sleep_time | Wait time in seconds after loading data before starting read tests. Set to 600 |
| run_sleep_time | Wait time in seconds between consecutive test runs. Set to 60 |

Test metrics

| Metric | Unit | Description |
| --- | --- | --- |
| QPS | ops/sec | Read and write operations processed per second |
| Average latency | µs | Average time per read or write operation |
| 99th percentile latency | µs | The latency threshold below which 99% of operations complete. For example, a value of 500 µs means 99% of operations finish within 500 µs |
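
YCSB computes the percentile metrics itself; the snippet below is only an illustration of how a 99th percentile is derived from raw samples using the nearest-rank method (sort the N samples and take the value at rank ceil(0.99 × N)). The latency values are made up for the example:

```shell
# Illustrative nearest-rank p99 computation over made-up latency samples.
latencies="900 100 500 300 700 200 1000 400 600 800"
p99=$(printf '%s\n' $latencies | sort -n | awk '
  { v[NR] = $1 }
  END {
    r = 0.99 * NR                       # real-valued rank
    idx = int(r); if (idx < r) idx++    # ceil(r)
    print v[idx]
  }')
echo "p99 = ${p99} us"   # prints: p99 = 1000 us
```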

Test results

How to read these results

QPS and latency vary significantly between the two scenarios because of how ESSD-based instances handle data access:

  • Scenario 1 (memory > data): Most reads are served from memory, so QPS is high and latency is low.

  • Scenario 2 (data > memory): Most reads require disk access, so QPS drops and latency increases. Within this scenario, Zipfian-Read still benefits from hot-key caching and outperforms Uniform-Read significantly.

For the same instance type, QPS in Scenario 2 is significantly lower than in Scenario 1. This is expected behavior, not a defect.

Scenario 1: Memory size larger than data volume

Memory:data ratio ≈ 7:1. Results reflect near-memory-speed performance.

Test results for the memory-larger-than-data scenario:

| Instance type | YCSB configuration | Workload | QPS | Average latency (µs) | 99th percentile latency (µs) |
| --- | --- | --- | --- | --- | --- |
| tair.essd.standard.xlarge | recordcount=20,000,000; run_operationcount=20,000,000; threads=32 | Load | 36,740 | 851 | 1,595 |
| | | Uniform-Read | 103,890 | 294 | 907 |
| | | Zipfian-Read | 106,357 | 288 | 865 |
| | | Uniform-50%Read-50%Update | 46,610 | Read: 530 / Update: 795 | Read: 1,108 / Update: 1,684 |
| tair.essd.standard.2xlarge | recordcount=40,000,000; run_operationcount=40,000,000; threads=50 | Load | 54,670 | 911 | 1,528 |
| | | Uniform-Read | 150,796 | 314 | 995 |
| | | Zipfian-Read | 151,110 | 314 | 977 |
| | | Uniform-50%Read-50%Update | 69,137 | Read: 537 / Update: 878 | Read: 948 / Update: 1,479 |
| tair.essd.standard.4xlarge | recordcount=80,000,000; run_operationcount=80,000,000; threads=100 | Load | 90,703 | 1,099 | 1,697 |
| | | Uniform-Read | 285,833 | 339 | 1,196 |
| | | Zipfian-Read | 288,750 | 335 | 1,162 |
| | | Uniform-50%Read-50%Update | 110,316 | Read: 757 / Update: 1,041 | Read: 1,114 / Update: 1,536 |
| tair.essd.standard.8xlarge | recordcount=160,000,000; run_operationcount=160,000,000; threads=120 | Load | 117,581 | 1,011 | 1,692 |
| | | Uniform-Read | 477,099 | 242 | 784 |
| | | Zipfian-Read | 494,550 | 234 | 727 |
| | | Uniform-50%Read-50%Update | 196,245 | Read: 519 / Update: 691 | Read: 829 / Update: 1,096 |
| tair.essd.standard.13xlarge | recordcount=240,000,000; run_operationcount=240,000,000; threads=160 | Load | 126,366 | 1,249 | 2,281 |
| | | Uniform-Read | 673,183 | 231 | 637 |
| | | Zipfian-Read | 691,383 | 230 | 652 |
| | | Uniform-50%Read-50%Update | 197,803 | Read: 678 / Update: 935 | Read: 940 / Update: 1,925 |

Scenario 2: Data volume larger than memory size

Memory:data ratio ≈ 1:4. Most requests require disk access, so QPS is lower and latency is higher compared to Scenario 1.

Test results for the data-larger-than-memory scenario:

| Instance type | YCSB configuration | Workload | QPS | Average latency (µs) | 99th percentile latency (µs) |
| --- | --- | --- | --- | --- | --- |
| tair.essd.standard.xlarge | recordcount=640,000,000; run_operationcount=20,000,000; threads=32 | Load | 25,561 | 1,245 | 3,497 |
| | | Uniform-Read | 25,727 | 1,239 | 2,042 |
| | | Zipfian-Read | 47,559 | 667 | 1,217 |
| | | Uniform-50%Read-50%Update | 19,731 | Read: 1,576 / Update: 1,639 | Read: 6,383 / Update: 6,487 |
| tair.essd.standard.2xlarge | recordcount=1,280,000,000; run_operationcount=40,000,000; threads=50 | Load | 42,287 | 1,179 | 3,465 |
| | | Uniform-Read | 35,794 | 1,394 | 1,880 |
| | | Zipfian-Read | 77,759 | 637 | 1,219 |
| | | Uniform-50%Read-50%Update | 28,656 | Read: 1,716 / Update: 1,761 | Read: 8,863 / Update: 8,951 |
| tair.essd.standard.4xlarge | recordcount=2,560,000,000; run_operationcount=80,000,000; threads=100 | Load | 65,923 | 1,514 | 6,615 |
| | | Uniform-Read | 44,753 | 2,232 | 7,903 |
| | | Zipfian-Read | 120,337 | 826 | 1,382 |
| | | Uniform-50%Read-50%Update | 38,470 | Read: 2,577 / Update: 2,617 | Read: 8,535 / Update: 8,583 |
| tair.essd.standard.8xlarge | recordcount=5,120,000,000; run_operationcount=160,000,000; threads=120 | Load | 89,231 | 1,340 | 9,575 |
| | | Uniform-Read | 51,175 | 2,343 | 2,955 |
| | | Zipfian-Read | 131,317 | 911 | 1,573 |
| | | Uniform-50%Read-50%Update | 38,930 | Read: 3,063 / Update: 3,097 | Read: 8,695 / Update: 8,735 |
| tair.essd.standard.13xlarge | recordcount=7,680,000,000; run_operationcount=240,000,000; threads=160 | Load | 92,163 | 1,733 | 9,879 |
| | | Uniform-Read | 51,267 | 3,510 | 16,623 |
| | | Zipfian-Read | 138,522 | 1,152 | 2,131 |
| | | Uniform-50%Read-50%Update | 39,584 | Read: 4,022 / Update: 4,057 | Read: 12,159 / Update: 12,239 |