
Container Service for Kubernetes:Performance testing for strmvol volumes

Last Updated:Mar 26, 2026

This page provides benchmark data for strmvol volumes across two dimensions: metadata index building efficiency and data read throughput. Use this data to evaluate whether strmvol meets your workload's performance requirements and to select the right resourceLimit configuration.

Important

The following figures are reference values measured in a controlled test environment. Actual performance depends on your operating environment.

Metadata index building

When no pod on a node is mounting a strmvol volume, the first pod to mount it triggers the node mount initialization process. During this phase, the system creates virtual block devices and builds metadata indexes for OSS files. The pod stays in the ContainerCreating state until initialization completes.

Test environment: ecs.g8i.2xlarge, cn-beijing region

The table below compares index building time and resource consumption between erofs (Alibaba Cloud Linux 3) and ext4 (other operating systems).

| Number of files under the OSS mount target | erofs: building time | erofs: memory peak | erofs: CPU peak | ext4: building time | ext4: memory peak | ext4: CPU peak |
| --- | --- | --- | --- | --- | --- | --- |
| 100,000 | 4.09 s | 125 MB | 113% | 6.96 s | 150 MB | 116% |
| 1,000,000 | 11.07 s | 871 MB | 201% | 35.37 s | 512 MB | 192% |
| 10,000,000 | 130.59 s | 8.7 GB | 247% | 407.00 s | 2.4 GB | 253% |
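Within each filesystem, build time grows roughly in proportion to the number of files. As a planning aid only, the sketch below linearly interpolates between the measured erofs data points from the table above; this is an approximation under test-environment conditions, not a guarantee for your cluster:

```python
# Rough estimate of strmvol metadata index build time, obtained by
# linear interpolation between the measured erofs data points above
# (file count, build time in seconds). Illustrative only.
EROFS_POINTS = [(100_000, 4.09), (1_000_000, 11.07), (10_000_000, 130.59)]

def estimate_build_seconds(num_files: int, points=EROFS_POINTS) -> float:
    """Interpolate build time; clamp to the measured range at the ends."""
    pts = sorted(points)
    if num_files <= pts[0][0]:
        return pts[0][1]
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if num_files <= x1:
            return y0 + (y1 - y0) * (num_files - x0) / (x1 - x0)
    return pts[-1][1]

print(f"{estimate_build_seconds(5_000_000):.2f}")  # about 64.19 s for 5 million files
```

The same approach works for the ext4 column; substitute its data points if your nodes do not run Alibaba Cloud Linux 3.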

Data read performance

Choose the right test scenario

Before reading the tables below, identify which scenario matches your workload:

  • Random reads of many small files with direct mode enabled: see Random read, small files, direct mode enabled.

  • Sequential reads or single-stream large-file reads with direct mode disabled: see Sequential read and single-stream large files, direct mode disabled.

How resourceLimit affects performance

resourceLimit caps the CPU and memory that the strmvol process can use on a node. This cap determines the maximum read throughput the volume can reach:

  • Below the cap: all resourceLimit configurations deliver similar throughput.

  • At the cap: throughput stops scaling with additional concurrency. On non-Alibaba Cloud Linux 3 systems, all resourceLimit configurations reach their throughput ceiling at 64 concurrent reads.

  • To raise the cap: select a higher resourceLimit tier (for example, move from 2c4g to 4c8g).

Random read, small files, direct mode enabled

Test environment: ecs.g7nex.32xlarge, Alibaba Cloud Linux 3, cn-beijing region
Test scenario: 100 KB image files, random reads, direct mode enabled

| Concurrency | Throughput (MB/s) | Throughput (img/s) | Performance ceiling |
| --- | --- | --- | --- |
| 4 | 11.53 | 101.06 | |
| 8 | 21.99 | 192.62 | |
| 16 | 48.01 | 417.95 | |
| 32 | 93.90 | 817.45 | |
| 64 | 180.88 | 1,577.12 | Non-Alibaba Cloud Linux 3 ceiling |
| 128 | 312.82 | 2,727.48 | 2c4g ceiling |
| 256 | 513.54 | 4,475.20 | 4c8g ceiling |
| 512 | 974.47 | 8,491.96 | 8c16g ceiling |
| 1,024 | 1,306.61 | 11,386.33 | 16c32g ceiling |

Sequential read and single-stream large files, direct mode disabled

Test environment: ecs.g7nex.32xlarge, Alibaba Cloud Linux 3, cn-beijing region
Test scenario: 100 KB image files (sequential, 256 concurrent requests) and single-stream large files, direct mode disabled

The preset resourceLimit values are tuned for optimal performance across general read-only workloads.

| Resource limit | Sequential read, 256 concurrent (MB/s) | Sequential read, 256 concurrent (img/s) | Single-stream large file (MB/s) |
| --- | --- | --- | --- |
| 2c4g | 349.89 | 2,742.05 | 216 |
| 4c8g | 789.52 | 6,187.34 | 342 |
| 8c16g | 1,446.17 | 11,333.37 | 548 |
| 16c32g | 2,383.38 | 18,678.12 | 926 |
Note

In the single-stream large file scenario, 8c16g mode delivers 2.5–2.7 GB/s of throughput. If your workload has specific read patterns that the preset values do not cover, submit a ticket to request assistance.

Solution comparison

Test environment: ecs.g8i.2xlarge, Alibaba Cloud Linux 3, cn-beijing region

The table compares strmvol against ossfs under equivalent conditions. Direct mode is enabled only for the small-file random read scenario (128 KB text files).

| Solution (configuration) | 4-thread random read (MB/s) | 4-thread sequential read (MB/s) | Single-stream large file (MB/s) |
| --- | --- | --- | --- |
| ossfs, default configuration | 8.4 | 8.4 | 179.2 |
| ossfs, direct read enabled, 1 GB memory pool | 3.4 | 3.4 | 293.4 |
| strmvol, 2c4g | 24.9 | 40.0 | 196.8 |
| strmvol, 4c8g | 95.6 | 147.1 | 334.5 |
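A quick way to read the comparison is as speedup ratios of strmvol over the ossfs default configuration. The snippet below is plain arithmetic on the table values above, nothing more:

```python
# Speedup of strmvol (4c8g) over the ossfs default configuration,
# computed directly from the comparison table above (all values in MB/s).
ossfs_default = {"random": 8.4, "sequential": 8.4, "large_file": 179.2}
strmvol_4c8g  = {"random": 95.6, "sequential": 147.1, "large_file": 334.5}

for workload in ossfs_default:
    speedup = strmvol_4c8g[workload] / ossfs_default[workload]
    print(f"{workload}: {speedup:.1f}x")
# random: 11.4x, sequential: 17.5x, large_file: 1.9x
```

The gap is largest for many-file random and sequential reads; for single-stream large files, ossfs with direct read enabled remains competitive, so choose based on your dominant access pattern.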