PolarDB: Performance test results of PolarDB for MySQL 8.0.1 Cluster Edition

Last Updated: Oct 17, 2025

This topic describes the online transactional processing (OLTP) performance test results for PolarDB for MySQL 8.0.1 Cluster Edition.

Note

For more information, see OLTP performance testing.

Dedicated specifications

The performance test data in this topic was collected from a PolarDB cluster that consists of a single read/write node and a single read-only node. For large-scale testing, we adjusted endpoints and optimized key parameters to minimize link latency and fully utilize the resources of the PolarDB cluster.

Parameter adjustments

Parameter adjustments for large-scale specifications

For the specification with 120 cores and 920 GB of memory, we adjusted the following parameters:

Note

DBNodeClassCPU specifies the number of CPU cores in the current compute node.

| Parameter | Before modification | After modification | Optimization description |
| --- | --- | --- | --- |
| loose_innodb_lock_sys_rec_partition | 1 | 120 | Adjusts the number of lock system partitions to match the number of CPU cores. This reduces lock contention in high-concurrency scenarios and improves concurrency performance. |
| loose_thread_pool_size | {DBNodeClassCPU × 2} | {DBNodeClassCPU × 1} | Dynamically adjusts the thread pool size based on the number of CPU cores to optimize resource allocation and prevent excessive thread competition. |
| loose_innodb_csn_lockfree | OFF | ON | Enables the lock-free Commit Sequence Number (CSN) mechanism to improve transaction commit efficiency and reduce lock overhead. |
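
To confirm the effective values after such a change, one option is to query them from the node, as in the minimal sketch below. It assumes these parameters are exposed as server variables once the loose_ prefix is dropped, which may not hold for every parameter; the endpoint and account are placeholders, not values from this test.

```bash
# Hypothetical check of the adjusted parameters. An empty result simply means the
# variable is not exposed on this node; the values themselves are managed as
# cluster parameters in the console.
mysql -h "<cluster-endpoint>" -u test_user -p -e "
  SHOW VARIABLES LIKE 'innodb_lock_sys_rec_partition';
  SHOW VARIABLES LIKE 'thread_pool_size';
  SHOW VARIABLES LIKE 'innodb_csn_lockfree';"
```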

Parameter optimization for other specifications

For other specifications, we configured the following parameter to ensure optimal cluster performance:

| Parameter | Value | Description |
| --- | --- | --- |
| loose_innodb_lock_sys_rec_partition | {DBNodeClassCPU} | Adjusts the number of lock system partitions to match the number of CPU cores. This reduces lock contention in high-concurrency scenarios and improves concurrency performance. |
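
As an illustration of how the template resolves, on a hypothetical 32-core node {DBNodeClassCPU} expands to 32, so the parameter ends up with 32 lock system partitions:

```bash
# Illustrative only: expanding {DBNodeClassCPU} for a hypothetical 32-core node.
DBNodeClassCPU=32
echo "loose_innodb_lock_sys_rec_partition=${DBNodeClassCPU}"   # prints loose_innodb_lock_sys_rec_partition=32
```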

Performance test results

Note

This performance test uses the sysbench tool to stress the cluster by incrementally increasing the number of concurrent threads. The test starts with a low concurrency level, such as 1 thread, and then increases the thread count in steps, such as 8, 16, 32, 64, and 128. At each concurrency level, the test runs until key performance indicators, such as queries per second (QPS) and transactions per second (TPS), stabilize. To ensure data reliability, the test continues to run for a period after the performance curve stabilizes, and the average values over this stable period are recorded as the performance data for that concurrency level. The test stops when the average QPS and TPS no longer increase as more concurrent threads are added. The final result is the peak QPS and TPS recorded across all concurrency levels.
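
The ramp-up procedure can be scripted. The following is a minimal sketch that assumes sysbench 1.0 with the bundled oltp_read_write workload and a test table set that has already been prepared with the `prepare` command; the endpoint, account, table count, table size, and per-round duration are placeholders, not values from this test.

```bash
#!/usr/bin/env bash
# Hypothetical concurrency ramp: run one round per thread count and record the
# stable-window averages reported by sysbench. All connection settings and data
# sizes below are placeholders.
ENDPOINT="<primary-endpoint-of-your-cluster>"

for threads in 1 8 16 32 64 128 256; do
  echo "=== ${threads} concurrent threads ==="
  sysbench oltp_read_write \
    --mysql-host="${ENDPOINT}" --mysql-port=3306 \
    --mysql-user=test_user --mysql-password='<password>' --mysql-db=sbtest \
    --tables=250 --table-size=25000 \
    --threads="${threads}" --time=300 --report-interval=10 \
    run
done
```

Stop increasing the thread count once the averaged QPS and TPS stop improving, and take the best round as the peak result.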

Peak read-only performance

The following figure and table show the performance test results for each specification.

Note

In the read-only scenario, the parameter --range-selects=0 is used. This configuration disables range queries and effectively tests the oltp_point_selects scenario, which focuses on point-select queries.
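
For example, a single read-only round might look like the sketch below. Connection settings and data sizes are placeholders; the option is spelled --range_selects=off here, which matches sysbench 1.0, while this guide writes it as --range-selects=0 (the exact spelling depends on the sysbench build).

```bash
# Hypothetical read-only round at one concurrency level. Disabling range selects
# leaves only point-select queries, matching the oltp_point_selects case.
sysbench oltp_read_only \
  --mysql-host="<cluster-endpoint>" --mysql-port=3306 \
  --mysql-user=test_user --mysql-password='<password>' --mysql-db=sbtest \
  --tables=250 --table-size=25000 \
  --range_selects=off \
  --threads=128 --time=300 --report-interval=10 \
  run
```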

(Figure: peak read-only QPS and TPS by specification)

| Specification | Read-only QPS | Read-only TPS |
| --- | --- | --- |
| 2 cores, 8 GB | 123234.31 | 12323.43 |
| 2 cores, 16 GB | 128407.45 | 12840.74 |
| 4 cores, 16 GB | 261992.95 | 26199.3 |
| 4 cores, 32 GB | 263557.53 | 26355.75 |
| 8 cores, 32 GB | 518849.56 | 51884.96 |
| 8 cores, 64 GB | 514733 | 51473.3 |
| 16 cores, 32 GB | 919903.31 | 91990.33 |
| 16 cores, 64 GB | 968769 | 96876.9 |
| 16 cores, 128 GB | 922508.84 | 92250.88 |
| 32 cores, 128 GB | 1435099.19 | 143509.91 |
| 32 cores, 256 GB | 1431018.25 | 143101.84 |
| 64 cores, 512 GB | 1975797.41 | 197579.74 |
| 120 cores, 920 GB | 3805887 | 380588.7 |

Peak read-write performance

The following figure and table show the performance test results for each specification.

(Figure: peak read-write QPS and TPS by specification)

| Specification | Read-write QPS | Read-write TPS |
| --- | --- | --- |
| 2 cores, 8 GB | 34659.68 | 1732.98 |
| 2 cores, 16 GB | 38909.52 | 1945.48 |
| 4 cores, 16 GB | 90971.5 | 4548.57 |
| 4 cores, 32 GB | 95615.19 | 4780.76 |
| 8 cores, 32 GB | 197093.04 | 9854.65 |
| 8 cores, 64 GB | 198704.02 | 9935.2 |
| 16 cores, 32 GB | 387869.61 | 19393.48 |
| 16 cores, 64 GB | 390841.8 | 19542.09 |
| 16 cores, 128 GB | 380663.09 | 19033.15 |
| 32 cores, 128 GB | 687255 | 34362.74 |
| 32 cores, 256 GB | 663323.67 | 33166.17 |
| 64 cores, 512 GB | 984014.28 | 49200.72 |
| 120 cores, 920 GB | 2195884 | 109794.20 |

Peak write performance

The following figure and table show the performance test results for each specification.

(Figure: peak write QPS and TPS by specification)

| Specification | Write QPS | Write TPS |
| --- | --- | --- |
| 2 cores, 8 GB | 23914.56 | 3985.76 |
| 2 cores, 16 GB | 25353.92 | 4225.65 |
| 4 cores, 16 GB | 65187.37 | 10864.56 |
| 4 cores, 32 GB | 63711.64 | 10618.61 |
| 8 cores, 32 GB | 135037.31 | 22506.22 |
| 8 cores, 64 GB | 127477.78 | 21246.3 |
| 16 cores, 32 GB | 231777.32 | 38629.56 |
| 16 cores, 64 GB | 237859.68 | 39643.28 |
| 16 cores, 128 GB | 232331.81 | 38721.97 |
| 32 cores, 128 GB | 381489.46 | 63581.58 |
| 32 cores, 256 GB | 377614.51 | 62935.76 |
| 64 cores, 512 GB | 608817.22 | 101469.57 |
| 120 cores, 920 GB | 879742.76 | 146623.8 |