This guide walks you through running TPC-C tests on a PolarDB-X instance to measure its Online Transaction Processing (OLTP) performance. The test covers MySQL 5.7 and MySQL 8.0 database engines across four instance specifications.
These tests are based on the TPC-C benchmark but do not satisfy all of its requirements. Results cannot be compared with published TPC-C benchmark results.
Background
Transaction Processing Performance Council Benchmark C (TPC-C) is an industry-standard OLTP benchmark. It models a wholesale supplier with nine tables and five transaction types:
| Transaction type | Description |
|---|---|
| NewOrder | New order generation |
| Payment | Order payments |
| OrderStatus | Order status queries |
| Delivery | Order deliveries |
| StockLevel | Inventory analysis |
TPC-C measures performance using tpmC (transactions per minute C), which counts the number of NewOrder transactions processed per minute. This is the maximum qualified throughput (MQTh) of the system.
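The arithmetic behind the metric is simple division. A minimal sketch, using hypothetical counts (not measured results) for a 5-minute run:

```python
# Illustrative tpmC calculation: only NewOrder transactions count toward
# the metric, averaged over the test duration. Values are hypothetical.
new_order_count = 1_535_291  # NewOrder transactions completed during the run
run_minutes = 5              # stress test duration in minutes

tpmc = new_order_count / run_minutes
print(f"tpmC = {tpmc:.2f}")  # tpmC = 307058.20
```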
Test design
Data volume
The test uses 1,000 warehouses. Table row counts:
| Table | Row count |
|---|---|
| bmsql_order_line | 300 million |
| bmsql_stock | 100 million |
| bmsql_customer | 30 million |
| bmsql_history | 30 million |
| bmsql_oorder | 30 million |
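These row counts scale linearly with the warehouse count. A quick sketch of the relationship (per-warehouse cardinalities are simply the totals above divided by the 1,000 warehouses):

```python
# Derive expected table sizes from the warehouse count. Per-warehouse
# figures below are the totals in the table above divided by 1,000.
warehouses = 1_000
rows_per_warehouse = {
    "bmsql_order_line": 300_000,  # roughly 10 order lines per order
    "bmsql_stock": 100_000,
    "bmsql_customer": 30_000,
    "bmsql_history": 30_000,
    "bmsql_oorder": 30_000,
}
totals = {table: n * warehouses for table, n in rows_per_warehouse.items()}
for table, rows in totals.items():
    print(f"{table}: {rows:,} rows")
```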
Instance specifications
| Node specifications | Number of nodes |
|---|---|
| 4C32G | 2 |
| 4C32G | 4 |
| 8C64G | 2 |
| 8C64G | 4 |
Stress test client: ecs.g6.8xlarge (32 vCPUs, 128 GB memory)
Expected results at a glance
The following tables summarize the peak tpmC values across specifications and concurrency levels. Use these to select the instance size that matches your performance requirements before running the test.
MySQL 5.7
| Instance specifications | Peak tpmC | At concurrency |
|---|---|---|
| 4C32G × 2 | ~108,894 | 256 terminals |
| 4C32G × 4 | ~210,437 | 512 terminals |
| 8C64G × 2 | ~156,307 | 256 terminals |
| 8C64G × 4 | ~316,085 | 512 terminals |
MySQL 8.0
| Instance specifications | Peak tpmC | At concurrency |
|---|---|---|
| 4C32G × 2 | ~88,074 | 128 terminals |
| 4C32G × 4 | ~166,960 | 256 terminals |
| 8C64G × 2 | ~118,498 | 128 terminals |
| 8C64G × 4 | ~240,314 | 256 terminals |
Prerequisites
Before you begin, ensure that you have:
An Alibaba Cloud account with permissions to create Elastic Compute Service (ECS) and PolarDB-X instances
A virtual private cloud (VPC) — record its name and ID for use throughout this guide
Run the TPC-C test
Step 1: Create an ECS instance
Create an ECS instance with 32 vCPUs and 128 GB of memory (ecs.g6.8xlarge) to run the stress test client. This specification prevents the client from becoming a bottleneck when testing higher-spec PolarDB-X instances.
Deploy the ECS instance in your VPC. All resources in this guide must reside in the same VPC.
Step 2: Create a PolarDB-X instance
Create a PolarDB-X instance. Select MySQL 5.7 or MySQL 8.0 based on your requirements. See Create a PolarDB-X instance.
Create a test database named tpcc_1000. See Create a database.

```sql
CREATE DATABASE tpcc_1000;
```
The ECS instance and the PolarDB-X instance must be in the same VPC.
Step 3: Configure instance parameters
Tune the compute nodes of the PolarDB-X instance to maximize stress test performance.
Set the following parameters. See Parameter settings.
| Parameter | Value |
|---|---|
| ENABLE_COROUTINE | true |
| XPROTO_MAX_DN_CONCURRENT | 4000 |
| XPROTO_MAX_DN_WAIT_CONNECTION | 4000 |

Connect to the PolarDB-X instance using a command-line client and run the following SQL statements in the same session to disable logging and CPU statistical sampling:

```sql
set global RECORD_SQL=false;
set global MPP_METRIC_LEVEL=0;
set global ENABLE_CPU_PROFILE=false;
set global ENABLE_TRANS_LOG=false;
```
Step 4: Prepare the test data
Data loading is typically the most time-consuming stage of the entire TPC-C test. Use nohup to keep the import running in case your SSH session disconnects.
Install BenchmarkSQL
BenchmarkSQL 5.0 does not support the MySQL protocol by default and must be compiled to add MySQL support. The package below includes this modification.
Download benchmarksql.tar.gz, upload it to the ECS instance, and extract it:
```shell
tar xzvf benchmarksql.tar.gz
```

Configure the test
Edit the props.mysql configuration file:
```shell
cd benchmarksql/run
vi props.mysql
```

Use the following configuration, replacing the placeholder values with your PolarDB-X connection details:
```properties
db=mysql
driver=com.mysql.jdbc.Driver
conn=jdbc:mysql://{HOST}:{PORT}/tpcc_1000?readOnlyPropagatesToServer=false&rewriteBatchedStatements=true&failOverReadOnly=false&connectTimeout=3000&socketTimeout=90000&allowMultiQueries=true&clobberStreamingResults=true&characterEncoding=utf8&netTimeoutForStreamingResults=0&autoReconnect=true
user={USER}
password={PASSWORD}
warehouses=1000
loadWorkers=100
terminals=128

//To run specified transactions per terminal- runMins must equal zero
runTxnsPerTerminal=0
//To run for specified minutes- runTxnsPerTerminal must equal zero
runMins=5
//Number of total transactions per minute
limitTxnsPerMin=0

//Set to true to run in 4.x compatible mode. Set to false to use the
//entire configured database evenly.
terminalWarehouseFixed=true

//The following five values must add up to 100
//The default percentages of 45, 43, 4, 4 & 4 match the TPC-C spec
newOrderWeight=45
paymentWeight=43
orderStatusWeight=4
deliveryWeight=4
stockLevelWeight=4

// Directory name to create for collecting detailed result data.
// Comment this out to suppress.
resultDirectory=my_result_%tY-%tm-%td_%tH%tM%tS
// osCollectorScript=./misc/os_collector_linux.py
// osCollectorInterval=1
// osCollectorSSHAddr=user@dbhost
// osCollectorDevices=net_eth0 blk_sda
```

Key parameters:
| Parameter | Description |
|---|---|
conn | Connection string for the PolarDB-X instance. Set {HOST} and {PORT} to your instance's endpoint. |
user | Username for the PolarDB-X instance |
password | Password for the username |
warehouses | Number of warehouses (test data scale) |
loadWorkers | Number of concurrent workers for data import |
terminals | Number of concurrent terminals during the stress test |
runMins | Stress test duration, in minutes |
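The comments in props.mysql encode two constraints: the five transaction weights must sum to 100, and at least one of runMins and runTxnsPerTerminal must be zero. A standalone sanity check (the helper below is illustrative and not part of BenchmarkSQL):

```python
# Parse a props.mysql-style file and verify the constraints described
# in its comments. Hypothetical helper, for illustration only.
def check_props(text: str) -> None:
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("//") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        props[key.strip()] = value.strip()

    weights = [int(props[k]) for k in (
        "newOrderWeight", "paymentWeight", "orderStatusWeight",
        "deliveryWeight", "stockLevelWeight")]
    assert sum(weights) == 100, "transaction weights must add up to 100"

    # At least one of the two run-length settings must be zero.
    assert int(props["runMins"]) == 0 or int(props["runTxnsPerTerminal"]) == 0

sample = """
newOrderWeight=45
paymentWeight=43
orderStatusWeight=4
deliveryWeight=4
stockLevelWeight=4
runMins=5
runTxnsPerTerminal=0
"""
check_props(sample)
print("props.mysql constraints satisfied")
```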
Import the test data
```shell
cd benchmarksql/run/sql.common
cp tableCreates.sql.auto tableCreates.sql
cd ..
nohup ./runDatabaseBuild.sh props.mysql &
```

This imports more than 500 million rows using 100 concurrent loadWorkers. The process takes several hours.
Verify data integrity
After the import completes, connect to the PolarDB-X instance and run the following queries. If all result sets are empty, the data is complete and consistent.
```sql
select a.* from (Select w_id, w_ytd from bmsql_warehouse) a left join (select d_w_id, sum(d_ytd) as d_ytd_sum from bmsql_district group by d_w_id) b on a.w_id = b.d_w_id and a.w_ytd = b.d_ytd_sum where b.d_w_id is null;

select a.* from (Select d_w_id, d_id, D_NEXT_O_ID - 1 as d_n_o_id from bmsql_district) a left join (select o_w_id, o_d_id, max(o_id) as o_id_max from bmsql_oorder group by o_w_id, o_d_id) b on a.d_w_id = b.o_w_id and a.d_id = b.o_d_id and a.d_n_o_id = b.o_id_max where b.o_w_id is null;

select a.* from (Select d_w_id, d_id, D_NEXT_O_ID - 1 as d_n_o_id from bmsql_district) a left join (select no_w_id, no_d_id, max(no_o_id) as no_id_max from bmsql_new_order group by no_w_id, no_d_id) b on a.d_w_id = b.no_w_id and a.d_id = b.no_d_id and a.d_n_o_id = b.no_id_max where b.no_id_max is null;

select * from (select (count(no_o_id)-(max(no_o_id)-min(no_o_id)+1)) as diff from bmsql_new_order group by no_w_id, no_d_id) a where diff != 0;

select a.* from (select o_w_id, o_d_id, sum(o_ol_cnt) as o_ol_cnt_cnt from bmsql_oorder group by o_w_id, o_d_id) a left join (select ol_w_id, ol_d_id, count(ol_o_id) as ol_o_id_cnt from bmsql_order_line group by ol_w_id, ol_d_id) b on a.o_w_id = b.ol_w_id and a.o_d_id = b.ol_d_id and a.o_ol_cnt_cnt = b.ol_o_id_cnt where b.ol_w_id is null;

select a.* from (select d_w_id, sum(d_ytd) as d_ytd_sum from bmsql_district group by d_w_id) a left join (Select w_id, w_ytd from bmsql_warehouse) b on a.d_w_id = b.w_id and a.d_ytd_sum = b.w_ytd where b.w_id is null;
```

Step 5: Run the stress test
```shell
cd benchmarksql/run
./runBenchmark.sh props.mysql
```

The test prints real-time tpmC values while running and displays the final average tpmC when it completes. The output looks similar to:
```
[2024/07/16 17:00:05.845] Average tpmC: 306979.98 Current tpmC: 308052.00 Memory Usage: 661MB / 3584MB
[2024/07/16 17:00:10.845] Average tpmC: 307029.91 Current tpmC: 309876.00 Memory Usage: 1024MB / 3584MB
[2024/07/16 17:00:15.845] Average tpmC: 307111.09 Current tpmC: 311820.00 Memory Usage: 429MB / 3584MB
[2024/07/16 17:00:20.846] Average tpmC: 307108.95 Current tpmC: 306982.60 Memory Usage: 780MB / 3584MB
17:00:21,011 [Thread-508] INFO jTPCC : Term-00, Measured tpmC (NewOrders) = 307058.27
17:00:21,011 [Thread-508] INFO jTPCC : Term-00, Measured tpmTOTAL = 681928.39
17:00:21,011 [Thread-508] INFO jTPCC : Term-00, Transaction Count = 3411539
17:00:21,011 [Thread-508] INFO jTPCC : Term-00,
17:00:21,011 [Thread-508] INFO jTPCC : Term-00,
17:00:21,011 [Thread-508] INFO jTPCC : Term-00, Session Start = 2024-07-16 16:55:20
17:00:21,011 [Thread-508] INFO jTPCC : Term-00, Session End = 2024-07-16 17:00:21
```

The key metric is Measured tpmC (NewOrders): the average number of NewOrder transactions per minute over the test duration.
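When automating repeated runs, the final metrics can be scraped from a captured log. A minimal sketch (the helper name and regex are illustrative, not part of BenchmarkSQL):

```python
import re

# Pull a named final metric out of a captured BenchmarkSQL run log.
log = """
17:00:21,011 [Thread-508] INFO jTPCC : Term-00, Measured tpmC (NewOrders) = 307058.27
17:00:21,011 [Thread-508] INFO jTPCC : Term-00, Measured tpmTOTAL = 681928.39
17:00:21,011 [Thread-508] INFO jTPCC : Term-00, Transaction Count = 3411539
"""

def parse_metric(log_text: str, name: str) -> float:
    match = re.search(rf"{re.escape(name)} = ([\d.]+)", log_text)
    if match is None:
        raise ValueError(f"metric {name!r} not found")
    return float(match.group(1))

tpmc = parse_metric(log, "Measured tpmC (NewOrders)")
print(tpmc)  # 307058.27
```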
Results
MySQL 5.7
Version: polardb-2.4.0_5.4.19-20240610_xcluster5.4.19-20240527.

| Instance specifications | tpmC at 64 terminals | tpmC at 128 terminals | tpmC at 256 terminals | tpmC at 512 terminals | tpmC at 1,024 terminals |
|---|---|---|---|---|---|
| 4C32G × 2 | 91,858.36 | 107,268.04 | 108,894.07 | 106,442.37 | 98,404.9 |
| 4C32G × 4 | 117,329.67 | 178,425.44 | 206,076.83 | 210,437.44 | 209,218.11 |
| 8C64G × 2 | 106,817.2 | 145,097.95 | 156,306.53 | 152,141.55 | 138,043.88 |
| 8C64G × 4 | 117,888.5 | 208,051.75 | 296,975.34 | 316,085.12 | 307,058.28 |
MySQL 8.0
Version: polardb-2.4.0_5.4.19-20240610_xcluster8.4.19-20240523.

| Instance specifications | tpmC at 64 terminals | tpmC at 128 terminals | tpmC at 256 terminals | tpmC at 512 terminals | tpmC at 1,024 terminals |
|---|---|---|---|---|---|
| 4C32G × 2 | 71,546.19 | 88,074.26 | 85,959.83 | 79,893.78 | 67,109.97 |
| 4C32G × 4 | 88,000.71 | 133,363.91 | 166,959.56 | 166,549.16 | 158,594.61 |
| 8C64G × 2 | 89,677.54 | 118,498.21 | 117,536.04 | 114,574.98 | 101,338.24 |
| 8C64G × 4 | 101,735.81 | 171,088.45 | 240,313.94 | 236,207.55 | 203,671.55 |
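As a rough sanity check on these tables, doubling the node count at the same node size roughly doubles peak tpmC. For example, using the 8C64G peak values copied from the tables above:

```python
# Compute the 2-node -> 4-node speedup at peak tpmC for the 8C64G size.
# Values are taken directly from the results tables above.
peaks = {
    ("5.7", "8C64G x 2"): 156_306.53,
    ("5.7", "8C64G x 4"): 316_085.12,
    ("8.0", "8C64G x 2"): 118_498.21,
    ("8.0", "8C64G x 4"): 240_313.94,
}
speedups = {}
for version in ("5.7", "8.0"):
    ratio = peaks[(version, "8C64G x 4")] / peaks[(version, "8C64G x 2")]
    speedups[version] = ratio
    print(f"MySQL {version}: 2 -> 4 nodes peak speedup = {ratio:.2f}x")
```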