
PolarDB:TPC-C tests

Last Updated: Mar 28, 2026

This guide walks you through running TPC-C tests on a PolarDB-X instance to measure its Online Transaction Processing (OLTP) performance. The test covers MySQL 5.7 and MySQL 8.0 database engines across four instance specifications.

Note

These tests are based on the TPC-C benchmark but do not satisfy all of its requirements. Results cannot be compared with published TPC-C benchmark results.

Background

Transaction Processing Performance Council Benchmark C (TPC-C) is an industry-standard OLTP benchmark. It models a wholesale supplier with 10 tables and five transaction types:

| Transaction type | Description |
| --- | --- |
| NewOrder | New order generation |
| Payment | Order payments |
| OrderStatus | Order status queries |
| Delivery | Order deliveries |
| StockLevel | Inventory analysis |

TPC-C measures performance using tpmC (transactions per minute C), which counts the number of NewOrder transactions processed per minute. This is the maximum qualified throughput (MQTh) of the system.
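In other words, tpmC is simply the number of NewOrder transactions completed divided by the elapsed minutes. A minimal illustration (the counts below are made up for the example, not from a real run):

```python
# tpmC = NewOrder transactions completed / elapsed minutes.
# These figures are hypothetical.
new_order_txns = 1_535_000
elapsed_minutes = 5
tpmc = new_order_txns / elapsed_minutes
print(f"tpmC = {tpmc:,.0f}")  # tpmC = 307,000
```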

Test design

Data volume

The test uses 1,000 warehouses. Table row counts:

| Table | Row count |
| --- | --- |
| bmsql_order_line | 300 million |
| bmsql_stock | 100 million |
| bmsql_customer | 30 million |
| bmsql_history | 30 million |
| bmsql_oorder | 30 million |
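These row counts follow from the TPC-C initial-population rules, which scale linearly with the warehouse count. A sketch of the arithmetic (per-warehouse figures are the TPC-C nominal values):

```python
# Initial rows per warehouse under the TPC-C population rules.
PER_WAREHOUSE = {
    "bmsql_stock": 100_000,
    "bmsql_customer": 30_000,      # 10 districts x 3,000 customers
    "bmsql_history": 30_000,       # one history row per customer
    "bmsql_oorder": 30_000,        # one initial order per customer
    "bmsql_order_line": 300_000,   # ~10 order lines per order
}
warehouses = 1_000
totals = {table: rows * warehouses for table, rows in PER_WAREHOUSE.items()}
print(totals["bmsql_order_line"])  # 300000000
```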

Instance specifications

| Node specification | Number of nodes |
| --- | --- |
| 4C32G | 2 |
| 4C32G | 4 |
| 8C64G | 2 |
| 8C64G | 4 |

Stress test client: ecs.g6.8xlarge (32 vCPUs, 128 GB memory)

Expected results at a glance

The following tables summarize the peak tpmC values across specifications and concurrency levels. Use these to select the instance size that matches your performance requirements before running the test.

MySQL 5.7

| Instance specification | Peak tpmC | At concurrency |
| --- | --- | --- |
| 4C32G × 2 | ~108,894 | 256 terminals |
| 4C32G × 4 | ~210,437 | 512 terminals |
| 8C64G × 2 | ~156,307 | 256 terminals |
| 8C64G × 4 | ~316,085 | 512 terminals |
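Note that peak throughput scales near-linearly with node count. A quick check on the 4C32G MySQL 5.7 peaks above:

```python
# Peak tpmC figures from the MySQL 5.7 summary table above.
peak_2_nodes = 108_894   # 4C32G x 2
peak_4_nodes = 210_437   # 4C32G x 4
scaling = peak_4_nodes / peak_2_nodes
print(f"{scaling:.2f}x")  # 1.93x
```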

MySQL 8.0

| Instance specification | Peak tpmC | At concurrency |
| --- | --- | --- |
| 4C32G × 2 | ~88,074 | 128 terminals |
| 4C32G × 4 | ~166,960 | 256 terminals |
| 8C64G × 2 | ~118,498 | 128 terminals |
| 8C64G × 4 | ~240,314 | 256 terminals |

Prerequisites

Before you begin, ensure that you have:

  • An Alibaba Cloud account with permissions to create Elastic Compute Service (ECS) and PolarDB-X instances

  • A virtual private cloud (VPC) — record its name and ID for use throughout this guide

Run the TPC-C test

Step 1: Create an ECS instance

Create an ECS instance with 32 vCPUs and 128 GB of memory (ecs.g6.8xlarge) to run the stress test client. This specification prevents the client from becoming a bottleneck when testing higher-spec PolarDB-X instances.

Note

Deploy the ECS instance in your VPC. All resources in this guide must reside in the same VPC.

Step 2: Create a PolarDB-X instance

  1. Create a PolarDB-X instance. Select MySQL 5.7 or MySQL 8.0 based on your requirements. See Create a PolarDB-X instance.

  2. Create a test database named tpcc_1000. See Create a database.

    CREATE DATABASE tpcc_1000;
Note

The ECS instance and the PolarDB-X instance must be in the same VPC.

Step 3: Configure instance parameters

Tune the compute nodes of the PolarDB-X instance to maximize stress test performance.

  1. Set the following parameters. See Parameter settings.

    | Parameter | Value |
    | --- | --- |
    | ENABLE_COROUTINE | true |
    | XPROTO_MAX_DN_CONCURRENT | 4000 |
    | XPROTO_MAX_DN_WAIT_CONNECTION | 4000 |
  2. Connect to the PolarDB-X instance using a command-line client and run the following SQL statements in the same session to disable logging and CPU statistical sampling:

    set global RECORD_SQL=false;
    set global MPP_METRIC_LEVEL=0;
    set global ENABLE_CPU_PROFILE=false;
    set global ENABLE_TRANS_LOG=false;

Step 4: Prepare the test data

Important

Data loading is typically the most time-consuming stage of the entire TPC-C test. Use nohup to keep the import running in case your SSH session disconnects.

Install BenchmarkSQL

Note

BenchmarkSQL 5.0 does not support the MySQL protocol by default and must be compiled to add MySQL support. The package below includes this modification.

Download benchmarksql.tar.gz, upload it to the ECS instance, and extract it:

tar xzvf benchmarksql.tar.gz

Configure the test

Edit the props.mysql configuration file:

cd benchmarksql/run
vi props.mysql

Use the following configuration, replacing the placeholder values with your PolarDB-X connection details:

db=mysql
driver=com.mysql.jdbc.Driver
conn=jdbc:mysql://{HOST}:{PORT}/tpcc_1000?readOnlyPropagatesToServer=false&rewriteBatchedStatements=true&failOverReadOnly=false&connectTimeout=3000&socketTimeout=90000&allowMultiQueries=true&clobberStreamingResults=true&characterEncoding=utf8&netTimeoutForStreamingResults=0&autoReconnect=true
user={USER}
password={PASSWORD}

warehouses=1000
loadWorkers=100

terminals=128
//To run specified transactions per terminal- runMins must equal zero
runTxnsPerTerminal=0
//To run for specified minutes- runTxnsPerTerminal must equal zero
runMins=5
//Number of total transactions per minute
limitTxnsPerMin=0

//Set to true to run in 4.x compatible mode. Set to false to use the
//entire configured database evenly.
terminalWarehouseFixed=true

//The following five values must add up to 100
//The default percentages of 45, 43, 4, 4 & 4 match the TPC-C spec
newOrderWeight=45
paymentWeight=43
orderStatusWeight=4
deliveryWeight=4
stockLevelWeight=4

// Directory name to create for collecting detailed result data.
// Comment this out to suppress.
resultDirectory=my_result_%tY-%tm-%td_%tH%tM%tS

// osCollectorScript=./misc/os_collector_linux.py
// osCollectorInterval=1
// osCollectorSSHAddr=user@dbhost
// osCollectorDevices=net_eth0 blk_sda

Key parameters:

| Parameter | Description |
| --- | --- |
| conn | Connection string for the PolarDB-X instance. Set {HOST} and {PORT} to your instance's endpoint. |
| user | Username for the PolarDB-X instance |
| password | Password for the username |
| warehouses | Number of warehouses (test data scale) |
| loadWorkers | Number of concurrent workers for data import |
| terminals | Number of concurrent terminals during the stress test |
| runMins | Stress test duration, in minutes |
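It can help to sanity-check the assembled JDBC URL before editing props.mysql. A small sketch (the host, port, and database name below are placeholders, not real endpoints):

```python
# Assemble the JDBC URL from its parts (hypothetical connection details).
host = "pxc-example.polarx.rds.aliyuncs.com"   # your instance endpoint
port = 3306
database = "tpcc_1000"
options = "&".join([
    "rewriteBatchedStatements=true",   # batches INSERTs during data loading
    "connectTimeout=3000",
    "socketTimeout=90000",
])
conn = f"jdbc:mysql://{host}:{port}/{database}?{options}"
print(conn)
```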

Import the test data

cd benchmarksql/run/sql.common
cp tableCreates.sql.auto tableCreates.sql
cd ..
nohup ./runDatabaseBuild.sh props.mysql &

This imports more than 500 million rows using 100 concurrent loadWorkers. The process takes several hours.

Verify data integrity

After the import completes, connect to the PolarDB-X instance and run the following queries. If all result sets are empty, the data is complete and consistent.

select a.* from (Select w_id, w_ytd from bmsql_warehouse) a left join (select d_w_id, sum(d_ytd) as d_ytd_sum from bmsql_district group by d_w_id) b on a.w_id = b.d_w_id and a.w_ytd = b.d_ytd_sum where b.d_w_id is null;

select a.* from (Select d_w_id, d_id, D_NEXT_O_ID - 1 as d_n_o_id from bmsql_district) a left join (select o_w_id, o_d_id, max(o_id) as o_id_max from bmsql_oorder group by  o_w_id, o_d_id) b on a.d_w_id = b.o_w_id and a.d_id = b.o_d_id and a.d_n_o_id = b.o_id_max where b.o_w_id is null;

select a.* from (Select d_w_id, d_id, D_NEXT_O_ID - 1 as d_n_o_id from bmsql_district) a left join (select no_w_id, no_d_id, max(no_o_id) as no_id_max from bmsql_new_order group by no_w_id, no_d_id) b on a.d_w_id = b.no_w_id and a.d_id = b.no_d_id and a.d_n_o_id = b.no_id_max where b.no_id_max is null;

select * from (select (count(no_o_id)-(max(no_o_id)-min(no_o_id)+1)) as diff from bmsql_new_order group by no_w_id, no_d_id) a where diff != 0;

select a.* from (select o_w_id, o_d_id, sum(o_ol_cnt) as o_ol_cnt_cnt from bmsql_oorder  group by o_w_id, o_d_id) a left join (select ol_w_id, ol_d_id, count(ol_o_id) as ol_o_id_cnt from bmsql_order_line group by ol_w_id, ol_d_id) b on a.o_w_id = b.ol_w_id and a.o_d_id = b.ol_d_id and a.o_ol_cnt_cnt = b.ol_o_id_cnt where b.ol_w_id is null;

select a.* from (select d_w_id, sum(d_ytd) as d_ytd_sum from bmsql_district group by d_w_id) a left join (Select w_id, w_ytd from bmsql_warehouse) b on a.d_w_id = b.w_id and a.d_ytd_sum = b.w_ytd where b.w_id is null;
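Each query follows the same pattern: compute an aggregate two ways and return only the rows where they disagree. For example, the first check asserts that every warehouse's w_ytd equals the sum of its districts' d_ytd. A toy model of that invariant (figures are made up):

```python
# Toy data: one warehouse whose 10 districts should sum to w_ytd.
w_ytd = 300_000.0
district_ytd = [30_000.0] * 10  # per-district year-to-date totals

# The SQL returns rows only where the totals disagree; an empty
# result set therefore means the data is consistent.
mismatches = [] if w_ytd == sum(district_ytd) else [("w_1", w_ytd)]
print(mismatches)  # []
```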

Step 5: Run the stress test

cd benchmarksql/run
./runBenchmark.sh props.mysql

The test prints real-time tpmC values while running and displays the final average tpmC when it completes. The output looks similar to:

[2024/07/16 17:00:05.845] Average tpmC: 306979.98 Current tpmC: 308052.00 Memory Usage: 661MB / 3584MB
[2024/07/16 17:00:10.845] Average tpmC: 307029.91 Current tpmC: 309876.00 Memory Usage: 1024MB / 3584MB
[2024/07/16 17:00:15.845] Average tpmC: 307111.09 Current tpmC: 311820.00 Memory Usage: 429MB / 3584MB
[2024/07/16 17:00:20.846] Average tpmC: 307108.95 Current tpmC: 306982.60 Memory Usage: 780MB / 3584MB
17:00:21,011 [Thread-508] INFO jTPCC : Term-00, Measured tpmC (NewOrders) = 307058.27
17:00:21,011 [Thread-508] INFO jTPCC : Term-00, Measured tpmTOTAL = 681928.39
17:00:21,011 [Thread-508] INFO jTPCC : Term-00, Transaction Count = 3411539
17:00:21,011 [Thread-508] INFO jTPCC : Term-00,
17:00:21,011 [Thread-508] INFO jTPCC : Term-00,
17:00:21,011 [Thread-508] INFO jTPCC : Term-00, Session Start = 2024-07-16 16:55:20
17:00:21,011 [Thread-508] INFO jTPCC : Term-00, Session End = 2024-07-16 17:00:21

The key metric is Measured tpmC (NewOrders) — the average number of NewOrder transactions per minute over the test duration.
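As a sanity check, tpmC should track the newOrderWeight configured in props.mysql (45% of all transactions). Using the figures from the sample output above:

```python
# Figures from the sample run output above.
tpmc = 307_058.27        # Measured tpmC (NewOrders)
tpm_total = 681_928.39   # Measured tpmTOTAL
share = tpmc / tpm_total
print(f"NewOrder share: {share:.1%}")  # NewOrder share: 45.0%
```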

Results

MySQL 5.7

Note

Version: polardb-2.4.0_5.4.19-20240610_xcluster5.4.19-20240527.

| Instance specification | tpmC (64 terminals) | tpmC (128 terminals) | tpmC (256 terminals) | tpmC (512 terminals) | tpmC (1,024 terminals) |
| --- | --- | --- | --- | --- | --- |
| 4C32G × 2 | 91,858.36 | 107,268.04 | 108,894.07 | 106,442.37 | 98,404.9 |
| 4C32G × 4 | 117,329.67 | 178,425.44 | 206,076.83 | 210,437.44 | 209,218.11 |
| 8C64G × 2 | 106,817.2 | 145,097.95 | 156,306.53 | 152,141.55 | 138,043.88 |
| 8C64G × 4 | 117,888.5 | 208,051.75 | 296,975.34 | 316,085.12 | 307,058.28 |

MySQL 8.0

Note

Version: polardb-2.4.0_5.4.19-20240610_xcluster8.4.19-20240523.

| Instance specification | tpmC (64 terminals) | tpmC (128 terminals) | tpmC (256 terminals) | tpmC (512 terminals) | tpmC (1,024 terminals) |
| --- | --- | --- | --- | --- | --- |
| 4C32G × 2 | 71,546.19 | 88,074.26 | 85,959.83 | 79,893.78 | 67,109.97 |
| 4C32G × 4 | 88,000.71 | 133,363.91 | 166,959.56 | 166,549.16 | 158,594.61 |
| 8C64G × 2 | 89,677.54 | 118,498.21 | 117,536.04 | 114,574.98 | 101,338.24 |
| 8C64G × 4 | 101,735.81 | 171,088.45 | 240,313.94 | 236,207.55 | 203,671.55 |