Developed and released by the Transaction Processing Performance Council (TPC), the TPC Benchmark C (TPC-C) is widely used in the industry to evaluate the online transaction processing (OLTP) capabilities of databases. TPC-C involves 10 tables and 5 business transaction models: NewOrder for generating new orders, Payment for paying for orders, OrderStatus for querying recent orders, Delivery for delivering orders, and StockLevel for analyzing the inventory shortage status.
TPC-C measures the maximum qualified throughput (MQTh) of the system in transactions per minute (tpmC). The measurement is based on NewOrder transactions: what is actually counted is the number of new orders processed per minute.
- Test results of common instances are mainly based on 1,000 warehouses. The major tables and their data volumes are as follows: the bmsql_order_line table contains 300 million rows, the bmsql_stock table contains 100 million rows, and the bmsql_customer, bmsql_history, and bmsql_oorder tables each contain 30 million rows.
- To verify the horizontal scaling capability of DRDS instances, a TPC-C test for ultra-large instances is introduced. The resource usage is nearly 10 times that of common instances. In the stress test design for ultra-large instances, 10,000 warehouses were built, and three 32-core Elastic Compute Service (ECS) instances were required to serve as the stress test machines, so that the stress test machines do not become a bottleneck.
- Test environment for the Enterprise Edition: a 32-core 128 GB DRDS instance of the Enterprise Edition (16-core 64 GB per DRDS server node) and four 8-core 32 GB dedicated ApsaraDB RDS for MySQL 5.7 instances
- Test environment for the Standard Edition: a 16-core 64 GB DRDS instance of the Standard Edition (8-core 32 GB per DRDS server node) and four 4-core 32 GB dedicated ApsaraDB RDS for MySQL 5.7 instances
- Test environment for ultra-large instances: a 256-core 1024 GB DRDS instance of the Enterprise Edition (16-core 64 GB per DRDS server node) and twelve 32-core 128 GB dedicated ApsaraDB RDS for MySQL 5.7 instances
|Number of concurrent threads|tpmC for the Standard Edition|tpmC for the Enterprise Edition|tpmC for the ultra-large instances|
|---|---|---|---|
|1 client × 1,000 concurrent threads|65,735.14|101,620.8|/|
|6 clients × 1,000 concurrent threads|/|/|821,547.97|
First, prepare an ECS instance on which you will perform all subsequent operations, such as preparing data and running stress tests.
- We recommend that you select the VPC network type, because classic-network ApsaraDB RDS for MySQL instances of some specifications may be out of stock. Record the ID and name of the VPC. All the instances described in subsequent steps will be deployed in this VPC.
Create a DRDS instance and the corresponding ApsaraDB RDS for MySQL instances in the same VPC as the ECS instance. In the DRDS instance, create a database that you want to stress test, such as the tpcc database.
This topic uses the open-source BenchmarkSQL 5.0 as the TPC-C test tool.
By default, BenchmarkSQL does not support the MySQL protocol, so its source code must be modified and recompiled to add that support.
You can download the adapted stress test package tpcc.tar.gz as an attachment and decompress it to the tpcc directory:
tar zxvf tpcc.tar.gz -C tpcc
Compared with the open-source version, the adapted package modifies the following files:
src/client/jTPCC.java (Add the MySQL type.)
src/client/jTPCCConnection.java (Add the support for MySQL syntax by adding an alias.)
src/LoadData/LoadData.java (Disable the mechanism of large transactions when the loader is loading data.)
src/LoadData/LoadDataWorker.java (Disable the mechanism of large transactions when the loader is loading data.)
run/funcs.sh (Add a MySQL type to the script.)
run/runDatabaseBuild.sh (Delete an unnecessary phase.)
run/runBenchmark.sh (Adjust the default JVM parameters.)
run/runLoader.sh (Adjust the default JVM parameters.)
run/sql.common/foreignKeys.sql (Comment out all creation tasks of foreign keys. DRDS does not support foreign keys.)
run/sql.common/indexCreates.sql (Comment out all creation tasks of primary keys, and retain creation tasks of only two indexes. By default, a MySQL database directly creates indexes when creating a table.)
run/sql.common/indexDrops.sql (Comment out all drop statements for primary keys.)
run/sql.common/tableCreates.sql (Add a primary key and a shard key. DRDS requires that a shard key be specified.)
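For example, the change to run/sql.common/tableCreates.sql turns each table definition into sharded DRDS DDL along the following lines. This is only a sketch based on the standard BenchmarkSQL warehouse schema; the choice of w_id as the shard key is an assumption, not taken from the package itself:

```sql
CREATE TABLE bmsql_warehouse (
  w_id        INTEGER      NOT NULL,
  w_ytd       DECIMAL(12,2),
  w_tax       DECIMAL(4,4),
  w_name      VARCHAR(10),
  w_street_1  VARCHAR(20),
  w_street_2  VARCHAR(20),
  w_city      VARCHAR(20),
  w_state     CHAR(2),
  w_zip       CHAR(9),
  PRIMARY KEY (w_id)           -- primary key added in CREATE TABLE
) dbpartition BY HASH(w_id);   -- shard key, required by DRDS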
Create the props.mysql file in the tpcc/run directory.
// --------- env config --------- //
// The number of warehouses.
// The number of concurrent threads for importing data. Every 100 concurrent threads generate an estimated 20,000 transactions per second (TPS). You can adjust the number of concurrent threads based on the target TPS.
// The JVM memory size for runLoader.sh. The value is 4 GB by default for 100 concurrent loadWorkers. We recommend that you set this parameter to 16 GB if 500 concurrent loadWorkers are needed.
// The number of concurrent threads for running the business logic in the TPC-C stress test.
// The duration of a stress test. Unit: minutes.
// ---------- default config ------- //
//To run specified transactions per terminal- runMins must equal zero
//Number of total transactions per minute
//Set to true to run in 4.x compatible mode. Set to false to use the
//entire configured database evenly.
//The following five values must add up to 100
//The default percentages of 45, 43, 4, 4 & 4 match the TPC-C spec
// Directory name to create for collecting detailed result data.
// Comment this out to suppress.
// osCollectorDevices=net_eth0 blk_sda
The configurations are described as follows:
- When you import stress test data, note the number of warehouses and the number of concurrent loadWorkers.
- When you perform a TPC-C stress test, note the number of concurrent terminals and runMins, which indicates the duration of the stress test.
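Putting these parameters together, a props.mysql for the 1,000-warehouse test might look like the following sketch. The values are illustrative rather than the exact ones used in this test, the connection URL, user name, and password are placeholders, and the JVM memory size mentioned above is adjusted in runLoader.sh itself rather than here:

```properties
db=mysql
driver=com.mysql.jdbc.Driver
conn=jdbc:mysql://your-drds-endpoint:3306/tpcc?useSSL=false&rewriteBatchedStatements=true
user=tpcc_user
password=your_password

warehouses=1000
loadWorkers=100

terminals=1000
//To run specified transactions per terminal- runMins must equal zero
runTxnsPerTerminal=0
runMins=10
//Number of total transactions per minute (0 is commonly used for no limit)
limitTxnsPerMin=0

terminalWarehouseFixed=true

//The following five values must add up to 100
//The default percentages of 45, 43, 4, 4 & 4 match the TPC-C spec
newOrderWeight=45
paymentWeight=43
orderStatusWeight=4
deliveryWeight=4
stockLevelWeight=4

// Directory name to create for collecting detailed result data.
resultDirectory=my_result_%tY-%tm-%td_%tH%tM%tS
```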
- Prepare stress test data for 1,000 warehouses.
nohup ./runDatabaseBuild.sh props.mysql &
By default, 100 concurrent threads are used to import a total of more than 500 million rows of data, which takes several hours. We recommend that you use nohup to run the task in the background so that the import is not interrupted if the SSH connection drops.
- Run a TPC-C test, for example:
nohup ./runBenchmark.sh props.mysql &
After you run the test, you can view the test results. The metric for the stress test is tpmC.
08:56:16,844 [Thread-883] INFO jTPCC : Term-00, Measured tpmC (NewOrders) = 104230.88
08:56:16,844 [Thread-883] INFO jTPCC : Term-00, Measured tpmTOTAL = 231664.49
08:56:16,844 [Thread-883] INFO jTPCC : Term-00, Session Start = 2019-09-19 08:54:16
08:56:16,845 [Thread-883] INFO jTPCC : Term-00, Session End = 2019-09-19 08:56:16
08:56:16,845 [Thread-883] INFO jTPCC : Term-00, Transaction Count = 465440
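As a quick sanity check on such a log (the figures below are copied from the sample output above), the measured NewOrder throughput should account for roughly the 45% weight that newOrderWeight assigns in the configuration:

```shell
# Ratio of measured tpmC (NewOrders) to tpmTOTAL, taken from the sample log;
# with the default TPC-C transaction mix this lands near 0.45.
tpmC=104230.88
tpmTOTAL=231664.49
awk -v c="$tpmC" -v t="$tpmTOTAL" 'BEGIN { printf "NewOrder share: %.2f\n", c / t }'
# Prints: NewOrder share: 0.45
```

A ratio far from the configured weight usually indicates that many NewOrder transactions were rolled back or that the transaction mix was changed.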
- Clear stress test data, for example:
./runDatabaseDestroy.sh props.mysql