
PolarDB:TPC-C test

Last Updated:Dec 28, 2023

This topic describes how to use TPC Benchmark C (TPC-C) tools to evaluate the online transaction processing (OLTP) capabilities of PolarDB-X databases. To evaluate the performance of your database system, you can follow the procedures described in this topic to test your databases and compare the test results.

Background information

TPC-C is a benchmark that is widely used to evaluate the OLTP capabilities of databases. It is developed and published by the Transaction Processing Performance Council (TPC). TPC-C involves 10 tables and five types of business transactions: NewOrder for generating new orders, Payment for paying for orders, OrderStatus for querying recent orders, Delivery for delivering orders, and StockLevel for analyzing inventory shortages. TPC-C uses transactions-per-minute-C (tpmC) to measure the maximum qualified throughput (MQTh) of a system. The measurement counts only NewOrder transactions, which means that tpmC is the number of new orders processed per minute.

Note

The TPC-C performance tests described in this topic are implemented based on the TPC-C benchmark test but cannot meet all requirements of the TPC-C benchmark test. Therefore, the test results described in this topic cannot be compared with the published results of the TPC-C benchmark test.

Test design

  • Amount of test data

    The test results in this topic are obtained based on 1,000 warehouses. The following list describes the amount of data in each major table:

    • The bmsql_order_line table contains 300 million rows of data.

    • The bmsql_stock table contains 100 million rows of data.

    • Each of the bmsql_customer, bmsql_history, and bmsql_oorder tables contains 30 million rows of data.

  • Instance specifications for the TPC-C test

    Node specifications      | Number of nodes
    4 vCPUs, 32 GB of memory | 2
    4 vCPUs, 32 GB of memory | 4
    8 vCPUs, 64 GB of memory | 2
    8 vCPUs, 64 GB of memory | 4

  • ECS instance type for stress testing

    ecs.g6.8xlarge (32 vCPUs, 128 GB of memory)

Procedure

  1. Create an Elastic Compute Service (ECS) instance for stress testing.

    Create an ECS instance that is used to prepare data and perform stress tests. To prevent performance bottlenecks when you test clusters with high specifications, we recommend that you create an ECS instance with 32 vCPUs and 128 GB of memory.

    Note

    The ECS instance must be deployed in a virtual private cloud (VPC). Record the name and the ID of the VPC for future use. You must deploy all database instances that are described in subsequent steps in this VPC.

  2. Create a PolarDB-X instance for stress testing.

    1. Create a PolarDB-X instance. For more information about how to create a PolarDB-X instance, see Create a PolarDB-X instance.

      Note

      Make sure that the ECS instance and the PolarDB-X instance are in the same VPC.

    2. Create a database to be tested in the PolarDB-X instance. In this example, a database named tpcc_1000 is created.

      CREATE DATABASE tpcc_1000  MODE = 'auto';
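
      If you prefer to run the statement from the ECS instance, the following is a minimal sketch that uses the standard MySQL command-line client. The {HOST}, {PORT}, and {USER} placeholders stand for the connection information of your PolarDB-X instance.

      # A sketch: create the test database from the ECS instance by using the
      # standard MySQL command-line client. Replace the placeholders with the
      # connection information of your PolarDB-X instance.
      mysql -h {HOST} -P {PORT} -u {USER} -p -e "CREATE DATABASE tpcc_1000 MODE = 'auto'"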
  3. Adjust instance parameters.

    Note

    To achieve optimal performance in stress tests, adjust some parameters of the compute node of the PolarDB-X instance.

    1. Set the ENABLE_COROUTINE parameter to true and the XPROTO_MAX_DN_CONCURRENT and XPROTO_MAX_DN_WAIT_CONNECTION parameters to 4000. For more information, see Parameter settings.

    2. Connect to the PolarDB-X instance by using a command-line client. Then, execute the following SQL statements in the same session to disable SQL logging and CPU statistics sampling and to adjust other parameters for the stress test. A sketch for executing the statements in one batch follows the statements.

      set global RECORD_SQL=false;
      set global MPP_METRIC_LEVEL=0;
      set global ENABLE_CPU_PROFILE=false;
      set global ENABLE_BACKGROUND_STATISTIC_COLLECTION=false;
      set global ENABLE_STATISTIC_FEEDBACK=false;
      set global ENABLE_DEADLOCK_DETECTION=false;
      set global ENABLE_TRANS_LOG=false;
      set global GROUP_PARALLELISM=1;
      set global CONN_POOL_MAX_POOL_SIZE=500;
      set global ENABLE_STATEMENTS_SUMMARY=false;
      set global ENABLE_AUTO_SAVEPOINT=false;
      set global INNODB_ADAPTIVE_HASH_INDEX=off;
      set global TABLE_OPEN_CACHE = 20000;
      set global SHARE_READ_VIEW = false;
      set global CONN_POOL_XPROTO_XPLAN = true;
      set global NEW_SEQ_GROUPING_TIMEOUT=30000;
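
      If you do not want to paste the statements one by one, you can also save them to a file and execute them in a single batch, which keeps them in the same session. The following sketch assumes that the statements are stored in a file named tuning.sql (a file name chosen for this example) and that the MySQL command-line client is installed on the ECS instance.

      # A sketch: execute the SET GLOBAL statements above in one batch.
      # tuning.sql is an example file name that contains the statements, and
      # {HOST}, {PORT}, and {USER} are placeholders for your connection information.
      mysql -h {HOST} -P {PORT} -u {USER} -p < tuning.sql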
  4. Prepare data for stress testing.

    1. Prepare a stress testing tool.

      Note

      In this example, open source BenchmarkSQL 5.0 is used as the TPC-C test tool. By default, BenchmarkSQL does not support the MySQL protocol, so it must be modified and compiled before it can be used with PolarDB-X.

      Download the stress test tool package benchmarksql.tar.gz and run the following command to decompress it:

      tar xzvf benchmarksql.tar.gz
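
      If the package that you downloaded is already compiled, you can use it directly. Otherwise, the following sketch shows one way to build it, under the assumptions that a JDK and Apache Ant are installed on the ECS instance and that the package already contains the source changes and MySQL JDBC driver needed for MySQL support.

      # A sketch: build BenchmarkSQL 5.0 with Apache Ant, assuming a JDK is installed.
      # The MySQL JDBC driver (mysql-connector-java) must be available under the lib
      # directory if it is not already bundled in the package.
      java -version
      cd benchmarksql
      ant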
    2. Configure the stress test.

      Specify the connection information about the PolarDB-X instance in the configuration file props.mysql.

      cd benchmarksql/run
      vi props.mysql

      The following code shows the content of the configuration file. The parameters contained in the configuration file are described after the code.

      db=mysql
      driver=com.mysql.jdbc.Driver
      conn=jdbc:mysql://{HOST}:{PORT}/tpcc_1000?readOnlyPropagatesToServer=false&rewriteBatchedStatements=true&failOverReadOnly=false&connectTimeout=3000&socketTimeout=90000&allowMultiQueries=true&clobberStreamingResults=true&characterEncoding=utf8&netTimeoutForStreamingResults=0&autoReconnect=true
      user={USER}
      password={PASSWORD}
      
      warehouses=1000
      loadWorkers=100
      
      terminals=128
      //To run specified transactions per terminal- runMins must equal zero
      runTxnsPerTerminal=0
      //To run for specified minutes- runTxnsPerTerminal must equal zero
      runMins=5
      //Number of total transactions per minute
      limitTxnsPerMin=0
      
      //Set to true to run in 4.x compatible mode. Set to false to use the
      //entire configured database evenly.
      terminalWarehouseFixed=true
      
      //The following five values must add up to 100
      //The default percentages of 45, 43, 4, 4 & 4 match the TPC-C spec
      newOrderWeight=45
      paymentWeight=43
      orderStatusWeight=4
      deliveryWeight=4
      stockLevelWeight=4
      
      // Directory name to create for collecting detailed result data.
      // Comment this out to suppress.
      resultDirectory=my_result_%tY-%tm-%td_%tH%tM%tS
      
      // osCollectorScript=./misc/os_collector_linux.py
      // osCollectorInterval=1
      // osCollectorSSHAddr=user@dbhost
      // osCollectorDevices=net_eth0 blk_sda
      • conn: the connection string that is used to connect to the PolarDB-X instance. You must specify the values of the {HOST} and {PORT} fields, and make sure that the database name in the URL matches the database that you created, which is tpcc_1000 in this example. A sketch for filling in the placeholders follows this list.

      • user: the username used to log on to the PolarDB-X instance.

      • password: the password used to log on to the PolarDB-X instance.

      • warehouses: the number of warehouses.

      • loadWorkers: the number of concurrent workers that are used to import data.

      • terminals: the number of concurrent terminals in the TPC-C stress test.

      • runMins: the duration of the stress test. Unit: minute.
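
      The {HOST}, {PORT}, {USER}, and {PASSWORD} placeholders in props.mysql must be replaced with your own values before you run the test. The following is a minimal sketch that fills them in with sed; the endpoint, account, and password shown are hypothetical examples.

      # A sketch: replace the placeholders in props.mysql with your own connection
      # information. The endpoint, account, and password below are hypothetical.
      cd benchmarksql/run
      sed -i 's/{HOST}/pxc-example.polarx.rds.aliyuncs.com/; s/{PORT}/3306/' props.mysql
      sed -i 's/{USER}/tpcc_user/; s/{PASSWORD}/YourPassword123/' props.mysql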

    3. Import the stress test data.

      Run the following commands on the ECS instance to import the stress test data:

      cd benchmarksql/run/sql.common
      cp tableCreates.sql.auto  tableCreates.sql
      cd ..
      nohup ./runDatabaseBuild.sh props.mysql &
      Note

      By default, 100 concurrent loadWorkers are used to import a total of more than 500 million rows of data, which takes several hours. We recommend that you use nohup to run the import task in the background so that the task is not interrupted by Secure Shell (SSH) disconnections. You can monitor the progress of the import as described after this note.
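
      The following sketch shows one way to monitor the import while it runs in the background. nohup.out is the default output file of nohup, and the expected row count is the approximate value for 1,000 warehouses that is listed in the test design.

      # A sketch: monitor the background import.
      # Follow the loader output that nohup writes to nohup.out by default.
      tail -f nohup.out

      # Roughly check progress by counting rows in the largest table.
      # For 1,000 warehouses, bmsql_order_line ends up at about 300 million rows.
      mysql -h {HOST} -P {PORT} -u {USER} -p -e "SELECT COUNT(*) FROM tpcc_1000.bmsql_order_line"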

    4. Verify data integrity.

      Connect to the PolarDB-X instance by using the command-line client and execute the following SQL statements. If all of the returned result sets are empty, the test data is complete and consistent.

      select a.* from (Select w_id, w_ytd from bmsql_warehouse) a left join (select d_w_id, sum(d_ytd) as d_ytd_sum from bmsql_district group by d_w_id) b on a.w_id = b.d_w_id and a.w_ytd = b.d_ytd_sum where b.d_w_id is null;
      
      select a.* from (Select d_w_id, d_id, D_NEXT_O_ID - 1 as d_n_o_id from bmsql_district) a left join (select o_w_id, o_d_id, max(o_id) as o_id_max from bmsql_oorder group by  o_w_id, o_d_id) b on a.d_w_id = b.o_w_id and a.d_id = b.o_d_id and a.d_n_o_id = b.o_id_max where b.o_w_id is null;
      
      select a.* from (Select d_w_id, d_id, D_NEXT_O_ID - 1 as d_n_o_id from bmsql_district) a left join (select no_w_id, no_d_id, max(no_o_id) as no_id_max from bmsql_new_order group by no_w_id, no_d_id) b on a.d_w_id = b.no_w_id and a.d_id = b.no_d_id and a.d_n_o_id = b.no_id_max where b.no_id_max is null;
      
      select * from (select (count(no_o_id)-(max(no_o_id)-min(no_o_id)+1)) as diff from bmsql_new_order group by no_w_id, no_d_id) a where diff != 0;
      
      select a.* from (select o_w_id, o_d_id, sum(o_ol_cnt) as o_ol_cnt_cnt from bmsql_oorder  group by o_w_id, o_d_id) a left join (select ol_w_id, ol_d_id, count(ol_o_id) as ol_o_id_cnt from bmsql_order_line group by ol_w_id, ol_d_id) b on a.o_w_id = b.ol_w_id and a.o_d_id = b.ol_d_id and a.o_ol_cnt_cnt = b.ol_o_id_cnt where b.ol_w_id is null;
      
      select a.* from (select d_w_id, sum(d_ytd) as d_ytd_sum from bmsql_district group by d_w_id) a left join (Select w_id, w_ytd from bmsql_warehouse) b on a.d_w_id = b.w_id and a.d_ytd_sum = b.w_ytd where b.w_id is null;
  5. Perform stress testing.

    Run the following commands to perform the TPC-C test:

    cd benchmarksql/run
    ./runBenchmark.sh props.mysql

    During stress testing, you can view the real-time tpmC values. After the test is complete, the average tpmC value is displayed. The following example shows the results of stress testing.

    [2022/01/26 22:00:22.995]    Average tpmC: 228425.52    Current tpmC: 235152.00    Memory Usage: 1001MB / 3584MB
    [2022/01/26 22:00:27.995]    Average tpmC: 228641.71    Current tpmC: 235764.00    Memory Usage: 571MB / 3584MB
    [2022/01/26 22:00:32.995]    Average tpmC: 228816.06    Current tpmC: 234744.00    Memory Usage: 101MB / 3584MB
    [2022/01/26 22:00:37.995]    Average tpmC: 229017.73    Current tpmC: 236076.00    Memory Usage: 705MB / 3584MB
    
    22:00:38,071 [Thread-132] INFO   jTPCC : Term-00,
    22:00:38,071 [Thread-132] INFO   jTPCC : Term-00,
    22:00:38,071 [Thread-132] INFO   jTPCC : Term-00, Measured tpmC (NewOrders) = 228974.04
    22:00:38,071 [Thread-132] INFO   jTPCC : Term-00, Measured tpmTOTAL = 509473.72
    22:00:38,071 [Thread-132] INFO   jTPCC : Term-00, Session Start     = 2022-01-26 21:57:37
    22:00:38,072 [Thread-132] INFO   jTPCC : Term-00, Session End       = 2022-01-26 22:00:38
    22:00:38,072 [Thread-132] INFO   jTPCC : Term-00, Transaction Count = 1529074
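
    The results in the following tables were collected at 64, 128, 256, 512, and 1024 concurrent terminals. If you want to repeat the test at several concurrency levels, you can use a loop similar to the following sketch, which edits the terminals parameter in props.mysql with sed and extracts the measured tpmC of each run from the captured output. The run_*.log file names are chosen for this example.

    # A sketch: run the benchmark at several terminal counts and record the
    # measured tpmC of each run. The run_*.log file names are example names.
    cd benchmarksql/run
    for t in 64 128 256 512 1024; do
        sed -i "s/^terminals=.*/terminals=${t}/" props.mysql
        ./runBenchmark.sh props.mysql | tee "run_${t}.log"
        grep "Measured tpmC" "run_${t}.log"
    done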

Test results (Version 5.4.15-16704996)

Specifications               | tpmC (64 terminals) | tpmC (128 terminals) | tpmC (256 terminals) | tpmC (512 terminals) | tpmC (1024 terminals)
4 vCPUs, 32 GB of memory × 2 | 45678.47            | 53782.37             | 57207.99             | 58555.13             | 58480.75
4 vCPUs, 32 GB of memory × 4 | 70967.94            | 93894.19             | 103129.87            | 108333.55            | 109577.55
8 vCPUs, 64 GB of memory × 2 | 69844.47            | 83187.97             | 85713.53             | 84503.02             | 88038.26
8 vCPUs, 64 GB of memory × 4 | 89328.72            | 136759.65            | 167809.46            | 175860.5             | 175353.3

Test results (Version 5.4.15-16715927)

Specifications               | tpmC (64 terminals) | tpmC (128 terminals) | tpmC (256 terminals) | tpmC (512 terminals) | tpmC (1024 terminals)
4 vCPUs, 32 GB of memory × 2 | 48302.1             | 54025.71             | 56791.2              | 57970.9              | 58525.67
4 vCPUs, 32 GB of memory × 4 | 70136.84            | 93234.02             | 105241.67            | 108523.41            | 110534.65
8 vCPUs, 64 GB of memory × 2 | 67648.21            | 81783.57             | 86289.67             | 84795.9              | 85566.34
8 vCPUs, 64 GB of memory × 4 | 89747.01            | 139412.56            | 166902.1             | 175364.84            | 175885.95

Test results (Version 5.4.16-16717637)

Specifications               | tpmC (64 terminals) | tpmC (128 terminals) | tpmC (256 terminals) | tpmC (512 terminals) | tpmC (1024 terminals)
4 vCPUs, 32 GB of memory × 2 | 49446.92            | 51847.91             | 55700.14             | 59902.5              | 61152.11
4 vCPUs, 32 GB of memory × 4 | 65141.8             | 93897.91             | 106942.28            | 112468.97            | 108613.16
8 vCPUs, 64 GB of memory × 2 | 68142.57            | 81461.78             | 84999.79             | 84479.3              | 86595.75
8 vCPUs, 64 GB of memory × 4 | 89103.31            | 138964.28            | 170095.07            | 176831.98            | 178269.1

Test results (Version 5.4.17-16835173)

Specifications               | tpmC (64 terminals) | tpmC (128 terminals) | tpmC (256 terminals) | tpmC (512 terminals) | tpmC (1024 terminals)
4 vCPUs, 32 GB of memory × 2 | 47269.9             | 52457.07             | 56329.34             | 57915.96             | 58362.74
4 vCPUs, 32 GB of memory × 4 | 77575.18            | 99646.09             | 111541.96            | 119054.9             | 120512.74
8 vCPUs, 64 GB of memory × 2 | 79745.5             | 93985.89             | 102638.59            | 109948.82            | 109605.32
8 vCPUs, 64 GB of memory × 4 | 99152.31            | 150250.64            | 175110.19            | 199713.18            | 202801.12

Test results (Version 5.4.18-16977056)

Specifications               | tpmC (64 terminals) | tpmC (128 terminals) | tpmC (256 terminals) | tpmC (512 terminals) | tpmC (1024 terminals)
4 vCPUs, 32 GB of memory × 2 | 47314.31            | 55319.77             | 57606.85             | 57029.15             | 56038.31
4 vCPUs, 32 GB of memory × 4 | 79258.68            | 106685.44            | 117155.52            | 124144.74            | 131597.53
8 vCPUs, 64 GB of memory × 2 | 76933.01            | 88738.15             | 97725.2              | 103391.1             | 106294.78
8 vCPUs, 64 GB of memory × 4 | 106772.47           | 159845.31            | 182079.45            | 197052.19            | 198381.5