This topic describes how to use TPC Benchmark C (TPC-C) tools to evaluate the online transaction processing (OLTP) capabilities of Distributed Relational Database Service (DRDS) databases. To evaluate the performance of your database system, you can test and compare the capabilities of databases by following the instructions in this topic.

Background information

TPC-C is a benchmark that is widely used to evaluate the OLTP capabilities of databases. It is developed and released by the Transaction Processing Performance Council (TPC). TPC-C involves 10 tables and five business transaction models: NewOrder for generating new orders, Payment for paying for orders, OrderStatus for querying recent orders, Delivery for delivering orders, and StockLevel for analyzing inventory shortages. TPC-C uses transactions-per-minute-C (tpmC) to measure the maximum qualified throughput (MQTh) of a system. The measurement is based on NewOrder transactions: tpmC is the number of new orders that the system processes per minute.
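
For example (hypothetical numbers, not measured results), a run that completes 200,000 NewOrder transactions in a 10-minute measurement window scores 20,000 tpmC:

```shell
# Hypothetical example: tpmC = NewOrder transactions / elapsed minutes.
# 200,000 NewOrder transactions over a 10-minute window -> 20,000 tpmC.
awk 'BEGIN { printf "%d\n", 200000 / 10 }'
```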

Note This implementation of TPC-C is derived from the TPC-C Benchmark and is not comparable to published TPC-C Benchmark results, as this implementation does not comply with all the requirements of the TPC-C Benchmark.

Test design

  • Amount of test data
    • Test results of regular instances are obtained based on 1,000 warehouses. The following list describes the amount of data in each major table:
      • The bmsql_order_line table contains 300 million rows of data.
      • The bmsql_stock table contains 100 million rows of data.
      • The bmsql_customer, bmsql_history, and bmsql_oorder tables each contain 30 million rows of data.
    • A TPC-C test for ultra-large instances is used to verify the horizontal scaling capability of DRDS. The specifications of ultra-large instances are nearly 10 times higher than those of regular instances. In the stress test designed for ultra-large instances, 10,000 warehouses are built, and three 32-core Elastic Compute Service (ECS) instances are required to serve as the stress test machines for the TPC-C test. This design prevents stress test machines from becoming a bottleneck.
  • Instance specifications for the TPC-C test
    • Test environment for Enterprise Edition instances: One 32-core 128 GB DRDS Enterprise Edition instance that contains two 16-core 64 GB nodes, and four 8-core 32 GB dedicated ApsaraDB RDS for MySQL 5.7 instances
    • Test environment for Standard Edition instances: One 16-core 64 GB DRDS Standard Edition instance that contains two 8-core 32 GB nodes, and four 4-core 32 GB dedicated ApsaraDB RDS for MySQL 5.7 instances
    • Test environment for ultra-large instances: One 256-core 1024 GB DRDS Enterprise Edition instance that contains sixteen 16-core 64 GB nodes, and twelve 32-core 128 GB dedicated ApsaraDB RDS for MySQL 5.7 instances

Procedure

  • Step 1: Create an ECS instance
    Create an ECS instance and use it as the stress test machine. Subsequent operations, such as preparing test data and running stress tests, are performed on this ECS instance.
    Note We recommend that you create the ECS instance in a virtual private cloud (VPC), because resources for some ApsaraDB RDS for MySQL instance types may be insufficient in the classic network. Record the name and the ID of the VPC for future use. You must deploy all the instances that are described in subsequent steps in this VPC.
  • Step 2: Create a DRDS instance
    1. Create a DRDS instance for the stress test. For more information about how to create a DRDS instance, see Step 1: Create a DRDS instance.
    2. Create a database to be tested in the instance. In this example, a database named tpcc is created. For more information about how to create a database, see Step 2: Create a database.
    Note Make sure that the ECS instance and the DRDS instance are in the same VPC.
  • Step 3: Prepare data for the stress test
    1. Prepare a stress test tool
      1. Download the compiled stress test package from tpcc.tar.gz and run the following commands on the ECS instance to decompress it to the tpcc directory.
        mkdir tpcc 
        tar zxvf tpcc.tar.gz -C tpcc        
      2. Modify the extracted files based on the following description:
        src/client/jTPCC.java (Add the MySQL type.)
        src/client/jTPCCConnection.java (Add the support for MySQL syntax by adding an alias.)
        src/LoadData/LoadData.java (Disable the large transaction mechanism while the loader is loading data.)
        src/LoadData/LoadDataWorker.java (Disable the large transaction mechanism while the loader is loading data.)
        run/funcs.sh (Add the MySQL type to the script.)
        run/runDatabaseBuild.sh (Delete unnecessary phases.)
        run/runBenchmark.sh (Adjust the default Java virtual machine (JVM) parameters.)
        run/runLoader.sh (Adjust the default JVM parameters.)
        run/sql.common/foreignKeys.sql (Comment out all statements that create foreign keys. DRDS does not support foreign keys.)
        run/sql.common/indexCreates.sql (Comment out all statements that create primary keys and retain only two statements that create indexes. By default, MySQL directly creates indexes when it creates a table.)
        run/sql.common/indexDrops.sql (Comment out all statements that delete primary keys.)
        run/sql.common/tableCreates.sql (Add primary keys and shard keys. Shard keys are required for DRDS to create tables.)
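        For illustration, a modified table definition in run/sql.common/tableCreates.sql might declare an explicit primary key and a shard key. This is a sketch only: the column list is abridged, and the dbpartition by hash clause is an assumption based on common DRDS sharding syntax.

        ```sql
        -- Sketch of a sharded table definition (abridged column list).
        -- dbpartition by hash(w_id) makes w_id the shard key for DRDS.
        CREATE TABLE bmsql_warehouse (
          w_id   integer       NOT NULL,
          w_ytd  decimal(12,2),
          w_tax  decimal(4,4),
          w_name varchar(10),
          PRIMARY KEY (w_id)
        ) dbpartition by hash(w_id);
        ```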
    2. Configure the stress test
      Run the following commands on the ECS instance to create the props.mysql file in the tpcc/run directory:
      // --------- env config --------- // 
      db=mysql
      driver=com.mysql.jdbc.Driver
      conn=jdbc:mysql://drdsxxxx:3306/tpcc?useSSL=false&useServerPrepStmts=false&useConfigs=maxPerformance&rewriteBatchedStatements=true
      user=tpcc
      password=tpcc
      
      // The number of warehouses.
      warehouses=1000
      // The number of concurrent loadWorkers that are used to import data. Every 100 concurrent loadWorkers generate an estimated 20,000 transactions per second (TPS). You can change the number of concurrent loadWorkers based on your business requirements for TPS.
      // By default, the JVM memory size for 100 concurrent loadWorkers is 4 GB. You can change the JVM memory size by modifying the runLoader.sh file. If the number of concurrent loadWorkers is set to 500, we recommend that you set the JVM memory size to 16 GB.
      loadWorkers=100
      
      // The number of concurrent terminals in the TPC-C stress test.
      terminals=1000
      // The duration of the stress test. Unit: minutes.
      runMins=10
      // ---------- default config ------- //
      //To run specified transactions per terminal- runMins must equal zero
      runTxnsPerTerminal=0
      //Number of total transactions per minute
      limitTxnsPerMin=0
      
      //Set to true to run in 4.x compatible mode. Set to false to use the
      //entire configured database evenly.
      terminalWarehouseFixed=true
      
      //The following five values must add up to 100
      //The default percentages of 45, 43, 4, 4 & 4 match the TPC-C spec
      newOrderWeight=45
      paymentWeight=43
      orderStatusWeight=4
      deliveryWeight=4
      stockLevelWeight=4
      
      // Directory name to create for collecting detailed result data.
      // Comment this out to suppress.
      resultDirectory=my_result_%tY-%tm-%td_%tH%tM%tS
      
      // osCollectorScript=./misc/os_collector_linux.py
      // osCollectorInterval=1
      // osCollectorSSHAddr=user@dbhost
      // osCollectorDevices=net_eth0 blk_sda
      Note
      • When you import test data, make sure that the warehouses and loadWorkers parameters are properly configured.
      • When you run a TPC-C test, make sure that the terminals and runMins parameters are properly configured.
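      The five weight parameters must sum to 100. The following sketch verifies the sum; the printf inlines the default weights for a self-contained example, and in a real check you would read props.mysql instead (for example, with cat props.mysql):

      ```shell
      # Sanity-check that the five TPC-C transaction weights sum to 100.
      # The printf stands in for reading the real props.mysql file.
      printf '%s\n' \
        'newOrderWeight=45' \
        'paymentWeight=43' \
        'orderStatusWeight=4' \
        'deliveryWeight=4' \
        'stockLevelWeight=4' \
        | awk -F= '/Weight=/ { s += $2 } END { print s }'
      ```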
    3. Run the stress test
      1. Run the following commands on the ECS instance to import the test data:
        cd tpcc/run
        nohup ./runDatabaseBuild.sh props.mysql &
        Note By default, 100 concurrent loadWorkers are used to import a total of more than 500 million rows of data. It takes several hours to import the data. We recommend that you use nohup to push the import task to the background. This can prevent the task from being interrupted due to Secure Shell (SSH) disconnections.
      2. Run the following commands to run the test:
        cd tpcc/run
        ./runBenchmark.sh props.mysql
        After the test is complete, the following test results are returned:
        08:56:16,844 [Thread-883] INFO   jTPCC : Term-00, Measured tpmC (NewOrders) = 104230.88
        08:56:16,844 [Thread-883] INFO   jTPCC : Term-00, Measured tpmTOTAL = 231664.49
        08:56:16,844 [Thread-883] INFO   jTPCC : Term-00, Session Start     = 2019-09-19 08:54:16
        08:56:16,845 [Thread-883] INFO   jTPCC : Term-00, Session End       = 2019-09-19 08:56:16
        08:56:16,845 [Thread-883] INFO   jTPCC : Term-00, Transaction Count = 465440
        Note The tpmC metric shows the test results. For more information about the test results, see Test results.
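         As a rough sanity check, the NewOrder share of total throughput should be close to the configured newOrderWeight of 45%. Using the measured values from the sample output above:

         ```shell
         # NewOrder share of total throughput, in percent:
         # 104230.88 tpmC out of 231664.49 tpmTOTAL.
         awk 'BEGIN { printf "%.1f\n", 100 * 104230.88 / 231664.49 }'
         ```

         This prints 45.0, which matches the configured transaction mix.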
      3. Run the following command to clear the test data:
        cd tpcc/run
        ./runDatabaseDestroy.sh props.mysql

Test results

| Number of concurrent threads | tpmC for Standard Edition instances | tpmC for Enterprise Edition instances | tpmC for ultra-large instances |
| --- | --- | --- | --- |
| 1 terminal × 1,000 concurrent loadWorkers | 65,735.14 | 101,620.8 | / |
| 6 terminals × 1,000 concurrent loadWorkers | / | / | 821,547.97 |