This topic describes how to use TPC Benchmark C (TPC-C) tools to evaluate the online transaction processing (OLTP) capabilities of PolarDB-X 1.0 databases. You can follow the procedure described in this topic to test your databases and compare the test results to evaluate the performance of your database system.

Background information

TPC-C is a benchmark that is widely used to evaluate the OLTP capabilities of databases. It is developed and released by the Transaction Processing Performance Council (TPC). TPC-C involves 10 tables and five business transaction models: NewOrder (generate new orders), Payment (pay for orders), OrderStatus (query recent orders), Delivery (deliver orders), and StockLevel (analyze inventory shortage). TPC-C uses transactions-per-minute-C (tpmC) to measure the maximum qualified throughput (MQTh) of a system. The measurement is based on NewOrder transactions: tpmC is the number of new orders processed per minute.

Note The TPC-C performance test described in this topic is implemented based on the TPC-C benchmark test but does not meet all requirements of the TPC-C benchmark test. Therefore, the test results described in this topic cannot be compared with the published results of the TPC-C benchmark test.

Test design

  • Amount of test data
    • Test results of regular instances are obtained based on 1,000 warehouses. The following list describes the amount of data in each major table:
      • The bmsql_order_line table contains 300 million rows of data.
      • The bmsql_stock table contains 100 million rows of data.
      • The bmsql_customer, bmsql_history, and bmsql_oorder tables each contain 30 million rows of data.
    • A TPC-C test for ultra-large instances is also introduced to verify the horizontal scalability of PolarDB-X 1.0. The specifications of ultra-large instances are nearly 10 times higher than those of regular instances. In the stress test designed for ultra-large instances, 10,000 warehouses are built, and three 32-core Elastic Compute Service (ECS) instances are required to serve as stress test machines for the TPC-C test. This design prevents stress test machines from becoming a bottleneck.
  • Instance specifications for the TPC-C test
    • Test environment for Enterprise Edition instances: one PolarDB-X 1.0 Enterprise Edition instance that contains two nodes each with 16 CPU cores and 64 GB of memory, and four dedicated ApsaraDB RDS for MySQL 5.7 instances with 8 CPU cores and 32 GB of memory.
    • Test environment for Standard Edition instances: one PolarDB-X 1.0 Standard Edition instance that contains two nodes each with 8 CPU cores and 32 GB of memory, and four dedicated ApsaraDB RDS for MySQL 5.7 instances with 4 CPU cores and 32 GB of memory.
    • Test environment for ultra-large instances: one PolarDB-X 1.0 Enterprise Edition instance that contains sixteen nodes each with 16 CPU cores and 64 GB of memory, and twelve dedicated ApsaraDB RDS for MySQL 5.7 instances with 32 CPU cores and 128 GB of memory.
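The row counts listed under "Amount of test data" follow directly from the TPC-C initial-load scale factors (per warehouse: 100,000 stock rows, 30,000 customers across 10 districts, and about 300,000 order lines). A quick back-of-the-envelope check for the 1,000-warehouse data set, assuming those standard scale factors:

```shell
# Per-warehouse initial-load sizes defined by the TPC-C specification.
WAREHOUSES=1000
echo "bmsql_stock:      $(( WAREHOUSES * 100000 ))"   # 100,000,000 rows
echo "bmsql_customer:   $(( WAREHOUSES * 30000 ))"    # 30,000,000 rows
echo "bmsql_order_line: $(( WAREHOUSES * 300000 ))"   # ~300,000,000 rows
```

The same arithmetic explains why the 10,000-warehouse stress test for ultra-large instances needs roughly ten times the data volume and multiple stress test machines.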

Test procedure

  • Step 1: Create an ECS instance
    Create an ECS instance and use it as the stress test machine. Subsequent operations, such as preparing test data and running the stress test, are performed on this ECS instance.
    Note We recommend that you create an ECS instance that is deployed in a virtual private cloud (VPC). The resources used to create ApsaraDB RDS for MySQL instances of some instance types in the classic network may be insufficient. Record the name and the ID of the VPC for future use. You must deploy all the instances that are described in subsequent steps in this VPC.
  • Step 2: Create a PolarDB-X 1.0 instance for the stress test
    1. Create a PolarDB-X 1.0 instance. For more information about how to create a PolarDB-X 1.0 instance, see Create an instance.
    2. Create a database to be tested in the instance. In this example, a database named tpcc is created. For more information about how to create a database, see Create a database.
    Note Make sure that the ECS instance and the PolarDB-X 1.0 instance are in the same VPC.
  • Step 3: Prepare data for the stress test
    1. Prepare a stress test tool
      Note
      • In this example, the open source BenchmarkSQL V5.0 is used as the TPC-C test tool.
      • By default, BenchmarkSQL does not support the MySQL protocol. You must compile BenchmarkSQL to support the MySQL protocol.
      1. Download the compiled stress test package tpcc.tar.gz and run the following command on the ECS instance to decompress it to the tpcc directory:
        mkdir tpcc 
        tar zxvf tpcc.tar.gz -C tpcc        
      2. The extracted files include the following modifications to the open source BenchmarkSQL 5.0:
        src/client/jTPCC.java (Add the MySQL type.)
        src/client/jTPCCConnection.java (Add the support for MySQL syntax by adding an alias.)
        src/LoadData/LoadData.java (Disable the large transaction mechanism while the loader is loading data.)
        src/LoadData/LoadDataWorker.java (Disable the large transaction mechanism while the loader is loading data.)
        run/funcs.sh (Add the MySQL type to the script.)
        run/runDatabaseBuild.sh (Delete unnecessary phases.)
        run/runBenchmark.sh (Adjust the default Java virtual machine (JVM) parameters.)
        run/runLoader.sh (Adjust the default JVM parameters.)
        run/sql.common/foreignKeys.sql (Comment out all statements that create foreign keys. PolarDB-X 1.0 does not support foreign keys.)
        run/sql.common/indexCreates.sql (Comment out all statements that create primary keys and retain only two statements that create indexes. By default, MySQL directly creates indexes when it creates a table.)
        run/sql.common/indexDrops.sql (Comment out all statements that delete primary keys.)
        run/sql.common/tableCreates.sql (Add primary keys and shard keys. Shard keys are required for PolarDB-X 1.0 to create tables.)
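As an illustrative sketch only, the foreignKeys.sql change described above can be applied by commenting out every line. The sample statement below mirrors a BenchmarkSQL 5.0 foreign key definition, and the file is written to /tmp here so that the sketch is self-contained; in practice you would edit run/sql.common/foreignKeys.sql in place:

```shell
# Create a stand-in for run/sql.common/foreignKeys.sql (illustrative only).
cat > /tmp/foreignKeys.sql <<'EOF'
alter table bmsql_district add constraint d_warehouse_fkey
    foreign key (d_w_id)
    references bmsql_warehouse (w_id);
EOF

# Prefix every line with "-- " so each statement becomes a SQL comment.
sed -i 's/^/-- /' /tmp/foreignKeys.sql
head -n 1 /tmp/foreignKeys.sql
```

The same approach applies to the primary key statements in indexCreates.sql and indexDrops.sql.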
        The following example shows the content of the configuration file (props.mysql, which is referenced in later commands) and describes the major parameters.
        // --------- env config --------- // 
        db=mysql
        driver=com.mysql.jdbc.Driver
        conn=jdbc:mysql://drdsxxxx:3306/tpcc?useSSL=false&useServerPrepStmts=false&useConfigs=maxPerformance&rewriteBatchedStatements=true
        user=tpcc
        password=tpcc
        
        // The number of warehouses.
        warehouses=1000
        // The number of concurrent loadWorkers that are used to import data. Every 100 concurrent loadWorkers are expected to generate 20,000 transactions per second (TPS). You can change the number of concurrent loadWorkers based on your business requirements for TPS.
        // By default, the JVM memory size for 100 concurrent loadWorkers is 4 GB. You can change the JVM memory size by modifying the runLoader.sh file. If the number of concurrent loadWorkers is set to 500, we recommend that you set the JVM memory size to 16 GB.
        loadWorkers=100
        
        // The number of concurrent terminals in the TPC-C stress test.
        terminals=1000
        // The duration of the stress test. Unit: minutes.
        runMins=10
        // ---------- default config ------- //
        //To run specified transactions per terminal- runMins must equal zero
        runTxnsPerTerminal=0
        //Number of total transactions per minute
        limitTxnsPerMin=0
        
        //Set to true to run in 4.x compatible mode. Set to false to use the
        //entire configured database evenly.
        terminalWarehouseFixed=true
        
        //The following five values must add up to 100
        //The default percentages of 45, 43, 4, 4 & 4 match the TPC-C spec
        newOrderWeight=45
        paymentWeight=43
        orderStatusWeight=4
        deliveryWeight=4
        stockLevelWeight=4
        
        // Directory name to create for collecting detailed result data.
        // Comment this out to suppress.
        resultDirectory=my_result_%tY-%tm-%td_%tH%tM%tS
        
        // osCollectorScript=./misc/os_collector_linux.py
        // osCollectorInterval=1
        // osCollectorSSHAddr=user@dbhost
        // osCollectorDevices=net_eth0 blk_sda
        Note
        • When you import test data, make sure that the warehouses and loadWorkers parameters are properly configured.
        • When you run a TPC-C test, make sure that the terminals and runMins parameters are properly configured.
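As the comment in the file notes, the five transaction weights must add up to 100. The following sketch sums them from a copy of the fragment above; the /tmp path is only for the example, and in practice the values live in props.mysql under tpcc/run:

```shell
# Stand-in copy of the weight settings shown above (illustrative path).
cat > /tmp/props.mysql <<'EOF'
newOrderWeight=45
paymentWeight=43
orderStatusWeight=4
deliveryWeight=4
stockLevelWeight=4
EOF

# Sum every *Weight= value; the result must be 100 for a valid TPC-C mix.
awk -F= '/Weight=/ { sum += $2 } END { print "weight total:", sum }' /tmp/props.mysql
```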
    2. Run the stress test
      1. Run the following command on the ECS instance to import the test data:
        cd tpcc/run
        nohup ./runDatabaseBuild.sh props.mysql &
        Note By default, 100 concurrent loadWorkers are used to import a total of more than 500 million rows of data. It takes several hours to import the data. We recommend that you use nohup to push the import task to the background. This can prevent the task from being interrupted due to Secure Shell (SSH) disconnections.
      2. Run the following command to run the TPC-C test:
        cd tpcc/run
        ./runBenchmark.sh props.mysql
        After the test is complete, the following test results are returned:
        08:56:16,844 [Thread-883] INFO   jTPCC : Term-00, Measured tpmC (NewOrders) = 104230.88
        08:56:16,844 [Thread-883] INFO   jTPCC : Term-00, Measured tpmTOTAL = 231664.49
        08:56:16,844 [Thread-883] INFO   jTPCC : Term-00, Session Start     = 2019-09-19 08:54:16
        08:56:16,845 [Thread-883] INFO   jTPCC : Term-00, Session End       = 2019-09-19 08:56:16
        08:56:16,845 [Thread-883] INFO   jTPCC : Term-00, Transaction Count = 465440
        Note The tpmC metric shows the test results. For more information about the test results, see Test results.
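The reported numbers are internally consistent: tpmTOTAL is roughly the Transaction Count divided by the 2-minute session length, and tpmC is roughly the NewOrder share of tpmTOTAL (newOrderWeight=45 in props.mysql). A rough check using the sample values above:

```shell
# Rough consistency check on the sample output (2-minute session).
txn_count=465440
minutes=2
awk -v c="$txn_count" -v m="$minutes" 'BEGIN {
    tpm_total = c / m             # ~232,720 vs. measured 231,664.49
    tpm_c     = tpm_total * 0.45  # ~104,724 vs. measured 104,230.88
    printf "tpmTOTAL ~ %d, tpmC ~ %d\n", tpm_total, tpm_c
}'
```

The small differences from the measured values come from session boundary effects and the fact that the actual NewOrder share fluctuates around 45%.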
      3. Run the following command to clear the test data:
        cd tpcc/run
        ./runDatabaseDestroy.sh props.mysql

Test results

Concurrency                                  tpmC for Standard Edition instances  tpmC for Enterprise Edition instances  tpmC for ultra-large instances
1 terminal × 1,000 concurrent loadWorkers    65,735.14                            101,620.8                              /
6 terminals × 1,000 concurrent loadWorkers   /                                    /                                      821,547.97