The transaction processing capability of AnalyticDB for PostgreSQL V6.0 is greatly improved over that of AnalyticDB for PostgreSQL V4.3. This topic describes how to use TPC-C to test the transaction processing capability of an AnalyticDB for PostgreSQL V6.0 instance.
Introduction
TPC Benchmark C (TPC-C) is a benchmark approved by the Transaction Processing Performance Council (TPC) to measure the performance of online transaction processing (OLTP) systems. It models the transaction load of a large wholesale supplier, simulated by nine tables that store data for five transaction types: New-Order, Payment, Delivery, Order-Status, and Stock-Level. TPC-C measures system performance with a throughput metric, transactions per minute (tpmC), where the counted transactions are New-Order transactions. This metric indicates the number of New-Order transactions executed per minute. In a standard TPC-C test, New-Order transactions account for about 45% of all transactions.
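As a rough illustration of how tpmC relates to total throughput (using hypothetical round numbers, not figures from this test):

```python
# Illustration of how tpmC relates to total throughput, using hypothetical
# round numbers; the real figures come from the BenchmarkSQL report.
total_transactions = 1_125_000  # all five transaction types combined
run_minutes = 5
new_order_pct = 45              # New-Order share in the standard TPC-C mix

tpm_total = total_transactions / run_minutes  # overall transactions per minute
tpm_c = tpm_total * new_order_pct / 100       # New-Order transactions per minute

print(f"tpmTOTAL = {tpm_total:.0f}")  # tpmTOTAL = 225000
print(f"tpmC     = {tpm_c:.0f}")      # tpmC     = 101250
```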
Test environment
- Test data
Use open source BenchmarkSQL to generate datasets for 1,000 warehouses. The following table displays the number of data entries in each table.
| Table name | Data entries |
| --- | --- |
| bmsql_warehouse | 1000 |
| bmsql_district | 10000 |
| bmsql_customer | 30000000 |
| bmsql_history | 30000000 |
| bmsql_new_order | 9000000 |
| bmsql_oorder | 30000000 |
| bmsql_order_line | 299976737 |
| bmsql_item | 100000 |
| bmsql_stock | 100000000 |
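The row counts above follow the per-warehouse scaling rules of the TPC-C specification. A quick sketch reproduces most of them for 1,000 warehouses; bmsql_item is a fixed-size table, and bmsql_order_line varies slightly because each order contains a random 5 to 15 order lines.

```python
warehouses = 1_000

# Rows generated per warehouse under the TPC-C scaling rules.
per_warehouse = {
    "bmsql_district": 10,
    "bmsql_customer": 30_000,  # 10 districts x 3,000 customers each
    "bmsql_history": 30_000,
    "bmsql_new_order": 9_000,
    "bmsql_oorder": 30_000,
    "bmsql_stock": 100_000,
}

counts = {table: rows * warehouses for table, rows in per_warehouse.items()}
counts["bmsql_warehouse"] = warehouses
counts["bmsql_item"] = 100_000  # fixed size, independent of warehouse count

print(counts["bmsql_customer"])  # 30000000
print(counts["bmsql_stock"])     # 100000000
```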
- Test instance
Use an AnalyticDB for PostgreSQL V6.0 instance that contains 16 compute nodes. Each node has four CPU cores, 32 GB of memory, and high-performance SSD storage. For more information about instance specifications, see Instance specifications.
Create an ECS instance within the same region and VPC as the AnalyticDB for PostgreSQL instance. Run BenchmarkSQL on the ECS instance to generate TPC-C load and send requests to the AnalyticDB for PostgreSQL instance.
- Instance parameters
To obtain high transaction processing performance, execute the following statements to modify the related parameters. If you cannot modify certain parameters, contact AnalyticDB for PostgreSQL technical support.
| SQL statement | Description |
| --- | --- |
| set optimizer = off | Disable the Orca optimizer, which is designed for analytical processing scenarios, to ensure higher transaction processing performance. |
| set shared_buffers = 8GB | Increase the memory used for shared buffers to 8 GB. You must restart the instance for the modification to take effect. |
| set wal_buffers = 256MB | Increase the memory used to buffer write-ahead logging (WAL) data that has not been written to disk to 256 MB. You must restart the instance for the modification to take effect. |
| set log_statement = none | Disable SQL statement logging. |
| set random_page_cost = 10 | Reduce the cost estimate of random access to 10 so that the system prefers index scans during queries. |
| set gp_resqueue_priority = off<br>set resource_scheduler = off | Disable the resource queues of the AnalyticDB for PostgreSQL instance. You must restart the instance for the modifications to take effect. |
Test results
The following figure shows the performance of the AnalyticDB for PostgreSQL V6.0 instance at different concurrency levels. The x-axis indicates the concurrency level, and the y-axis indicates performance in tpmC. The instance reaches 101,231.3 tpmC.

Use BenchmarkSQL for TPC-C
- Download the tool
Download BenchmarkSQL for AnalyticDB for PostgreSQL. Compared with open source BenchmarkSQL, the AnalyticDB for PostgreSQL version provides the following benefits: It specifies distribution keys in table creation statements, which helps distribute data evenly and involves fewer compute nodes in two-phase transactions, reducing the communication overhead of distributed transactions. It also fixes inconsistencies between the data in generated CSV files and the columns specified in table creation statements when CSV datasets are generated.
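As an illustration of the first point, a table creation statement with an explicit distribution key might look like the following simplified sketch. The column list is abridged and the exact key choice here is an assumption; the actual DDL ships in the tool's tableCreates.sql script.

```sql
-- Simplified sketch, not the shipped DDL: distributing on the warehouse and
-- district columns keeps the rows of one warehouse on one compute node, so a
-- transaction usually touches fewer nodes during its two-phase commit.
CREATE TABLE bmsql_customer (
    c_w_id   integer NOT NULL,
    c_d_id   integer NOT NULL,
    c_id     integer NOT NULL
    -- ... remaining columns omitted
) DISTRIBUTED BY (c_w_id, c_d_id);
```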
- Generate and load data
- Generate data
Run the following command to generate CSV data files for the tables used in TPC-C. The generated data is then loaded into the database by using the COPY command in the next step.
# adbpg.properties is the configuration file.
./runLoader.sh adbpg.properties
Before you run the command, you must modify the following parameters in adbpg.properties:
db=postgres // Use the default postgres database.
driver=org.postgresql.Driver // Use org.postgresql.Driver.
conn=jdbc:postgresql://xxxxx.xxxx.xxxx.rds.aliyuncs.com:3432/benchmarksql // Specify the endpoint of the instance.
user=tpcc_test // Specify the user to connect to the instance.
password=your_password // Specify the password of the user.
warehouses=1000 // Specify the number of warehouses for which to generate datasets.
loadWorkers=16 // Specify the number of threads to generate data. If your CPU resources and memory size are sufficient, you can specify a larger value to generate data more quickly.
fileLocation=/data/tpcc_1000w/ // Specify the directory to store the generated data.
- Load data
(1) Create tables: Execute the tableCreates.sql script provided by BenchmarkSQL.
(2) Import data: Run the COPY command to import the generated CSV data into the database. Example:
psql -h conn_addr -d benchmarksql -p your_port -U tpcc_test -c "\copy bmsql_history from '/data/tpcc_1000w/cust-hist.csv' with delimiter ',' null '';"
(3) Create indexes: Execute the indexCreates.sql script provided by BenchmarkSQL.
(4) Execute ANALYZE statements: After the data is loaded and the indexes are created, execute the ANALYZE statement on all tables so that the AnalyticDB for PostgreSQL optimizers can generate accurate execution plans.
- Start the test
Run the following script to start the test. adbpg.properties is the configuration file.
./runBenchmark.sh adbpg.properties
Before you run the command, you must modify the following parameters in adbpg.properties:
db=postgres // Use the default postgres database.
driver=org.postgresql.Driver // Use org.postgresql.Driver.
conn=jdbc:postgresql://xxxxx.xxxx.xxxx.rds.aliyuncs.com:3432/benchmarksql // Specify the endpoint of the instance.
user=tpcc_test // Specify the user to connect to the instance.
password=your_password // Specify the password of the user.
terminals=128 // Specify the concurrency of TPC-C.
//To run specified transactions per terminal- runMins must equal zero
runTxnsPerTerminal=0 // Set the value to 0.
//To run for specified minutes- runTxnsPerTerminal must equal zero
runMins=10 // Specify the running time of TPC-C. Unit: minutes.
//Number of total transactions per minute
limitTxnsPerMin=30000000 // Set a large value such as 30000000 so that throughput is not throttled.
// Set the proportions of transactions to the following values.
//The following five values must add up to 100
//The default percentages of 45, 43, 4, 4 & 4 match the TPC-C spec
newOrderWeight=45
paymentWeight=43
orderStatusWeight=4
deliveryWeight=4
stockLevelWeight=4
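The five weights must sum to 100, and because only New-Order transactions count toward tpmC, the measured tpmC is expected to be close to newOrderWeight percent of tpmTOTAL. A quick sanity check, using the measured tpmTOTAL from the test results as an example:

```python
weights = {"newOrder": 45, "payment": 43, "orderStatus": 4,
           "delivery": 4, "stockLevel": 4}
assert sum(weights.values()) == 100  # the five weights must add up to 100

# With the standard mix, tpmC should be roughly newOrderWeight% of tpmTOTAL.
tpm_total = 224_908.14  # measured tpmTOTAL from this test run
expected_tpm_c = tpm_total * weights["newOrder"] / 100

print(round(expected_tpm_c, 2))  # 101208.66, close to the measured 101231.33
```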
After you run the script, the number of transactions processed by the database is displayed in real time. After the test has run for the time specified by runMins, results similar to the following are displayed:
Term-00, Running Average tpmTOTAL: 225114.24    Current tpmTOTAL: 7445652    Memory Usage: 116MB / 168MB
18:28:56,388 [Thread-127] INFO  jTPCC : Term-00,
18:28:56,388 [Thread-127] INFO  jTPCC : Term-00,
18:28:56,388 [Thread-127] INFO  jTPCC : Term-00, Measured tpmC (NewOrders) = 101231.33
18:28:56,388 [Thread-127] INFO  jTPCC : Term-00, Measured tpmTOTAL = 224908.14
18:28:56,388 [Thread-127] INFO  jTPCC : Term-00, Session Start     = 2020-02-24 18:23:56
18:28:56,388 [Thread-127] INFO  jTPCC : Term-00, Session End       = 2020-02-24 18:28:56
18:28:56,388 [Thread-127] INFO  jTPCC : Term-00, Transaction Count = 1125698
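To collect the result programmatically instead of reading it off the console, the tpmC value can be extracted from the summary line with a short script (a sketch; the line below reproduces the BenchmarkSQL output shown above):

```python
import re

# Summary line printed by BenchmarkSQL at the end of a run.
line = ("18:28:56,388 [Thread-127] INFO  jTPCC : Term-00, "
        "Measured tpmC (NewOrders) = 101231.33")

# Pull the numeric tpmC value out of the report line.
tpm_c = float(re.search(r"Measured tpmC \(NewOrders\)\s*=\s*([\d.]+)", line).group(1))
print(tpm_c)  # 101231.33
```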