This topic describes how to use TPC Benchmark H (TPC-H) to test the performance of online analytical processing (OLAP) queries and point queries of key-value pairs.
About TPC-H
The following description is quoted from the TPC Benchmark™ H (TPC-H) specification:
TPC-H is a decision support benchmark that consists of a suite of business-oriented ad hoc queries and concurrent data modifications. The queries and the data populating the database have been chosen to have broad industry-wide relevance. This benchmark illustrates decision support systems that examine large volumes of data, execute queries with a high degree of complexity, and give answers to critical business questions.
For more information, see TPC Benchmark™ H Standard Specification.
The TPC-H performance tests described in this topic are implemented based on the TPC-H benchmark test but cannot meet all requirements of the TPC-H benchmark test. Therefore, the test results described in this topic cannot be compared with the published results of the TPC-H benchmark test.
Datasets
TPC-H is a test set developed by the Transaction Processing Performance Council (TPC) to simulate decision support systems. It is widely used in academia and industry to evaluate the performance of decision support systems.
TPC-H models data in production environments to simulate the data warehouse of a sales system. The data warehouse consists of eight tables, and the amount of data can range from 1 GB to 3 TB. The TPC-H benchmark test involves 22 query statements. The test focuses on the response time of each query, which is the amount of time from when a query is submitted to when the result is returned. The test results comprehensively reflect the capability of a system to process queries. For more information, see TPC Benchmark™ H Standard Specification.
Scenarios
In this topic, the following tests are performed:
Performance test on OLAP queries: Create column-oriented tables, import data to the tables, and then execute 22 query statements of the TPC-H benchmark test.
Performance test on point queries of key-value pairs: Create a row-oriented table based on the schema defined for the ORDERS table and execute point queries based on the primary key.
Data update test: Test the performance of data updates based on the primary key for the OLAP engine.
The amount of test data affects the test results. The data generation tool provided by TPC-H allows you to change the scale factor (SF) to adjust the amount of the data to be generated. One SF indicates 1 GB of data.
The preceding data amount refers to the amount of raw data. When you prepare the test environment, you must also consider other factors such as the space to be occupied by indexes.
Preparations
You must prepare the basic environment and TPC-H data required for the tests on OLAP queries and point queries of key-value pairs. The following tables describe the steps to be performed to prepare the basic environment and TPC-H data.
To reduce variables that may affect the test results, we recommend that you use new instances that meet the test requirements each time you perform the tests described in this topic, instead of upgrading or downgrading the specifications of existing instances.
Prepare the basic environment
Step
Description
1. Create an Elastic Compute Service (ECS) instance.
Log on to the Alibaba Cloud Management Console and create an ECS instance. The ECS instance is used to generate data, import data to Hologres, and serve as the test client. We recommend that you create an ECS instance that meets the following conditions:
Instance type: ecs.g6.4xlarge.
Operating system: CentOS 8.4.
Enhanced SSD (ESSD): Determine the capacity based on the amount of the data to be tested.
Region: We recommend that you deploy the ECS instance and Hologres instance in the same region and virtual private cloud (VPC).
For more information about how to create an ECS instance, see Create an instance.
2. Create a Hologres instance.
Create an exclusive Hologres instance that uses the pay-as-you-go billing method. In this example, the instance is used only for testing. Configure 64 CPU cores and 256 GB memory for the instance. You can also configure computing resources based on your business requirements.
Log on to the Hologres console. On the Instances page, click Create Instance.
For more information about how to purchase a Hologres instance, see Purchase a Hologres instance.
3. Create a test database.
Connect to the created Hologres instance and create a database. In this example, a database named tpch_1sf is created.
For more information about how to create a database, see Create a database.
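If you prefer to create the database from the command line, the following is a minimal sketch. It assumes that the PostgreSQL client is already installed on the ECS instance (it is installed in a later step of this topic) and that you first connect to an existing database such as the default postgres database. The placeholders match those used later in this topic.
# Create the test database from the ECS instance (sketch).
PGUSER=<AccessID> PGPASSWORD=<AccessKey> psql -p <Port> -h <Endpoint> -d postgres -c "CREATE DATABASE tpch_1sf;"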
Generate TPC-H data
You must connect to the ECS instance from a remote client, download the data generation tool, compile the data generation code, and then generate the TPC-H data. The following table describes the steps to be performed to prepare the TPC-H data.
Step
Description
1. Prepare the data generation code.
To prepare the data generation code, perform the following operations:
1. Connect to the ECS instance. For more information about how to connect to an ECS instance, see Connection methods.
2. After you connect to the ECS instance, run the yum update command to update all repositories.
3. Run the yum install git command to install Git.
4. Run the yum install gcc command to install GNU Compiler Collection (GCC).
5. Run the git clone https://github.com/gregrahn/tpch-kit.git command to download the TPC-H data generation tool.
6. Run the cd tpch-kit/dbgen command to go to the directory of the data generation tool.
7. Run the make command to compile the data generation code.
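The operations above can also be combined into a single script. The following is a minimal sketch that assumes the commands are run as the root user (otherwise prefix the yum commands with sudo) and that make is not yet installed on the instance:
# Prepare the TPC-H data generation tool on the ECS instance (sketch).
yum update -y                                          # update all repositories
yum install -y git gcc make                            # install Git, GCC, and make (make assumed to be required for compilation)
git clone https://github.com/gregrahn/tpch-kit.git     # download the data generation tool
cd tpch-kit/dbgen
make                                                   # compile dbgen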
2. Generate data.
After the code is compiled, you can run the ./dbgen --help command to view the parameters of the data generation tool. The following examples show how to generate TPC-H data for the tests and view the generated files:
In this example, 1 GB of data is required. In this case, you can run the following command to generate the data:
./dbgen -vf -s 1
If you need to generate more data, set the SF to a greater value. For example, you can run the following command to generate 1 TB of data:
./dbgen -vf -s 1000
In general, 32 compute units (CUs) are sufficient to process TPC-H data whose SF is 10, and 256 CUs are sufficient to process TPC-H data whose SF is 50.
The generated data is stored on the ECS instance. You can run the following command to view the generated files. Eight files are generated, and each file contains the data for one of the tables involved in the tests.
ls | grep '.*.tbl'
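As an optional sanity check (not part of the original procedure), you can inspect the generated files before importing them:
# Inspect the generated .tbl files (sketch).
du -sh *.tbl    # per-file sizes; the total should roughly match the chosen SF in GB
wc -l *.tbl     # row counts per table, useful for comparison after the import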
Performance test on OLAP queries
Before you test the performance of OLAP queries, you must create tables and import data to the tables. To execute queries more efficiently, collect the statistics of each table after the data is imported. After the compaction at the backend is complete, use a tool such as pgbench to run the performance test on OLAP queries. To run a performance test on OLAP queries, perform the following steps:
Create tables.
Run the following command on the ECS instance to install the PostgreSQL client for data import:
yum install postgresql-server -y
After the PostgreSQL client is installed, run the following command to connect to the Hologres instance. For more information about the parameters in the command, see Connect to a Hologres instance from the PostgreSQL client.
PGUSER=<AccessID> PGPASSWORD=<AccessKey> psql -p <Port> -h <Endpoint> -d <Database>
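For example, with hypothetical placeholder values (the AccessKey pair, endpoint, and database name below are illustrative only; port 80 matches the VPC endpoint shown in the sample output later in this topic):
PGUSER=LTAI**************** PGPASSWORD=**************** psql -p 80 -h hgpostcn-cn-xxxxxxxxxxxx-cn-hangzhou.hologres.aliyuncs.com -d tpch_1sf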
After you connect to the Hologres instance from the PostgreSQL client, execute the following statements to create the tables required for the test:
DROP TABLE IF EXISTS LINEITEM; BEGIN; CREATE TABLE LINEITEM ( L_ORDERKEY bigint NOT NULL, L_PARTKEY int NOT NULL, L_SUPPKEY int NOT NULL, L_LINENUMBER int NOT NULL, L_QUANTITY DECIMAL(15, 2) NOT NULL, L_EXTENDEDPRICE DECIMAL(15, 2) NOT NULL, L_DISCOUNT DECIMAL(15, 2) NOT NULL, L_TAX DECIMAL(15, 2) NOT NULL, L_RETURNFLAG text NOT NULL, L_LINESTATUS text NOT NULL, L_SHIPDATE date NOT NULL, L_COMMITDATE date NOT NULL, L_RECEIPTDATE date NOT NULL, L_SHIPINSTRUCT text NOT NULL, L_SHIPMODE text NOT NULL, L_COMMENT text NOT NULL, PRIMARY KEY (L_ORDERKEY, L_LINENUMBER) ); CALL set_table_property ('LINEITEM', 'clustering_key', 'L_SHIPDATE,L_ORDERKEY'); CALL set_table_property ('LINEITEM', 'segment_key', 'L_SHIPDATE'); CALL set_table_property ('LINEITEM', 'distribution_key', 'L_ORDERKEY'); CALL set_table_property ('LINEITEM', 'bitmap_columns', 'L_RETURNFLAG,L_LINESTATUS,L_SHIPINSTRUCT,L_SHIPMODE'); CALL set_table_property ('LINEITEM', 'dictionary_encoding_columns', 'l_comment:off,l_returnflag,l_linestatus,l_shipinstruct,l_shipmode'); COMMIT; DROP TABLE IF EXISTS ORDERS; BEGIN; CREATE TABLE ORDERS ( O_ORDERKEY bigint NOT NULL PRIMARY KEY, O_CUSTKEY int NOT NULL, O_ORDERSTATUS text NOT NULL, O_TOTALPRICE DECIMAL(15, 2) NOT NULL, O_ORDERDATE date NOT NULL, O_ORDERPRIORITY text NOT NULL, O_CLERK text NOT NULL, O_SHIPPRIORITY int NOT NULL, O_COMMENT text NOT NULL ); CALL set_table_property ('ORDERS', 'segment_key', 'O_ORDERDATE'); CALL set_table_property ('ORDERS', 'colocate_with', 'lineitem'); CALL set_table_property ('ORDERS', 'distribution_key', 'O_ORDERKEY'); CALL set_table_property ('ORDERS', 'bitmap_columns', 'O_ORDERSTATUS,O_ORDERPRIORITY,O_CLERK,O_SHIPPRIORITY'); CALL set_table_property ('ORDERS', 'dictionary_encoding_columns', 'o_comment:off,o_orderstatus,o_orderpriority,o_clerk'); COMMIT; DROP TABLE IF EXISTS PARTSUPP; BEGIN; CREATE TABLE PARTSUPP ( PS_PARTKEY int NOT NULL, PS_SUPPKEY int NOT NULL, PS_AVAILQTY int NOT NULL, PS_SUPPLYCOST DECIMAL(15, 2) NOT NULL, PS_COMMENT text NOT NULL, PRIMARY KEY (PS_PARTKEY, PS_SUPPKEY) ); CALL set_table_property ('PARTSUPP', 'distribution_key', 'PS_PARTKEY'); CALL set_table_property ('PARTSUPP', 'colocate_with', 'lineitem'); CALL set_table_property ('PARTSUPP', 'bitmap_columns', 'ps_availqty'); CALL set_table_property ('PARTSUPP', 'dictionary_encoding_columns', ''); COMMIT; DROP TABLE IF EXISTS PART; BEGIN; CREATE TABLE PART ( P_PARTKEY int NOT NULL PRIMARY KEY, P_NAME text NOT NULL, P_MFGR text NOT NULL, P_BRAND text NOT NULL, P_TYPE text NOT NULL, P_SIZE int NOT NULL, P_CONTAINER text NOT NULL, P_RETAILPRICE DECIMAL(15, 2) NOT NULL, P_COMMENT text NOT NULL ); CALL set_table_property ('PART', 'distribution_key', 'P_PARTKEY'); CALL set_table_property ('PART', 'colocate_with', 'lineitem'); CALL set_table_property ('PART', 'bitmap_columns', 'P_SIZE,P_MFGR,P_BRAND,P_TYPE,P_CONTAINER'); CALL set_table_property ('PART', 'dictionary_encoding_columns', 'p_name:off,p_comment:off,p_mfgr,p_brand,p_type,p_container'); COMMIT; DROP TABLE IF EXISTS CUSTOMER; BEGIN; CREATE TABLE CUSTOMER ( C_CUSTKEY int NOT NULL PRIMARY KEY, C_NAME text NOT NULL, C_ADDRESS text NOT NULL, C_NATIONKEY int NOT NULL, C_PHONE text NOT NULL, C_ACCTBAL DECIMAL(15, 2) NOT NULL, C_MKTSEGMENT text NOT NULL, C_COMMENT text NOT NULL ); CALL set_table_property ('CUSTOMER', 'distribution_key', 'C_CUSTKEY'); CALL set_table_property ('CUSTOMER', 'colocate_with', 'lineitem'); CALL set_table_property ('CUSTOMER', 'bitmap_columns', 'C_NATIONKEY,C_MKTSEGMENT'); CALL set_table_property ('CUSTOMER', 
'dictionary_encoding_columns', 'c_name:off,c_address:off,c_phone:off,c_comment:off,c_mktsegment'); COMMIT; DROP TABLE IF EXISTS SUPPLIER; BEGIN; CREATE TABLE SUPPLIER ( S_SUPPKEY int NOT NULL PRIMARY KEY, S_NAME text NOT NULL, S_ADDRESS text NOT NULL, S_NATIONKEY int NOT NULL, S_PHONE text NOT NULL, S_ACCTBAL DECIMAL(15, 2) NOT NULL, S_COMMENT text NOT NULL ); CALL set_table_property ('SUPPLIER', 'distribution_key', 'S_SUPPKEY'); CALL set_table_property ('SUPPLIER', 'colocate_with', 'lineitem'); CALL set_table_property ('SUPPLIER', 'bitmap_columns', 'S_NATIONKEY'); CALL set_table_property ('SUPPLIER', 'dictionary_encoding_columns', ''); COMMIT; DROP TABLE IF EXISTS NATION; BEGIN; CREATE TABLE NATION ( N_NATIONKEY int NOT NULL PRIMARY KEY, N_NAME text NOT NULL, N_REGIONKEY int NOT NULL, N_COMMENT text NOT NULL ); CALL set_table_property ('NATION', 'distribution_key', 'N_NATIONKEY'); CALL set_table_property ('NATION', 'colocate_with', 'lineitem'); CALL set_table_property ('NATION', 'bitmap_columns', ''); CALL set_table_property ('NATION', 'dictionary_encoding_columns', ''); COMMIT; DROP TABLE IF EXISTS REGION; BEGIN; CREATE TABLE REGION ( R_REGIONKEY int NOT NULL PRIMARY KEY, R_NAME text NOT NULL, R_COMMENT text ); CALL set_table_property ('REGION', 'distribution_key', 'R_REGIONKEY'); CALL set_table_property ('REGION', 'colocate_with', 'lineitem'); CALL set_table_property ('REGION', 'bitmap_columns', ''); CALL set_table_property ('REGION', 'dictionary_encoding_columns', ''); COMMIT;
You can run the \dt command on the PostgreSQL client to check whether the tables are created. The following output is expected:
tpch_1sf=# \dt
                List of relations
 Schema |   Name   | Type  |       Owner
--------+----------+-------+--------------------
 public | customer | table | tpch_1sf_developer
 public | lineitem | table | tpch_1sf_developer
 public | nation   | table | tpch_1sf_developer
 public | orders   | table | tpch_1sf_developer
 public | part     | table | tpch_1sf_developer
 public | partsupp | table | tpch_1sf_developer
 public | region   | table | tpch_1sf_developer
 public | supplier | table | tpch_1sf_developer
(8 rows)
Import data.
You can import data by using one of the following methods:
In this topic, the COPY FROM STDIN statement is executed to import data. For more information about how to use the COPY FROM STDIN statement, see Use the COPY statement to import or export data. The TPC-H data is imported from the .tbl files generated during preparations to the tables created in Hologres. You can also run the following shell script in the directory of the data generation tool to import the data. For more information about the parameters in the shell script, see Connect to a Hologres instance from the PostgreSQL client.
for i in `ls *.tbl`; do echo $i; name=`echo $i| cut -d'.' -f1`; PGUSER=<AccessID> PGPASSWORD=<AccessKey> psql -p <Port> -h <Endpoint> -d <Database> -c "COPY $name from stdin with delimiter '|' csv;" < $i; done
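After the import completes, you can optionally verify the imported row counts from the PostgreSQL client. A minimal sketch (for SF = 1, LINEITEM contains about 6 million rows and ORDERS about 1.5 million rows):
-- Check the number of imported rows in each table (sketch).
SELECT count(*) FROM lineitem;
SELECT count(*) FROM orders;
SELECT count(*) FROM customer;
SELECT count(*) FROM part;
SELECT count(*) FROM partsupp;
SELECT count(*) FROM supplier;
SELECT count(*) FROM nation;
SELECT count(*) FROM region;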
Collect the statistics of tables.
To execute queries more efficiently, you can execute the following statements on the PostgreSQL client to collect the statistics of each table. We recommend that you wait until the compaction at the backend is complete before you execute queries. After the compaction is complete, the CPU load decreases and becomes stable.
-- Collect the statistics of tables.
analyze nation;
analyze region;
analyze lineitem;
analyze orders;
analyze customer;
analyze part;
analyze partsupp;
analyze supplier;
-- Collect statistics on the join keys that are not configured as primary keys.
analyze lineitem (l_orderkey,l_partkey,l_suppkey);
analyze orders (o_custkey);
analyze partsupp(ps_partkey,ps_suppkey);
Execute queries.
Install pgbench, which facilitates statistics collection on queries. You can run the following command to install pgbench:
yum install postgresql-contrib -y
To prevent incompatibility issues, we recommend that you install pgbench V13 or later. If you have locally installed pgbench, make sure that its version is later than V9.6. To view the version of pgbench, run the following command:
pgbench --version
To facilitate queries, you can download the package of 22 TPC-H query statements and upload the package to the ECS instance. For more information about how to upload a file to an ECS instance, see Upload files to ECS instances.
Log on to the ECS instance, go to the directory of the uploaded package, and then run the following shell command to decompress the package:
unzip tpch_query_updated_20210721.zip
Execute TPC-H query statements by using one of the following methods. For more information about the query statements, see 22 TPC-H query statements.
Use pgbench to execute query statements one by one.
The TPC-H benchmark test involves 22 query statements. For example, if you want to execute query statement Q6, specify 6.sql in the following statement:
PGUSER=<AccessID> PGPASSWORD=<AccessKey> PGDATABASE=<Database> pgbench -h <Endpoint> -p <Port> -c <Client_Num> -T <Query_Seconds> -n -f xxx.sql
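For example, the following hypothetical invocation runs Q6 with one client for 300 seconds, matching the parameter values described in the table below. The account, endpoint, and database values are illustrative only, and the q6.sql file name assumes the decompressed package names the files q1.sql through q22.sql, as referenced by the batch script that follows:
PGUSER=LTAI**************** PGPASSWORD=**************** PGDATABASE=tpch_1sf pgbench -h hgpostcn-cn-xxxxxxxxxxxx-cn-hangzhou.hologres.aliyuncs.com -p 80 -c 1 -T 300 -n -f ./tpch_query_updated_20210721/q6.sql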
Run the following shell script to execute 22 query statements at a time and export the results to the hologres_tpch_test.out file:
rm -f hologres_tpch_test.out
echo `date +"%Y-%m-%d %H:%M:%S"` begin >> ./hologres_tpch_test.out
for i in {1..22}
do
    PGUSER=<AccessID> PGPASSWORD=<AccessKey> PGDATABASE=<Database> pgbench -h <Endpoint> -p <Port> -c <Client_Num> -T <Query_Seconds> -n -f ./tpch_query_updated_20210721/q${i}.sql >> ./hologres_tpch_test.out
done
The following table describes the parameters in the shell script.
Parameter
Description
PGUSER
The AccessKey ID of the Alibaba Cloud account.
You can obtain the AccessKey ID from the Security Management page.
PGPASSWORD
The AccessKey secret of the Alibaba Cloud account.
You can obtain the AccessKey secret from the Security Management page.
-h
The endpoint of the Hologres instance.
You can obtain the endpoint of the Hologres instance from the Network Information section of the Instance Details page in the Hologres console.
-p
The port number of the Hologres instance.
You can obtain the port number of the Hologres instance from the Network Information section of the Instance Details page in the Hologres console.
PGDATABASE
The name of the database in the Hologres instance.
-c
The number of clients to be tested.
In this example, you need to test only query performance, but not the concurrency. Therefore, this parameter is set to 1.
-T
The stress testing duration of each query to be executed by each client. Unit: seconds. In this example, this parameter is set to 300.
-f
The SQL statement to be executed.
You can execute a specified SQL statement defined in a file named in the format of xxx.sql. The TPC-H benchmark test involves 22 query statements. For example, query statement Q6 is defined in the 6.sql file. If you want to execute all query statements at a time, you do not need to change the value of this parameter in the script.
For more information about the 22 TPC-H query statements, see 22 TPC-H query statements.
Check the query results in the hologres_tpch_test.out file. The transaction type parameter indicates the SQL file that was executed. The latency average parameter indicates the average response time of the three queries that were executed based on the SQL file. Sample output:
2021-03-23 03:50:54 begin
pghost: hgpostcn-cn-oew21c935002-cn-hangzhou.hologres.aliyuncs.com pgport: 80 nclients: 1 nxacts: 3 dbName: tpch_100
transaction type: ./tpch_data_tpch_query/1.sql
scaling factor: 1
query mode: simple
number of clients: 1
number of threads: 1
number of transactions per client: 3
number of transactions actually processed: 3/3
latency average = 76.936 ms
tps = 12.997850 (including connections establishing)
tps = 15.972757 (excluding connections establishing)
...
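When you run all 22 query statements with the batch script, the hologres_tpch_test.out file contains one such block per query. As an optional sketch, you can extract a compact summary of the executed files and their average latencies:
# Summarize the executed SQL files and their average latencies (sketch).
grep -E 'transaction type|latency average' hologres_tpch_test.out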
Performance test on point queries of key-value pairs
Before you test the performance of point queries of key-value pairs, you must create a table and import data to the table. After data is imported, you can generate and execute a query statement to run the test. To run a performance test on point queries of key-value pairs, perform the following steps:
Create a table.
You can create a table based on the schema defined for the ORDERS table in the TPC-H dataset and use the database created for the performance test on OLAP queries to run the performance test on point queries of key-value pairs. After you connect to the Hologres instance from the PostgreSQL client, you can execute the following statements to create the table required for the test. For more information about how to connect to a Hologres instance from the PostgreSQL client, see Connect to a Hologres instance from the PostgreSQL client.
Note: The performance test on point queries of key-value pairs requires a row-oriented table. Therefore, the tables created for the performance test on OLAP queries cannot be used. You must create a row-oriented table.
DROP TABLE IF EXISTS public.orders_row; BEGIN; CREATE TABLE public.orders_row( O_ORDERKEY BIGINT NOT NULL PRIMARY KEY ,O_CUSTKEY INT NOT NULL ,O_ORDERSTATUS TEXT NOT NULL ,O_TOTALPRICE DECIMAL(15,2) NOT NULL ,O_ORDERDATE TIMESTAMPTZ NOT NULL ,O_ORDERPRIORITY TEXT NOT NULL ,O_CLERK TEXT NOT NULL ,O_SHIPPRIORITY INT NOT NULL ,O_COMMENT TEXT NOT NULL ); CALL SET_TABLE_PROPERTY('public.orders_row', 'orientation', 'row'); CALL SET_TABLE_PROPERTY('public.orders_row', 'clustering_key', 'o_orderkey'); CALL SET_TABLE_PROPERTY('public.orders_row', 'distribution_key', 'o_orderkey'); COMMIT;
Import data.
You can import data by using one of the following methods:
Execute the COPY statement to import data
In this topic, the COPY FROM STDIN statement is executed to import data. For more information about how to use the COPY FROM STDIN statement, see Use the COPY statement to import or export data. The TPC-H data is imported from the orders.tbl file generated during preparations to the row-oriented table created in Hologres. You can also run the following command in the directory of the data generation tool to import the data. For more information about the parameters in the command, see Connect to a Hologres instance from the PostgreSQL client.
PGUSER=<AccessID> PGPASSWORD=<AccessKey> psql -p <Port> -h <Endpoint> -d <Database> -c "COPY public.orders_row from stdin with delimiter '|' csv;" < orders.tbl
Execute the INSERT INTO statement to import data
The data for the ORDERS table has been imported during the performance test on OLAP queries. You can run the following SQL statement to import the data to the row-oriented table:
INSERT INTO public.orders_row SELECT * FROM public.orders;
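You can optionally verify that the row-oriented table contains the same number of rows as the source table. A minimal sketch:
-- Compare the row counts of the source and target tables (sketch).
SELECT count(*) FROM public.orders;
SELECT count(*) FROM public.orders_row;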
Execute queries.
Generate a query statement.
The following table describes the two types of queries involved in the performance test on point queries of key-value pairs.
Query type: Single-value filtering
Query statement: SELECT column_a, column_b, ..., column_x FROM table_x WHERE pk = value_x;
Description: This query statement applies to single-value filtering where only one primary key value is specified in the WHERE clause.
Query type: Multi-value filtering
Query statement: SELECT column_a, column_b, ..., column_x FROM table_x WHERE pk IN (value_a, value_b, ..., value_x);
Description: This query statement applies to multi-value filtering where multiple primary key values are specified in the WHERE clause.
You can run the following script to generate the required SQL statements. Two SQL files are generated. The SQL statement that applies to single-value filtering is exported to the kv_query_single.sql file. The SQL statement that applies to multi-value filtering is exported to the kv_query_in.sql file, where multiple random primary key values are specified in the WHERE clause.
rm -rf kv_query
mkdir kv_query
cd kv_query
echo "
\set column_values random(1,99999999)
select O_ORDERKEY,O_CUSTKEY,O_ORDERSTATUS,O_TOTALPRICE,O_ORDERDATE,O_ORDERPRIORITY,O_CLERK,O_SHIPPRIORITY,O_COMMENT from public.orders_row WHERE o_orderkey =:column_values;
" >> kv_query_single.sql
echo "
\set column_values1 random(1,99999999)
\set column_values2 random(1,99999999)
\set column_values3 random(1,99999999)
\set column_values4 random(1,99999999)
\set column_values5 random(1,99999999)
\set column_values6 random(1,99999999)
\set column_values7 random(1,99999999)
\set column_values8 random(1,99999999)
\set column_values9 random(1,99999999)
select O_ORDERKEY,O_CUSTKEY,O_ORDERSTATUS,O_TOTALPRICE,O_ORDERDATE,O_ORDERPRIORITY,O_CLERK,O_SHIPPRIORITY,O_COMMENT from public.orders_row WHERE o_orderkey in(:column_values1,:column_values2,:column_values3,:column_values4,:column_values5,:column_values6,:column_values7,:column_values8,:column_values9);
" >> kv_query_in.sql
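Note that random(1,99999999) assumes that primary key values in this range exist in the table; with a small scale factor, many generated values will not match any row and the point queries will return empty results. As an optional sketch, you can check the actual key range first and adjust the upper bound of random() accordingly:
-- Check the primary key range of the row-oriented table (sketch).
SELECT min(o_orderkey), max(o_orderkey) FROM public.orders_row;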
Install pgbench, which facilitates statistics collection on queries. You can run the following command to install pgbench:
yum install postgresql-contrib -y
To prevent incompatibility issues, we recommend that you install pgbench V13 or later. If you have locally installed pgbench, make sure that its version is later than V9.6. To view the version of pgbench, run the following command:
pgbench --version
Execute the query statement.
Use pgbench to execute the query statement that adopts single-value filtering. You must execute the following statement in the directory where the SQL file resides. For more information about the parameters in the statement, see the table in the "Performance test on OLAP queries" section.
PGUSER=<AccessID> PGPASSWORD=<AccessKey> PGDATABASE=<Database> pgbench -h <Endpoint> -p <Port> -c <Client_Num> -T <Query_Seconds> -M prepared -n -f kv_query_single.sql
Use pgbench to execute the query statement that adopts multi-value filtering. You must execute the following statement in the directory where the SQL file resides. For more information about the parameters in the statement, see the table in the "Performance test on OLAP queries" section.
PGUSER=<AccessID> PGPASSWORD=<AccessKey> PGDATABASE=<Database> pgbench -h <Endpoint> -p <Port> -c <Client_Num> -T <Query_Seconds> -M prepared -n -f kv_query_in.sql
Data update test
You can test the performance of primary key-based data updates in Hologres. In this example, a row of data is inserted, or the existing row is updated if a primary key conflict occurs.
Generate a query statement.
echo "
\set O_ORDERKEY random(1,99999999)
INSERT INTO public.orders_row(o_orderkey,o_custkey,o_orderstatus,o_totalprice,o_orderdate,o_orderpriority,o_clerk,o_shippriority,o_comment) VALUES (:O_ORDERKEY,1,'demo',1.1,'2021-01-01','demo','demo',1,'demo') on conflict(o_orderkey) do update set (o_orderkey,o_custkey,o_orderstatus,o_totalprice,o_orderdate,o_orderpriority,o_clerk,o_shippriority,o_comment)= ROW(excluded.*);
" > /root/insert_on_conflict.sql
Execute the query statement to insert data, or update data if a primary key conflict occurs.
PGUSER=<AccessID> PGPASSWORD=<AccessKey> PGDATABASE=<Database> pgbench -h <Endpoint> -p 80 -c <Client_Num> -T <Query_Seconds> -M prepared -n -f /root/insert_on_conflict.sql
Sample results:
transaction type: Custom query
scaling factor: 1
query mode: prepared
number of clients: 249
number of threads: 1
duration: 60 s
number of transactions actually processed: 1923038
tps = 32005.850214 (including connections establishing)
tps = 36403.145722 (excluding connections establishing)
Performance test on using Realtime Compute for Apache Flink to write data to Hologres in real time
You can perform this test to evaluate the performance of using Realtime Compute for Apache Flink to write data to Hologres in real time.
Hologres DDL
In this example, a Hologres table that contains 10 columns is used. The key column is configured as the primary key. To create this Hologres table, you can execute the following DDL statements:
DROP TABLE IF EXISTS flink_insert;
BEGIN;
CREATE TABLE IF NOT EXISTS flink_insert(
    key INT PRIMARY KEY
    ,value1 TEXT
    ,value2 TEXT
    ,value3 TEXT
    ,value4 TEXT
    ,value5 TEXT
    ,value6 TEXT
    ,value7 TEXT
    ,value8 TEXT
    ,value9 TEXT
);
CALL SET_TABLE_PROPERTY('flink_insert', 'orientation', 'row');
CALL SET_TABLE_PROPERTY('flink_insert', 'clustering_key', 'key');
CALL SET_TABLE_PROPERTY('flink_insert', 'distribution_key', 'key');
COMMIT;
Scripts for a Flink job
You can use the random number generator provided by fully managed Flink to write data to Hologres. If a primary key conflict occurs, the corresponding row of data is updated. Each row contains more than 512 bytes of data. You can use the following SQL statements to run the Flink job:
CREATE TEMPORARY TABLE flink_case_1_source (
    key INT,
    value1 VARCHAR,
    value2 VARCHAR,
    value3 VARCHAR,
    value4 VARCHAR,
    value5 VARCHAR,
    value6 VARCHAR,
    value7 VARCHAR,
    value8 VARCHAR,
    value9 VARCHAR
) WITH (
    'connector' = 'datagen',
    -- optional options --
    'rows-per-second' = '1000000000',
    'fields.key.min'='1',
    'fields.key.max'='2147483647',
    'fields.value1.length' = '57',
    'fields.value2.length' = '57',
    'fields.value3.length' = '57',
    'fields.value4.length' = '57',
    'fields.value5.length' = '57',
    'fields.value6.length' = '57',
    'fields.value7.length' = '57',
    'fields.value8.length' = '57',
    'fields.value9.length' = '57'
);

-- Create a Hologres result table.
CREATE TEMPORARY TABLE flink_case_1_sink (
    key INT,
    value1 VARCHAR,
    value2 VARCHAR,
    value3 VARCHAR,
    value4 VARCHAR,
    value5 VARCHAR,
    value6 VARCHAR,
    value7 VARCHAR,
    value8 VARCHAR,
    value9 VARCHAR
) WITH (
    'connector' = 'hologres',
    'dbname'='<yourDbname>',                    -- The name of the Hologres database.
    'tablename'='<yourTablename>',              -- The name of the Hologres table to which you want to write data.
    'username'='<yourUsername>',                -- The AccessKey ID of your Alibaba Cloud account.
    'password'='<yourPassword>',                -- The AccessKey secret of your Alibaba Cloud account.
    'endpoint'='<yourEndpoint>',                -- The VPC endpoint of the Hologres instance to which you want to connect.
    'connectionSize' = '10',                    -- The default value is 3.
    'jdbcWriteBatchSize' = '1024',              -- The default value is 256.
    'jdbcWriteBatchByteSize' = '2147483647',    -- The default value is 20971520.
    'mutatetype'='insertorreplace'              -- Inserts a row of data, or replaces data in an entire row.
);

-- Extract, transform, and load (ETL) data and then write data to Hologres.
insert into flink_case_1_sink
select key, value1, value2, value3, value4, value5, value6, value7, value8, value9
from flink_case_1_source;
For more information about the parameters, see Hologres result table.
Sample results
On the Monitoring Information page of the Hologres instance in the Hologres console, you can view the records per second (RPS).
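In addition to the monitoring page, you can optionally confirm from the PostgreSQL client that data is being written. A minimal sketch:
-- Check how many rows the Flink job has written so far (sketch).
SELECT count(*) FROM flink_insert;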
22 TPC-H query statements
The 22 TPC-H query statements used in the performance test on OLAP queries are listed below.
Q1
select l_returnflag, l_linestatus, sum(l_quantity) as sum_qty, sum(l_extendedprice) as sum_base_price, sum(l_extendedprice * (1 - l_discount)) as sum_disc_price, sum(l_extendedprice * (1 - l_discount) * (1 + l_tax)) as sum_charge, avg(l_quantity) as avg_qty, avg(l_extendedprice) as avg_price, avg(l_discount) as avg_disc, count(*) as count_order from lineitem where l_shipdate <= date '1998-12-01' - interval '120' day group by l_returnflag, l_linestatus order by l_returnflag, l_linestatus;
Q2
select s_acctbal, s_name, n_name, p_partkey, p_mfgr, s_address, s_phone, s_comment from part, supplier, partsupp, nation, region where p_partkey = ps_partkey and s_suppkey = ps_suppkey and p_size = 48 and p_type like '%STEEL' and s_nationkey = n_nationkey and n_regionkey = r_regionkey and r_name = 'EUROPE' and ps_supplycost = ( select min(ps_supplycost) from partsupp, supplier, nation, region where p_partkey = ps_partkey and s_suppkey = ps_suppkey and s_nationkey = n_nationkey and n_regionkey = r_regionkey and r_name = 'EUROPE' ) order by s_acctbal desc, n_name, s_name, p_partkey limit 100;
Q3
select l_orderkey, sum(l_extendedprice * (1 - l_discount)) as revenue, o_orderdate, o_shippriority from customer, orders, lineitem where c_mktsegment = 'MACHINERY' and c_custkey = o_custkey and l_orderkey = o_orderkey and o_orderdate < date '1995-03-23' and l_shipdate > date '1995-03-23' group by l_orderkey, o_orderdate, o_shippriority order by revenue desc, o_orderdate limit 10;
Q4
select o_orderpriority, count(*) as order_count from orders where o_orderdate >= date '1996-07-01' and o_orderdate < date '1996-07-01' + interval '3' month and exists ( select * from lineitem where l_orderkey = o_orderkey and l_commitdate < l_receiptdate ) group by o_orderpriority order by o_orderpriority;
Q5
select n_name, sum(l_extendedprice * (1 - l_discount)) as revenue from customer, orders, lineitem, supplier, nation, region where c_custkey = o_custkey and l_orderkey = o_orderkey and l_suppkey = s_suppkey and c_nationkey = s_nationkey and s_nationkey = n_nationkey and n_regionkey = r_regionkey and r_name = 'EUROPE' and o_orderdate >= date '1996-01-01' and o_orderdate < date '1996-01-01' + interval '1' year group by n_name order by revenue desc;
Q6
select sum(l_extendedprice * l_discount) as revenue from lineitem where l_shipdate >= date '1996-01-01' and l_shipdate < date '1996-01-01' + interval '1' year and l_discount between 0.02 - 0.01 and 0.02 + 0.01 and l_quantity < 24;
Q7
select supp_nation, cust_nation, l_year, sum(volume) as revenue from ( select n1.n_name as supp_nation, n2.n_name as cust_nation, extract(year from l_shipdate) as l_year, l_extendedprice * (1 - l_discount) as volume from supplier, lineitem, orders, customer, nation n1, nation n2 where s_suppkey = l_suppkey and o_orderkey = l_orderkey and c_custkey = o_custkey and s_nationkey = n1.n_nationkey and c_nationkey = n2.n_nationkey and ( (n1.n_name = 'CANADA' and n2.n_name = 'BRAZIL') or (n1.n_name = 'BRAZIL' and n2.n_name = 'CANADA') ) and l_shipdate between date '1995-01-01' and date '1996-12-31' ) as shipping group by supp_nation, cust_nation, l_year order by supp_nation, cust_nation, l_year;
Q8
select o_year, sum(case when nation = 'BRAZIL' then volume else 0 end) / sum(volume) as mkt_share from ( select extract(year from o_orderdate) as o_year, l_extendedprice * (1 - l_discount) as volume, n2.n_name as nation from part, supplier, lineitem, orders, customer, nation n1, nation n2, region where p_partkey = l_partkey and s_suppkey = l_suppkey and l_orderkey = o_orderkey and o_custkey = c_custkey and c_nationkey = n1.n_nationkey and n1.n_regionkey = r_regionkey and r_name = 'AMERICA' and s_nationkey = n2.n_nationkey and o_orderdate between date '1995-01-01' and date '1996-12-31' and p_type = 'LARGE ANODIZED COPPER' ) as all_nations group by o_year order by o_year;
Q9
select nation, o_year, sum(amount) as sum_profit from ( select n_name as nation, extract(year from o_orderdate) as o_year, l_extendedprice * (1 - l_discount) - ps_supplycost * l_quantity as amount from part, supplier, lineitem, partsupp, orders, nation where s_suppkey = l_suppkey and ps_suppkey = l_suppkey and ps_partkey = l_partkey and p_partkey = l_partkey and o_orderkey = l_orderkey and s_nationkey = n_nationkey and p_name like '%maroon%' ) as profit group by nation, o_year order by nation, o_year desc;
Q10
select c_custkey, c_name, sum(l_extendedprice * (1 - l_discount)) as revenue, c_acctbal, n_name, c_address, c_phone, c_comment from customer, orders, lineitem, nation where c_custkey = o_custkey and l_orderkey = o_orderkey and o_orderdate >= date '1993-02-01' and o_orderdate < date '1993-02-01' + interval '3' month and l_returnflag = 'R' and c_nationkey = n_nationkey group by c_custkey, c_name, c_acctbal, c_phone, n_name, c_address, c_comment order by revenue desc limit 20;
Q11
select ps_partkey, sum(ps_supplycost * ps_availqty) as value from partsupp, supplier, nation where ps_suppkey = s_suppkey and s_nationkey = n_nationkey and n_name = 'EGYPT' group by ps_partkey having sum(ps_supplycost * ps_availqty) > ( select sum(ps_supplycost * ps_availqty) * 0.0001000000 from partsupp, supplier, nation where ps_suppkey = s_suppkey and s_nationkey = n_nationkey and n_name = 'EGYPT' ) order by value desc;
Q12
select l_shipmode, sum(case when o_orderpriority = '1-URGENT' or o_orderpriority = '2-HIGH' then 1 else 0 end) as high_line_count, sum(case when o_orderpriority <> '1-URGENT' and o_orderpriority <> '2-HIGH' then 1 else 0 end) as low_line_count from orders, lineitem where o_orderkey = l_orderkey and l_shipmode in ('FOB', 'AIR') and l_commitdate < l_receiptdate and l_shipdate < l_commitdate and l_receiptdate >= date '1997-01-01' and l_receiptdate < date '1997-01-01' + interval '1' year group by l_shipmode order by l_shipmode;
Q13
select c_count, count(*) as custdist from ( select c_custkey, count(o_orderkey) as c_count from customer left outer join orders on c_custkey = o_custkey and o_comment not like '%special%deposits%' group by c_custkey ) c_orders group by c_count order by custdist desc, c_count desc;
Q14
select 100.00 * sum(case when p_type like 'PROMO%' then l_extendedprice * (1 - l_discount) else 0 end) / sum(l_extendedprice * (1 - l_discount)) as promo_revenue from lineitem, part where l_partkey = p_partkey and l_shipdate >= date '1997-06-01' and l_shipdate < date '1997-06-01' + interval '1' month;
Q15
with revenue0(SUPPLIER_NO, TOTAL_REVENUE) as ( select l_suppkey, sum(l_extendedprice * (1 - l_discount)) from lineitem where l_shipdate >= date '1995-02-01' and l_shipdate < date '1995-02-01' + interval '3' month group by l_suppkey ) select s_suppkey, s_name, s_address, s_phone, total_revenue from supplier, revenue0 where s_suppkey = supplier_no and total_revenue = ( select max(total_revenue) from revenue0 ) order by s_suppkey;
Q16
select p_brand, p_type, p_size, count(distinct ps_suppkey) as supplier_cnt from partsupp, part where p_partkey = ps_partkey and p_brand <> 'Brand#45' and p_type not like 'SMALL ANODIZED%' and p_size in (47, 15, 37, 30, 46, 16, 18, 6) and ps_suppkey not in ( select s_suppkey from supplier where s_comment like '%Customer%Complaints%' ) group by p_brand, p_type, p_size order by supplier_cnt desc, p_brand, p_type, p_size;
Q17
select sum(l_extendedprice) / 7.0 as avg_yearly from lineitem, part where p_partkey = l_partkey and p_brand = 'Brand#51' and p_container = 'WRAP PACK' and l_quantity < ( select 0.2 * avg(l_quantity) from lineitem where l_partkey = p_partkey );
Q18
select c_name, c_custkey, o_orderkey, o_orderdate, o_totalprice, sum(l_quantity) from customer, orders, lineitem where o_orderkey in ( select l_orderkey from lineitem group by l_orderkey having sum(l_quantity) > 312 ) and c_custkey = o_custkey and o_orderkey = l_orderkey group by c_name, c_custkey, o_orderkey, o_orderdate, o_totalprice order by o_totalprice desc, o_orderdate limit 100;
Q19
select sum(l_extendedprice* (1 - l_discount)) as revenue from lineitem, part where ( p_partkey = l_partkey and p_brand = 'Brand#52' and p_container in ('SM CASE', 'SM BOX', 'SM PACK', 'SM PKG') and l_quantity >= 3 and l_quantity <= 3 + 10 and p_size between 1 and 5 and l_shipmode in ('AIR', 'AIR REG') and l_shipinstruct = 'DELIVER IN PERSON' ) or ( p_partkey = l_partkey and p_brand = 'Brand#43' and p_container in ('MED BAG', 'MED BOX', 'MED PKG', 'MED PACK') and l_quantity >= 12 and l_quantity <= 12 + 10 and p_size between 1 and 10 and l_shipmode in ('AIR', 'AIR REG') and l_shipinstruct = 'DELIVER IN PERSON' ) or ( p_partkey = l_partkey and p_brand = 'Brand#52' and p_container in ('LG CASE', 'LG BOX', 'LG PACK', 'LG PKG') and l_quantity >= 21 and l_quantity <= 21 + 10 and p_size between 1 and 15 and l_shipmode in ('AIR', 'AIR REG') and l_shipinstruct = 'DELIVER IN PERSON' );
Q20
select s_name, s_address from supplier, nation where s_suppkey in ( select ps_suppkey from partsupp where ps_partkey in ( select p_partkey from part where p_name like 'drab%' ) and ps_availqty > ( select 0.5 * sum(l_quantity) from lineitem where l_partkey = ps_partkey and l_suppkey = ps_suppkey and l_shipdate >= date '1996-01-01' and l_shipdate < date '1996-01-01' + interval '1' year ) ) and s_nationkey = n_nationkey and n_name = 'KENYA' order by s_name;
Q21
select s_name, count(*) as numwait from supplier, lineitem l1, orders, nation where s_suppkey = l1.l_suppkey and o_orderkey = l1.l_orderkey and o_orderstatus = 'F' and l1.l_receiptdate > l1.l_commitdate and exists ( select * from lineitem l2 where l2.l_orderkey = l1.l_orderkey and l2.l_suppkey <> l1.l_suppkey ) and not exists ( select * from lineitem l3 where l3.l_orderkey = l1.l_orderkey and l3.l_suppkey <> l1.l_suppkey and l3.l_receiptdate > l3.l_commitdate ) and s_nationkey = n_nationkey and n_name = 'PERU' group by s_name order by numwait desc, s_name limit 100;
Q22
select cntrycode, count(*) as numcust, sum(c_acctbal) as totacctbal from ( select substring(c_phone from 1 for 2) as cntrycode, c_acctbal from customer where substring(c_phone from 1 for 2) in ('24', '32', '17', '18', '12', '14', '22') and c_acctbal > ( select avg(c_acctbal) from customer where c_acctbal > 0.00 and substring(c_phone from 1 for 2) in ('24', '32', '17', '18', '12', '14', '22') ) and not exists ( select * from orders where o_custkey = c_custkey ) ) as custsale group by cntrycode order by cntrycode;