ApsaraDB for SelectDB is designed to provide high-performance and easy-to-use data analytics services. It delivers excellent performance in scenarios such as wide table aggregation, multi-table joins, and high-concurrency point queries. This topic describes the TPC-H standard testing methods and results for SelectDB.
Overview
TPC-H is a decision support benchmark. It consists of a suite of business-oriented ad-hoc queries and concurrent data modifications. The data that is used has broad industry relevance. The benchmark uses a series of query operations to assess a database system's performance in handling complex queries and data mining tasks. The performance metric reported by TPC-H is the TPC-H Composite Query-per-Hour Performance Metric (QphH@Size). This metric reflects multiple aspects of the query processing capability of the system. These aspects include the database size selected for query execution, the query processing power for queries submitted by a single stream, and the query throughput for queries submitted by multiple concurrent users.
The TPC-H implementation in this topic is based on the TPC-H benchmark but does not meet all of its requirements. The test results are not equivalent to and cannot be compared with official TPC-H benchmark results.
Standard benchmarks such as TPC-H often differ from real-world business scenarios. Some tests also involve parameter tuning for the specific benchmark. Therefore, these results reflect database performance only in specific scenarios. We recommend that you conduct tests using your business data.
Preparations
Step 1: Prepare a destination instance
Prepare an instance.
If you have a destination instance, ensure that its configuration meets the following requirements.
If you do not have a destination instance, you can create one.
The instance used for this performance test must meet the following requirements:
The Kernel version must be 4.1 or later.
If you have an instance but its kernel version is earlier than 4.1, you must upgrade the kernel version. For more information, see Upgrade the kernel version.
The Specifications must be 96 cores and 384 GB of memory or greater. This test uses an instance with 96 cores and 384 GB of memory.
The Cluster cache must be 1200 GB or larger. This test uses a 1200 GB cluster cache.
Set the `streaming_load_max_mb` parameter to its maximum value.
During the test, the tool uploads the test data to SelectDB by using Stream Load. The test data is large and exceeds the default maximum import limit of 10240 MB for Stream Load. You must set the `streaming_load_max_mb` parameter of the BE to its maximum value of 10240000 MB. For more information about how to modify parameters, see Configure parameters. For reference, the shape of a Stream Load request is illustrated after this list.
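Stream Load pushes each data file to SelectDB over HTTP. The following is a minimal sketch of a single Stream Load request; the host, account, and file name are placeholders, and the test tool's load script builds these requests for you (including the column mapping required for the trailing separator in .tbl files), so you do not need to run it manually.
# Illustrative only: push one .tbl file to the lineitem table by using Stream Load.
# Replace the host, port, account, password, and file path with your own values.
curl --location-trusted -u admin:**** \
  -H "column_separator:|" \
  -T lineitem.tbl.1 \
  http://selectdb-cn-****.selectdbfe.rds.aliyuncs.com:8080/api/test_db/lineitem/_stream_load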
Create a destination database for the test data.
If you already have a destination database, you can skip this step.
Connect to the instance. For more information, see Connect to an ApsaraDB for SelectDB instance using a MySQL client.
Create the database.
The destination database for this test is test_db. The SQL statement is as follows:
CREATE DATABASE test_db;
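If you connect with the MySQL client from the command line, the following is a minimal sketch; the endpoint is a placeholder, and 9030 is the default MySQL protocol port of the instance.
# Replace the endpoint with the VPC or public endpoint of your instance. Enter the password of the admin account when prompted.
mysql -h selectdb-cn-****.selectdbfe.rds.aliyuncs.com -P 9030 -u admin -p -e "CREATE DATABASE IF NOT EXISTS test_db;"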
Step 2: Prepare a test server
The following scripts for installing dependencies are applicable only to Linux operating systems. If your server uses a different operating system, you must modify the scripts as needed.
Usage notes
Note the following about your server:
If you plan to use Git to download the TPC-H test tool, you must enable public network access for the server.
New ECS instance: When you create an ECS instance, select Assign Public IPv4 Address for Public IP.
Existing ECS instance without public network access: To enable public network access for the ECS instance, see Enable public network access.
The data files generated for this test dataset are approximately 1000 GB in size. Make sure that the server has sufficient storage space.
Procedure
Create a destination server.
If you already have a destination server, you can skip this step.
If you do not have a destination server, you can create a custom ECS instance and select Alibaba Cloud Linux as the image.
Install the required dependencies.
Install the MySQL client.
yum install mysql
Install unzip.
yum install unzip
(Optional) Install Git.
This test uses Git to download the TPC-H tool. If you have already obtained the TPC-H tool using other methods and plan to upload it to the server manually, you can skip this step.
yum install git
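You can optionally confirm that the dependencies are available before you continue:
mysql --version
unzip -v
git --version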
Step 3: Ensure network connectivity
Ensure that the destination server on which the TPC-H test tool will be installed can communicate with the SelectDB instance.
Apply for a public endpoint for the SelectDB instance. For more information, see Apply for and release a public endpoint.
If the destination server is an Alibaba Cloud server and is in the same VPC as the ApsaraDB for SelectDB instance, you can skip this step.
Add the IP address of the destination server to the whitelist of the ApsaraDB for SelectDB instance. For more information, see Configure a whitelist.
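After the whitelist is configured, you can verify connectivity from the destination server. The following is a minimal sketch; the endpoint is a placeholder, and 9030 and 8080 are the default MySQL protocol and HTTP protocol ports.
# Check the MySQL protocol port, which is used for queries and table creation.
mysql -h selectdb-cn-****.selectdbfe.rds.aliyuncs.com -P 9030 -u admin -p -e "SELECT 1;"
# Check that the HTTP protocol port is reachable. Stream Load uses this port during data import.
curl -sS -o /dev/null -w "%{http_code}\n" http://selectdb-cn-****.selectdbfe.rds.aliyuncs.com:8080/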
Step 4: Understand the test dataset
In this test, the TPC-H tool generates 1000 GB of data, which is then imported into SelectDB to test its performance. The following table describes the tables that make up the 1000 GB test dataset.
TPC-H table name | Number of rows | Remarks |
REGION | 5 | Region table |
NATION | 25 | Nation table |
SUPPLIER | 10 million | Supplier table |
PART | 200 million | Part table |
PARTSUPP | 800 million | Part supply table |
CUSTOMER | 150 million | Customer table |
ORDERS | 1.5 billion | Orders table |
LINEITEM | 6 billion | Order details table |
Procedure
The following scripts are applicable only to Linux operating systems. If your server uses a different operating system, you must modify the scripts as needed.
Step 1: Log on to the destination server
If your server is an Alibaba Cloud ECS instance, see Connect to an ECS instance for instructions about how to log on.
For other types of servers, see their respective product documentation.
Step 2: Download and install the TPC-H data generation tool
Download the tool.
This test uses Git to download the tool. The script is as follows:
git clone https://github.com/apache/doris.git && cd ./doris/tools/tpch-tools
You can also download the tool from tpch-tools and manually upload it to the destination server.
Build the tool.
Run the following script to compile the tool.
sh bin/build-tpch-dbgen.sh
Step 3: Generate the TPC-H test dataset
The time it takes to generate the data increases with the data volume and depends on the performance of the server.
Run the script in the installation directory of the test tool to generate the test dataset.
The syntax is as follows:
sh bin/gen-tpch-data.sh -s <yourAimDataNum>
The parameter is described as follows:
yourAimDataNum:
Meaning: The size of the data to be generated by TPC-H.
Unit: GB
This is a medium-scale test that requires you to generate a 1000 GB (1 TB) test dataset. This step may take a long time. We recommend that you run this task in the background. The command is as follows:
nohup sh bin/gen-tpch-data.sh -s 1000 > gen-tpch-data.log 2>&1 &
The execution results are saved in the gen-tpch-data.log file in the installation directory of the test tool. You can view this file to verify that the process ran correctly.
The test dataset is saved in the tpch-data directory within the bin directory of the installation directory of the test tool. The data files have a .tbl suffix.
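You can check the generated files before you continue. The following is a minimal sketch that assumes the default directory layout of the tool:
# List the generated .tbl files and check their total size, which should be approximately 1000 GB.
ls -lh bin/tpch-data/*.tbl
du -sh bin/tpch-data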
Step 4: Use a script to create test tables in SelectDB
Configure the SelectDB instance in the doris-cluster.conf file.
Before you run the table creation script, you must configure the SelectDB instance information in the doris-cluster.conf file. This file is located in the tpch-tools/conf/ directory of the installation directory of the test tool. The following is an example:
# Any of FE host
export FE_HOST='selectdb-cn-****.selectdbfe.rds.aliyuncs.com'
# http_port in fe.conf
export FE_HTTP_PORT=8080
# query_port in fe.conf
export FE_QUERY_PORT=9030
# Doris username
export USER='admin'
# Doris password
export PASSWORD='****'
# The database where TPC-H tables located
export DB='test_db'
The parameters are described as follows:
Parameter | Description |
FE_HOST | The endpoint of the SelectDB instance. You can get the VPC endpoint or public endpoint from the Network Information section on the instance details page in the SelectDB console. |
FE_HTTP_PORT | The HTTP protocol port of the SelectDB instance. The default port is 8080. You can get the HTTP protocol port from the Network Information section on the instance details page in the SelectDB console. |
FE_QUERY_PORT | The MySQL protocol port of the SelectDB instance. The default port is 9030. You can get the MySQL protocol port from the Network Information section on the instance details page in the SelectDB console. |
USER | The account for the SelectDB instance. After you create a SelectDB instance, the system creates an admin account by default. |
PASSWORD | The password for the SelectDB instance account. If you set USER to the admin account but have forgotten the password, you can reset the admin password in the console. |
DB | The name of the database in the SelectDB instance into which the data is imported. |
Create the tables.
In the installation directory of the test tool, run the following script to create the test tables. After the script is run, the tables that are described in Step 4: Understand the test dataset are created in the destination database of SelectDB.
sh bin/create-tpch-tables.sh -s 1000
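You can verify that the tables were created by connecting to the instance and listing the tables in the destination database:
SHOW TABLES FROM test_db;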
Step 5: Import data into SelectDB
The time it takes to import the data increases with the data volume and depends on the performance of the server.
In the installation directory of the test tool, run the following script to import all data from the TPC-H test set into SelectDB.
sh bin/load-tpch-data.sh
This is a medium-scale test that requires you to import the generated 1000 GB (1 TB) test dataset into SelectDB. This step may take a long time. We recommend that you run this task in the background. The command is as follows:
nohup sh bin/load-tpch-data.sh > load-tpch-data.log 2>&1 &
The execution results are saved in the load-tpch-data.log file in the installation directory of the test tool. You can view this file to verify that the process ran correctly.
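After the import completes, you can spot-check the row counts against the table in Step 4: Understand the test dataset. For example:
-- The lineitem table should contain approximately 6 billion rows.
SELECT COUNT(*) FROM test_db.lineitem;
-- The orders table should contain approximately 1.5 billion rows.
SELECT COUNT(*) FROM test_db.orders;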
Step 6: Test query performance
Test batch SQL query performance
Important: The time it takes to run the batch test increases with the data volume and depends on the performance of the server.
You can run the TPC-H test SQL script to run the queries in the test set in a batch.
The syntax is as follows:
sh bin/run-tpch-queries.sh -s <yourAimDataNum>
The parameter is described as follows:
yourAimDataNum: Specifies the dataset size to ensure that the queries run against the correct dataset. This value must be the same as the scale that was used to generate the data. For example, if you used -s 1000 to generate data, you must also use -s 1000 to run the queries.
After the script is run, the console window displays the performance of each SQL query in the test set on SelectDB.
This is a medium-scale test that requires you to query a 1000 GB (1 TB) test dataset. This step may take a long time. We recommend that you run this task in the background. The command is as follows:
nohup sh bin/run-tpch-queries.sh -s 1000 > run-tpch-queries.log 2>&1 &
For more information about the SQL queries that are tested in the batch, see TPCH-Query-SQL.
Note: The query optimizer and statistics features of SelectDB are still being improved. For this reason, some TPC-H queries are rewritten to adapt to the SelectDB execution framework. These changes do not affect the correctness of the results.
The query performance results are saved in the run-tpch-queries.log file in the installation directory of the test tool. You can view this file to verify that the query process ran correctly and to view the test results. For the test results on 1000 GB of data in this topic, see Test results.
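To follow the progress of the background task, you can tail the log file:
tail -f run-tpch-queries.log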
Test single SQL query performance
You can also test the performance of a specific SQL query on SelectDB. To do so, perform the following steps:
Connect to the SelectDB instance. For more information, see Connect to an ApsaraDB for SelectDB instance using DMS.
Run the target SQL statement.
You can obtain the target SQL statement from TPC-H test query statements and run it.
You can also select and run one of the SQL statements that are used in this test.
--Q1
select l_returnflag, l_linestatus, sum(l_quantity) as sum_qty, sum(l_extendedprice) as sum_base_price, sum(l_extendedprice * (1 - l_discount)) as sum_disc_price, sum(l_extendedprice * (1 - l_discount) * (1 + l_tax)) as sum_charge, avg(l_quantity) as avg_qty, avg(l_extendedprice) as avg_price, avg(l_discount) as avg_disc, count(*) as count_order from lineitem where l_shipdate <= date '1998-12-01' - interval '90' day group by l_returnflag, l_linestatus order by l_returnflag, l_linestatus;
--Q2
select s_acctbal, s_name, n_name, p_partkey, p_mfgr, s_address, s_phone, s_comment from part, supplier, partsupp, nation, region where p_partkey = ps_partkey and s_suppkey = ps_suppkey and p_size = 15 and p_type like '%BRASS' and s_nationkey = n_nationkey and n_regionkey = r_regionkey and r_name = 'EUROPE' and ps_supplycost = ( select min(ps_supplycost) from partsupp, supplier, nation, region where p_partkey = ps_partkey and s_suppkey = ps_suppkey and s_nationkey = n_nationkey and n_regionkey = r_regionkey and r_name = 'EUROPE' ) order by s_acctbal desc, n_name, s_name, p_partkey limit 100;
--Q3
select l_orderkey, sum(l_extendedprice * (1 - l_discount)) as revenue, o_orderdate, o_shippriority from customer, orders, lineitem where c_mktsegment = 'BUILDING' and c_custkey = o_custkey and l_orderkey = o_orderkey and o_orderdate < date '1995-03-15' and l_shipdate > date '1995-03-15' group by l_orderkey, o_orderdate, o_shippriority order by revenue desc, o_orderdate limit 10;
--Q4
select o_orderpriority, count(*) as order_count from orders where o_orderdate >= date '1993-07-01' and o_orderdate < date '1993-07-01' + interval '3' month and exists ( select * from lineitem where l_orderkey = o_orderkey and l_commitdate < l_receiptdate ) group by o_orderpriority order by o_orderpriority;
--Q5
select n_name, sum(l_extendedprice * (1 - l_discount)) as revenue from customer, orders, lineitem, supplier, nation, region where c_custkey = o_custkey and l_orderkey = o_orderkey and l_suppkey = s_suppkey and c_nationkey = s_nationkey and s_nationkey = n_nationkey and n_regionkey = r_regionkey and r_name = 'ASIA' and o_orderdate >= date '1994-01-01' and o_orderdate < date '1994-01-01' + interval '1' year group by n_name order by revenue desc;
--Q6
select sum(l_extendedprice * l_discount) as revenue from lineitem where l_shipdate >= date '1994-01-01' and l_shipdate < date '1994-01-01' + interval '1' year and l_discount between .06 - 0.01 and .06 + 0.01 and l_quantity < 24;
--Q7
select supp_nation, cust_nation, l_year, sum(volume) as revenue from ( select n1.n_name as supp_nation, n2.n_name as cust_nation, extract(year from l_shipdate) as l_year, l_extendedprice * (1 - l_discount) as volume from supplier, lineitem, orders, customer, nation n1, nation n2 where s_suppkey = l_suppkey and o_orderkey = l_orderkey and c_custkey = o_custkey and s_nationkey = n1.n_nationkey and c_nationkey = n2.n_nationkey and ( (n1.n_name = 'FRANCE' and n2.n_name = 'GERMANY') or (n1.n_name = 'GERMANY' and n2.n_name = 'FRANCE') ) and l_shipdate between date '1995-01-01' and date '1996-12-31' ) as shipping group by supp_nation, cust_nation, l_year order by supp_nation, cust_nation, l_year;
--Q8
select o_year, sum(case when nation = 'BRAZIL' then volume else 0 end) / sum(volume) as mkt_share from ( select extract(year from o_orderdate) as o_year, l_extendedprice * (1 - l_discount) as volume, n2.n_name as nation from part, supplier, lineitem, orders, customer, nation n1, nation n2, region where p_partkey = l_partkey and s_suppkey = l_suppkey and l_orderkey = o_orderkey and o_custkey = c_custkey and c_nationkey = n1.n_nationkey and n1.n_regionkey = r_regionkey and r_name = 'AMERICA' and s_nationkey = n2.n_nationkey and o_orderdate between date '1995-01-01' and date '1996-12-31' and p_type = 'ECONOMY ANODIZED STEEL' ) as all_nations group by o_year order by o_year;
--Q9
select nation, o_year, sum(amount) as sum_profit from ( select n_name as nation, extract(year from o_orderdate) as o_year, l_extendedprice * (1 - l_discount) - ps_supplycost * l_quantity as amount from part, supplier, lineitem, partsupp, orders, nation where s_suppkey = l_suppkey and ps_suppkey = l_suppkey and ps_partkey = l_partkey and p_partkey = l_partkey and o_orderkey = l_orderkey and s_nationkey = n_nationkey and p_name like '%green%' ) as profit group by nation, o_year order by nation, o_year desc;
--Q10
select c_custkey, c_name, sum(l_extendedprice * (1 - l_discount)) as revenue, c_acctbal, n_name, c_address, c_phone, c_comment from customer, orders, lineitem, nation where c_custkey = o_custkey and l_orderkey = o_orderkey and o_orderdate >= date '1993-10-01' and o_orderdate < date '1993-10-01' + interval '3' month and l_returnflag = 'R' and c_nationkey = n_nationkey group by c_custkey, c_name, c_acctbal, c_phone, n_name, c_address, c_comment order by revenue desc limit 20;
--Q11
select ps_partkey, sum(ps_supplycost * ps_availqty) as value from partsupp, supplier, nation where ps_suppkey = s_suppkey and s_nationkey = n_nationkey and n_name = 'GERMANY' group by ps_partkey having sum(ps_supplycost * ps_availqty) > ( select sum(ps_supplycost * ps_availqty) * 0.000002 from partsupp, supplier, nation where ps_suppkey = s_suppkey and s_nationkey = n_nationkey and n_name = 'GERMANY' ) order by value desc;
--Q12
select l_shipmode, sum(case when o_orderpriority = '1-URGENT' or o_orderpriority = '2-HIGH' then 1 else 0 end) as high_line_count, sum(case when o_orderpriority <> '1-URGENT' and o_orderpriority <> '2-HIGH' then 1 else 0 end) as low_line_count from orders, lineitem where o_orderkey = l_orderkey and l_shipmode in ('MAIL', 'SHIP') and l_commitdate < l_receiptdate and l_shipdate < l_commitdate and l_receiptdate >= date '1994-01-01' and l_receiptdate < date '1994-01-01' + interval '1' year group by l_shipmode order by l_shipmode;
--Q13
select c_count, count(*) as custdist from ( select c_custkey, count(o_orderkey) as c_count from customer left outer join orders on c_custkey = o_custkey and o_comment not like '%special%requests%' group by c_custkey ) as c_orders group by c_count order by custdist desc, c_count desc;
--Q14
select 100.00 * sum(case when p_type like 'PROMO%' then l_extendedprice * (1 - l_discount) else 0 end) / sum(l_extendedprice * (1 - l_discount)) as promo_revenue from lineitem, part where l_partkey = p_partkey and l_shipdate >= date '1995-09-01' and l_shipdate < date '1995-09-01' + interval '1' month;
--Q15
select s_suppkey, s_name, s_address, s_phone, total_revenue from supplier, revenue0 where s_suppkey = supplier_no and total_revenue = ( select max(total_revenue) from revenue0 ) order by s_suppkey;
--Q16
select p_brand, p_type, p_size, count(distinct ps_suppkey) as supplier_cnt from partsupp, part where p_partkey = ps_partkey and p_brand <> 'Brand#45' and p_type not like 'MEDIUM POLISHED%' and p_size in (49, 14, 23, 45, 19, 3, 36, 9) and ps_suppkey not in ( select s_suppkey from supplier where s_comment like '%Customer%Complaints%' ) group by p_brand, p_type, p_size order by supplier_cnt desc, p_brand, p_type, p_size;
--Q17
select sum(l_extendedprice) / 7.0 as avg_yearly from lineitem, part where p_partkey = l_partkey and p_brand = 'Brand#23' and p_container = 'MED BOX' and l_quantity < ( select 0.2 * avg(l_quantity) from lineitem where l_partkey = p_partkey );
--Q18
select c_name, c_custkey, o_orderkey, o_orderdate, o_totalprice, sum(l_quantity) from customer, orders, lineitem where o_orderkey in ( select l_orderkey from lineitem group by l_orderkey having sum(l_quantity) > 300 ) and c_custkey = o_custkey and o_orderkey = l_orderkey group by c_name, c_custkey, o_orderkey, o_orderdate, o_totalprice order by o_totalprice desc, o_orderdate limit 100;
--Q19
select sum(l_extendedprice * (1 - l_discount)) as revenue from lineitem, part where ( p_partkey = l_partkey and p_brand = 'Brand#12' and p_container in ('SM CASE', 'SM BOX', 'SM PACK', 'SM PKG') and l_quantity >= 1 and l_quantity <= 1 + 10 and p_size between 1 and 5 and l_shipmode in ('AIR', 'AIR REG') and l_shipinstruct = 'DELIVER IN PERSON' ) or ( p_partkey = l_partkey and p_brand = 'Brand#23' and p_container in ('MED BAG', 'MED BOX', 'MED PKG', 'MED PACK') and l_quantity >= 10 and l_quantity <= 10 + 10 and p_size between 1 and 10 and l_shipmode in ('AIR', 'AIR REG') and l_shipinstruct = 'DELIVER IN PERSON' ) or ( p_partkey = l_partkey and p_brand = 'Brand#34' and p_container in ('LG CASE', 'LG BOX', 'LG PACK', 'LG PKG') and l_quantity >= 20 and l_quantity <= 20 + 10 and p_size between 1 and 15 and l_shipmode in ('AIR', 'AIR REG') and l_shipinstruct = 'DELIVER IN PERSON' );
--Q20
select s_name, s_address from supplier, nation where s_suppkey in ( select ps_suppkey from partsupp where ps_partkey in ( select p_partkey from part where p_name like 'forest%' ) and ps_availqty > ( select 0.5 * sum(l_quantity) from lineitem where l_partkey = ps_partkey and l_suppkey = ps_suppkey and l_shipdate >= date '1994-01-01' and l_shipdate < date '1994-01-01' + interval '1' year ) ) and s_nationkey = n_nationkey and n_name = 'CANADA' order by s_name;
--Q21
select s_name, count(*) as numwait from supplier, lineitem l1, orders, nation where s_suppkey = l1.l_suppkey and o_orderkey = l1.l_orderkey and o_orderstatus = 'F' and l1.l_receiptdate > l1.l_commitdate and exists ( select * from lineitem l2 where l2.l_orderkey = l1.l_orderkey and l2.l_suppkey <> l1.l_suppkey ) and not exists ( select * from lineitem l3 where l3.l_orderkey = l1.l_orderkey and l3.l_suppkey <> l1.l_suppkey and l3.l_receiptdate > l3.l_commitdate ) and s_nationkey = n_nationkey and n_name = 'SAUDI ARABIA' group by s_name order by numwait desc, s_name limit 100;
--Q22
select cntrycode, count(*) as numcust, sum(c_acctbal) as totacctbal from ( select substring(c_phone, 1, 2) as cntrycode, c_acctbal from customer where substring(c_phone, 1, 2) in ('13', '31', '23', '29', '30', '18', '17') and c_acctbal > ( select avg(c_acctbal) from customer where c_acctbal > 0.00 and substring(c_phone, 1, 2) in ('13', '31', '23', '29', '30', '18', '17') ) and not exists ( select * from orders where o_custkey = c_custkey ) ) as custsale group by cntrycode order by cntrycode;
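If you prefer to time a single query from the shell instead of an interactive session, the following is a minimal sketch; it assumes that you saved one of the queries above to a local file named q1.sql and that the placeholders are replaced with your instance information.
# q1.sql is a hypothetical file that contains one of the queries above. Replace the endpoint and password with your own values.
time mysql -h selectdb-cn-****.selectdbfe.rds.aliyuncs.com -P 9030 -u admin -p'****' -D test_db < q1.sql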
Test results
The following table shows the TPC-H 1000 GB query performance test results. The test was run on a SelectDB instance that has kernel version 4.1.1, 96 cores, 384 GB of memory, and a 1200 GB cluster cache.
Query | Response time on TPC-H 1000 GB (s) |
Q1 | 7.04 |
Q2 | 0.16 |
Q3 | 1.73 |
Q4 | 0.99 |
Q5 | 3.42 |
Q6 | 0.16 |
Q7 | 1.04 |
Q8 | 1.89 |
Q9 | 8.93 |
Q10 | 2.66 |
Q11 | 0.4 |
Q12 | 0.35 |
Q13 | 5.33 |
Q14 | 0.37 |
Q15 | 0.94 |
Q16 | 0.71 |
Q17 | 0.46 |
Q18 | 9.36 |
Q19 | 0.76 |
Q20 | 0.33 |
Q21 | 2.57 |
Q22 | 1.87 |
Total | 51.47 |