This topic describes how to use TPC-H, a decision support benchmark, to run performance tests for OLAP query scenarios and Key/Value point query scenarios.
Introduction to TPC-H
The following description is quoted from the TPC Benchmark™ H (TPC-H) specification:
"TPC-H is a decision support benchmark that consists of a suite of business-oriented ad-hoc queries and concurrent data modifications. The queries and the data populating the database have been chosen to have broad industry-wide relevance. This benchmark illustrates decision support systems that examine large volumes of data, execute queries with a high degree of complexity, and give answers to critical business questions."
For more information, see the TPC-H specification.
The TPC-H implementation in this topic is based on the TPC-H benchmark test, but it does not meet all the requirements of the official TPC-H benchmark. Therefore, the test results cannot be compared with published TPC-H benchmark results.
Introduction to the dataset
TPC-H is a test set developed by the Transaction Processing Performance Council (TPC) to simulate decision support applications. It is widely used in academia and industry to evaluate the performance of decision support technologies.
TPC-H models a real production environment and simulates a data warehouse for a sales system. The benchmark consists of eight tables, with data volumes that can be scaled from 1 GB to 3 TB, and 22 queries. The main evaluation metric is the response time of each query, which is the duration from the submission of a query to the retrieval of its result. The test results reflect the overall query processing capability of the system. For more information, see TPC-H benchmark.
Scenario description
The test scenarios in this topic include the following parts:
- OLAP query scenario test: uses column-oriented tables and runs the 22 query statements of the TPC-H test.
- Key/Value point query scenario test: uses row-oriented tables and performs point queries with primary key filters on the ORDERS table.
- Data update scenario: tests the data update performance of the OLAP engine when a primary key is available.
The data volume directly affects the test results. The TPC-H data generation tool uses a scale factor (SF) to control the size of the generated data. 1 SF is equivalent to 1 GB.
The stated data volumes apply only to raw data and do not include space for indexes. Therefore, reserve additional space when you prepare the environment.
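As a rough capacity-planning aid, table row counts scale approximately linearly with the SF. The following sketch is an illustration rather than part of the toolkit; the SF = 1 base cardinalities follow the TPC-H specification.

```python
# Approximate TPC-H table row counts as a function of the scale factor (SF).
# Base cardinalities at SF = 1 follow the TPC-H specification; NATION and
# REGION have fixed sizes regardless of SF, and LINEITEM is only roughly
# linear (about 6 million rows per SF).
SF1_ROWS = {
    "lineitem": 6_001_215,
    "orders": 1_500_000,
    "partsupp": 800_000,
    "part": 200_000,
    "customer": 150_000,
    "supplier": 10_000,
    "nation": 25,  # fixed
    "region": 5,   # fixed
}

def estimated_rows(sf: int) -> dict:
    """Estimate per-table row counts for a scale factor (1 SF ~ 1 GB raw data)."""
    return {
        table: rows if table in ("nation", "region") else rows * sf
        for table, rows in SF1_ROWS.items()
    }

print(estimated_rows(10)["orders"])  # 15000000 rows at SF 10
```

These estimates cover raw data only; as noted above, index space comes on top of this.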
Note
To reduce variables that may affect the test results, create a new instance for each test. Do not use an instance whose specifications have been upgraded or downgraded.
OLAP query scenario test
- Preparations
Prepare the basic environment for the OLAP query scenario.
- Create a Hologres instance. For more information, see Purchase a Hologres instance. This test uses a pay-as-you-go dedicated instance. Because the instance is used only for testing, its compute resources are set to 96 cores and 384 GB of memory. Select compute resource specifications based on your business requirements.
- Create an ECS instance. For more information, see Create an ECS instance. The following ECS instance configuration is used in this topic:
- Specification: ecs.g6.4xlarge
- Image: Alibaba Cloud Linux 3.2104 LTS 64-bit
- Data disk: Type: enterprise SSD. The capacity depends on the volume of the test data.
- Download and configure the Hologres Benchmark test toolkit.
- Log on to the ECS instance. For more information, see Connect to an ECS instance.
- Install the PSQL client.
yum update -y
yum install postgresql-server -y
yum install postgresql-contrib -y
- Download and extract the Hologres Benchmark test toolkit.
wget https://oss-tpch.oss-cn-hangzhou.aliyuncs.com/hologres_benchmark.tar.gz
tar xvf hologres_benchmark.tar.gz
- Go to the hologres_benchmark directory.
cd hologres_benchmark
- Run the vim group_vars/all command to configure the benchmark parameters.
# DB config
login_host: ""
login_user: ""
login_password: ""
login_port: ""

# Benchmark run
cluster: "hologres"
RUN_MODE: "HOTRUN"

# Benchmark config
scale_factor: 1
work_dir_root: /your/working_dir/benchmark/workdirs
dataset_generate_root_path: /your/working_dir/benchmark/datasets
Parameter description:
Hologres connection parameters:
- login_host: The VPC domain name of the Hologres instance. Log on to the management console, go to the instance details page, and obtain the domain name for the target VPC from the Domain Name column in the Network Information section. Note: The domain name does not include the port. Example: hgpostcn-cn-nwy364b5v009-cn-shanghai-vpc-st.hologres.aliyuncs.com
- login_port: The VPC port of the Hologres instance. Log on to the management console, go to the instance details page, and obtain the port from the Domain Name column in the Network Information section.
- login_user: The AccessKey ID of your account. Click AccessKey Management to obtain the AccessKey ID.
- login_password: The AccessKey secret of your account.

Benchmark configuration parameters:
- scale_factor: The scale factor of the dataset, which controls the size of the generated data. Default value: 1. Unit: GB.
- work_dir_root: The root of the working directory, which stores TPC-H-related data such as table creation statements and the SQL statements to be executed. Default value: /your/working_dir/benchmark/workdirs.
- dataset_generate_root_path: The path in which the generated test dataset is stored. Default value: /your/working_dir/benchmark/datasets.
- Run the following command to perform an end-to-end automated TPC-H test.
The end-to-end automated TPC-H test includes data generation, creation of a test database named tpc_h_sf<scale_factor> (for example, tpc_h_sf1000), table creation, and data import.
bin/run_tpch.sh
You can also run the following command to perform only the TPC-H query test.
bin/run_tpch.sh query
- View the test results.
- Test result overview
After the bin/run_tpch.sh command is run, the test results are displayed directly. The results resemble the following output.
TASK [tpc_h : debug] **************************************************************************************************
skipping: [worker-1]
ok: [master] => {
    "command_output.stdout_lines": [
        "[info] 2024-06-28 14:46:09.768 | Run sql queries started.",
        "[info] 2024-06-28 14:46:09.947 | Run q10.sql started.",
        "[info] 2024-06-28 14:46:10.088 | Run q10.sql finished. Time taken: 0:00:00, 138 ms",
        "[info] 2024-06-28 14:46:10.239 | Run q11.sql started.",
        "[info] 2024-06-28 14:46:10.396 | Run q11.sql finished. Time taken: 0:00:00, 154 ms",
        "[info] 2024-06-28 14:46:10.505 | Run q12.sql started.",
        "[info] 2024-06-28 14:46:10.592 | Run q12.sql finished. Time taken: 0:00:00, 85 ms",
        "[info] 2024-06-28 14:46:10.703 | Run q13.sql started.",
        "[info] 2024-06-28 14:46:10.793 | Run q13.sql finished. Time taken: 0:00:00, 88 ms",
        "[info] 2024-06-28 14:46:10.883 | Run q14.sql started.",
        "[info] 2024-06-28 14:46:10.981 | Run q14.sql finished. Time taken: 0:00:00, 95 ms",
        "[info] 2024-06-28 14:46:11.132 | Run q15.sql started.",
        "[info] 2024-06-28 14:46:11.266 | Run q15.sql finished. Time taken: 0:00:00, 131 ms",
        "[info] 2024-06-28 14:46:11.441 | Run q16.sql started.",
        "[info] 2024-06-28 14:46:11.609 | Run q16.sql finished. Time taken: 0:00:00, 165 ms",
        "[info] 2024-06-28 14:46:11.728 | Run q17.sql started.",
        "[info] 2024-06-28 14:46:11.818 | Run q17.sql finished. Time taken: 0:00:00, 88 ms",
        "[info] 2024-06-28 14:46:12.017 | Run q18.sql started.",
        "[info] 2024-06-28 14:46:12.184 | Run q18.sql finished. Time taken: 0:00:00, 164 ms",
        "[info] 2024-06-28 14:46:12.287 | Run q19.sql started.",
        "[info] 2024-06-28 14:46:12.388 | Run q19.sql finished. Time taken: 0:00:00, 98 ms",
        "[info] 2024-06-28 14:46:12.503 | Run q1.sql started.",
        "[info] 2024-06-28 14:46:12.597 | Run q1.sql finished. Time taken: 0:00:00, 93 ms",
        "[info] 2024-06-28 14:46:12.732 | Run q20.sql started.",
        "[info] 2024-06-28 14:46:12.888 | Run q20.sql finished. Time taken: 0:00:00, 154 ms",
        "[info] 2024-06-28 14:46:13.184 | Run q21.sql started.",
        "[info] 2024-06-28 14:46:13.456 | Run q21.sql finished. Time taken: 0:00:00, 269 ms",
        "[info] 2024-06-28 14:46:13.558 | Run q22.sql started.",
        "[info] 2024-06-28 14:46:13.657 | Run q22.sql finished. Time taken: 0:00:00, 97 ms",
        "[info] 2024-06-28 14:46:13.796 | Run q2.sql started.",
        "[info] 2024-06-28 14:46:13.935 | Run q2.sql finished. Time taken: 0:00:00, 136 ms",
        "[info] 2024-06-28 14:46:14.051 | Run q3.sql started.",
        "[info] 2024-06-28 14:46:14.155 | Run q3.sql finished. Time taken: 0:00:00, 101 ms",
        "[info] 2024-06-28 14:46:14.255 | Run q4.sql started.",
        "[info] 2024-06-28 14:46:14.341 | Run q4.sql finished. Time taken: 0:00:00, 83 ms",
        "[info] 2024-06-28 14:46:14.567 | Run q5.sql started.",
        "[info] 2024-06-28 14:46:14.799 | Run q5.sql finished. Time taken: 0:00:00, 230 ms",
        "[info] 2024-06-28 14:46:14.881 | Run q6.sql started.",
        "[info] 2024-06-28 14:46:14.950 | Run q6.sql finished. Time taken: 0:00:00, 67 ms",
        "[info] 2024-06-28 14:46:15.138 | Run q7.sql started.",
        "[info] 2024-06-28 14:46:15.320 | Run q7.sql finished. Time taken: 0:00:00, 180 ms",
        "[info] 2024-06-28 14:46:15.572 | Run q8.sql started.",
        "[info] 2024-06-28 14:46:15.831 | Run q8.sql finished. Time taken: 0:00:00, 256 ms",
        "[info] 2024-06-28 14:46:16.081 | Run q9.sql started.",
        "[info] 2024-06-28 14:46:16.322 | Run q9.sql finished. Time taken: 0:00:00, 238 ms",
        "[info] 2024-06-28 14:46:16.325 | ----------- HOT RUN finished. Time taken: 3255 mill_sec -----------------"
    ]
}
skipping: [worker-2]
skipping: [worker-3]
skipping: [worker-4]

TASK [tpc_h : clear Env] **********************************************************************************************
skipping: [worker-1]
skipping: [worker-2]
skipping: [worker-3]
skipping: [worker-4]
ok: [master]

TASK [tpc_h : debug] **************************************************************************************************
ok: [master] => {
    "work_dir": "/your/working_dir/benchmark/workdirs/tpc_h/sf1"
}
skipping: [worker-1]
skipping: [worker-2]
skipping: [worker-3]
skipping: [worker-4]
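Each per-query line in the output above (and in run.log) has a fixed format, so the timings can be extracted with a short script for further analysis. This is an optional convenience sketch, not part of the Hologres Benchmark toolkit; the sample lines below are copied from the output above.

```python
import re

# Sample lines in the format emitted by bin/run_tpch.sh.
log_lines = [
    "[info] 2024-06-28 14:46:10.088 | Run q10.sql finished. Time taken: 0:00:00, 138 ms",
    "[info] 2024-06-28 14:46:10.396 | Run q11.sql finished. Time taken: 0:00:00, 154 ms",
    "[info] 2024-06-28 14:46:10.592 | Run q12.sql finished. Time taken: 0:00:00, 85 ms",
]

# Capture the query name and its duration in milliseconds.
pattern = re.compile(r"Run (q\d+)\.sql finished\. Time taken: [\d:]+, (\d+) ms")

timings = {}
for line in log_lines:
    m = pattern.search(line)
    if m:
        timings[m.group(1)] = int(m.group(2))

print(timings)                 # {'q10': 138, 'q11': 154, 'q12': 85}
print(sum(timings.values()))   # total milliseconds across the parsed queries: 377
```

Feeding all 22 "finished" lines through the same pattern gives a per-query summary that can be compared across runs.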
- Test result details
After you run the bin/run_tpch.sh command, the system builds the entire TPC-H test working directory and displays the directory path <work_dir>. You can switch to this path to view related information such as the query statements, table creation statements, and execution logs. The following figure shows an example.
Run the cd <work_dir>/logs command to go to the logs directory in the working directory, where you can view the test results and the detailed results of the executed SQL statements.
The directory structure of <work_dir> is as follows.
working_dir/
`-- benchmark
    |-- datasets
    |   `-- tpc_h
    |       `-- sf1
    |           |-- worker-1
    |           |   |-- customer.tbl
    |           |   `-- lineitem.tbl
    |           |-- worker-2
    |           |   |-- orders.tbl
    |           |   `-- supplier.tbl
    |           |-- worker-3
    |           |   |-- nation.tbl
    |           |   `-- partsupp.tbl
    |           `-- worker-4
    |               |-- part.tbl
    |               `-- region.tbl
    `-- workdirs
        `-- tpc_h
            `-- sf1
                |-- config
                |-- hologres
                |   |-- logs
                |   |   |-- q10.sql.err
                |   |   |-- q10.sql.out
                |   |   |-- q11.sql.err
                |   |   |-- q11.sql.out
                |   |   |-- q12.sql.err
                |   |   |-- q12.sql.out
                |   |   |-- q13.sql.err
                |   |   |-- q13.sql.out
                |   |   |-- q14.sql.err
                |   |   |-- q14.sql.out
                |   |   |-- q15.sql.err
                |   |   |-- q15.sql.out
                |   |   |-- q16.sql.err
                |   |   |-- q16.sql.out
                |   |   |-- q17.sql.err
                |   |   |-- q17.sql.out
                |   |   |-- q18.sql.err
                |   |   |-- q18.sql.out
                |   |   |-- q19.sql.err
                |   |   |-- q19.sql.out
                |   |   |-- q1.sql.err
                |   |   |-- q1.sql.out
                |   |   |-- q20.sql.err
                |   |   |-- q20.sql.out
                |   |   |-- q21.sql.err
                |   |   |-- q21.sql.out
                |   |   |-- q22.sql.err
                |   |   |-- q22.sql.out
                |   |   |-- q2.sql.err
                |   |   |-- q2.sql.out
                |   |   |-- q3.sql.err
                |   |   |-- q3.sql.out
                |   |   |-- q4.sql.err
                |   |   |-- q4.sql.out
                |   |   |-- q5.sql.err
                |   |   |-- q5.sql.out
                |   |   |-- q6.sql.err
                |   |   |-- q6.sql.out
                |   |   |-- q7.sql.err
                |   |   |-- q7.sql.out
                |   |   |-- q8.sql.err
                |   |   |-- q8.sql.out
                |   |   |-- q9.sql.err
                |   |   |-- q9.sql.out
                |   |   `-- run.log
                |   `-- logs-20240628144609
                |       |-- q10.sql.err
                |       |-- q10.sql.out
                |       |-- q11.sql.err
                |       |-- q11.sql.out
                |       |-- q12.sql.err
                |       |-- q12.sql.out
                |       |-- q13.sql.err
                |       |-- q13.sql.out
                |       |-- q14.sql.err
                |       |-- q14.sql.out
                |       |-- q15.sql.err
                |       |-- q15.sql.out
                |       |-- q16.sql.err
                |       |-- q16.sql.out
                |       |-- q17.sql.err
                |       |-- q17.sql.out
                |       |-- q18.sql.err
                |       |-- q18.sql.out
                |       |-- q19.sql.err
                |       |-- q19.sql.out
                |       |-- q1.sql.err
                |       |-- q1.sql.out
                |       |-- q20.sql.err
                |       |-- q20.sql.out
                |       |-- q21.sql.err
                |       |-- q21.sql.out
                |       |-- q22.sql.err
                |       |-- q22.sql.out
                |       |-- q2.sql.err
                |       |-- q2.sql.out
                |       |-- q3.sql.err
                |       |-- q3.sql.out
                |       |-- q4.sql.err
                |       |-- q4.sql.out
                |       |-- q5.sql.err
                |       |-- q5.sql.out
                |       |-- q6.sql.err
                |       |-- q6.sql.out
                |       |-- q7.sql.err
                |       |-- q7.sql.out
                |       |-- q8.sql.err
                |       |-- q8.sql.out
                |       |-- q9.sql.err
                |       |-- q9.sql.out
                |       `-- run.log
                |-- queries
                |   |-- ddl
                |   |   |-- hologres_analyze_tables.sql
                |   |   `-- hologres_create_tables.sql
                |   |-- q10.sql
                |   |-- q11.sql
                |   |-- q12.sql
                |   |-- q13.sql
                |   |-- q14.sql
                |   |-- q15.sql
                |   |-- q16.sql
                |   |-- q17.sql
                |   |-- q18.sql
                |   |-- q19.sql
                |   |-- q1.sql
                |   |-- q20.sql
                |   |-- q21.sql
                |   |-- q22.sql
                |   |-- q2.sql
                |   |-- q3.sql
                |   |-- q4.sql
                |   |-- q5.sql
                |   |-- q6.sql
                |   |-- q7.sql
                |   |-- q8.sql
                |   `-- q9.sql
                |-- run_hologres.sh
                |-- run_mysql.sh
                |-- run.sh
                `-- tpch_tools
                    |-- dbgen
                    |-- qgen
                    `-- resouces
                        |-- dists.dss
                        `-- queries
                            |-- 10.sql
                            |-- 11.sql
                            |-- 12.sql
                            |-- 13.sql
                            |-- 14.sql
                            |-- 15.sql
                            |-- 16.sql
                            |-- 17.sql
                            |-- 18.sql
                            |-- 19.sql
                            |-- 1.sql
                            |-- 20.sql
                            |-- 21.sql
                            |-- 22.sql
                            |-- 2.sql
                            |-- 3.sql
                            |-- 4.sql
                            |-- 5.sql
                            |-- 6.sql
                            |-- 7.sql
                            |-- 8.sql
                            `-- 9.sql
Key/Value point query scenario test
For the Key/Value point query scenario test, you can continue to use the hologres_tpch database and the orders table that were created in the OLAP query scenario test. The procedure is as follows:
- Create a table
Because the Key/Value point query scenario uses row-oriented tables, you cannot directly use the orders table from the OLAP query scenario test and must create a new table. Connect to Hologres by using the PSQL client and run the following statements to create the orders_row table.
Note: For more information about how to connect to Hologres by using the PSQL client, see Connect to Hologres for development.
DROP TABLE IF EXISTS public.orders_row;
BEGIN;
CREATE TABLE public.orders_row (
    O_ORDERKEY      BIGINT        NOT NULL PRIMARY KEY,
    O_CUSTKEY       INT           NOT NULL,
    O_ORDERSTATUS   TEXT          NOT NULL,
    O_TOTALPRICE    DECIMAL(15,2) NOT NULL,
    O_ORDERDATE     TIMESTAMPTZ   NOT NULL,
    O_ORDERPRIORITY TEXT          NOT NULL,
    O_CLERK         TEXT          NOT NULL,
    O_SHIPPRIORITY  INT           NOT NULL,
    O_COMMENT       TEXT          NOT NULL
);
CALL SET_TABLE_PROPERTY('public.orders_row', 'orientation', 'row');
CALL SET_TABLE_PROPERTY('public.orders_row', 'clustering_key', 'o_orderkey');
CALL SET_TABLE_PROPERTY('public.orders_row', 'distribution_key', 'o_orderkey');
COMMIT;
Impor data
Gunakan pernyataan INSERT INTO berikut untuk mengimpor data dari tabel orders dalam dataset TPC-H ke tabel orders_row.
Note: Hologres V2.1.17 and later support Serverless Computing. For scenarios such as large-scale offline data imports, large extract, transform, and load (ETL) jobs, and large queries on foreign tables, you can use Serverless Computing to execute the tasks. This feature uses additional serverless resources instead of the resources of your own instance, so you do not need to reserve extra compute resources for your instance. This significantly improves instance stability, reduces the probability of out-of-memory (OOM) errors, and you are charged only for the individual tasks. For more information about Serverless Computing, see Serverless Computing. For information about how to use Serverless Computing, see the usage guide of Serverless Computing.
-- (Optional) Use Serverless Computing to perform large-scale offline data imports and ETL jobs.
SET hg_computing_resource = 'serverless';
INSERT INTO public.orders_row SELECT * FROM public.orders;
-- Reset the configuration so that SQL statements that do not need serverless resources do not consume them.
RESET hg_computing_resource;
- Run queries
- Generate the query statements.
The Key/Value point query scenario involves two main query types. The query statements are as follows:
Single-value filtering:
SELECT column_a, column_b, ..., column_x FROM table_x WHERE pk = value_x;
This statement performs single-value filtering: the WHERE clause of the SQL statement matches a unique value.

Multi-value filtering:
SELECT column_a, column_b, ..., column_x FROM table_x WHERE pk IN (value_a, value_b, ..., value_x);
This statement performs multi-value filtering: the WHERE clause of the SQL statement can match multiple values.

Use the following script to generate the required SQL statements.
rm -rf kv_query
mkdir kv_query
cd kv_query
echo "
\set column_values random(1,99999999)
select O_ORDERKEY,O_CUSTKEY,O_ORDERSTATUS,O_TOTALPRICE,O_ORDERDATE,O_ORDERPRIORITY,O_CLERK,O_SHIPPRIORITY,O_COMMENT from public.orders_row WHERE o_orderkey =:column_values;
" >> kv_query_single.sql
echo "
\set column_values1 random(1,99999999)
\set column_values2 random(1,99999999)
\set column_values3 random(1,99999999)
\set column_values4 random(1,99999999)
\set column_values5 random(1,99999999)
\set column_values6 random(1,99999999)
\set column_values7 random(1,99999999)
\set column_values8 random(1,99999999)
\set column_values9 random(1,99999999)
select O_ORDERKEY,O_CUSTKEY,O_ORDERSTATUS,O_TOTALPRICE,O_ORDERDATE,O_ORDERPRIORITY,O_CLERK,O_SHIPPRIORITY,O_COMMENT from public.orders_row WHERE o_orderkey in(:column_values1,:column_values2,:column_values3,:column_values4,:column_values5,:column_values6,:column_values7,:column_values8,:column_values9);
" >> kv_query_in.sql
After the script is run, two SQL files are generated:
- kv_query_single.sql: the SQL statement for single-value filtering.
- kv_query_in.sql: the SQL statement for multi-value filtering. The script randomly generates SQL statements that filter on multiple values.
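If you want reproducible statements with fixed keys instead of pgbench's \set random substitution, the same two statement shapes can be generated with a short script. This is an illustrative sketch; the column list and table name match the pgbench scripts above, and the helper names are hypothetical.

```python
import random

# Column list and table from the generated kv_query_*.sql statements above.
COLUMNS = ("O_ORDERKEY,O_CUSTKEY,O_ORDERSTATUS,O_TOTALPRICE,O_ORDERDATE,"
           "O_ORDERPRIORITY,O_CLERK,O_SHIPPRIORITY,O_COMMENT")

def point_query(key: int) -> str:
    """Single-value filter: one unique primary-key lookup."""
    return f"SELECT {COLUMNS} FROM public.orders_row WHERE o_orderkey = {key};"

def multi_point_query(keys) -> str:
    """Multi-value filter: an IN list on the primary key."""
    in_list = ", ".join(str(k) for k in keys)
    return f"SELECT {COLUMNS} FROM public.orders_row WHERE o_orderkey IN ({in_list});"

# Nine keys in the same range the pgbench script samples from.
random.seed(42)
keys = [random.randint(1, 99_999_999) for _ in range(9)]
print(point_query(keys[0]))
print(multi_point_query(keys))
```

Fixed keys make repeated runs hit the same rows, which is useful when you want to separate cache effects from random-access latency.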
- To facilitate the collection of query statistics, use the pgbench tool. Run the following command to install pgbench.
yum install postgresql-contrib -y
To avoid test issues caused by tool incompatibility, install pgbench version 13 or later. If pgbench is already installed, make sure that its version is 9.6 or later. Run the following command to check the current version.
pgbench --version
- Run the test statements.
Note: Run the following commands in the directory in which the query statements were generated.
- For the single-value filtering scenario, use pgbench to perform a stress test.
PGUSER=<AccessKey_ID> PGPASSWORD=<AccessKey_Secret> PGDATABASE=<Database> pgbench -h <Endpoint> -p <Port> -c <Client_Num> -T <Query_Seconds> -M prepared -n -f kv_query_single.sql
- For the multi-value filtering scenario, use pgbench to perform a stress test.
PGUSER=<AccessKey_ID> PGPASSWORD=<AccessKey_Secret> PGDATABASE=<Database> pgbench -h <Endpoint> -p <Port> -c <Client_Num> -T <Query_Seconds> -M prepared -n -f kv_query_in.sql
The parameters are described as follows.
- AccessKey_ID: The AccessKey ID of your Alibaba Cloud account. Click AccessKey Management to obtain the AccessKey ID.
- AccessKey_Secret: The AccessKey secret of your Alibaba Cloud account. Click AccessKey Management to obtain the AccessKey secret.
- Database: The name of the Hologres database. After you activate a Hologres instance, the system automatically creates the postgres database. You can use the postgres database to connect to Hologres, but only a small amount of resources is allocated to it. For business development, create a new database. For more information, see Create a database.
- Endpoint: The endpoint of the Hologres instance. Go to the instance details page in the Hologres console and obtain the endpoint from the Network Information section.
- Port: The network port of the Hologres instance. Go to the Instance Details page in the Hologres console to obtain the port.
- Client_Num: The number of clients, that is, the concurrency. In this example, the test evaluates only query performance rather than concurrency, so the concurrency is set to 1.
- Query_Seconds: The stress test duration, in seconds, for each query run by each client. In this topic, this parameter is set to 300.
Data update scenario
This scenario tests the data update performance of the OLAP engine when a primary key exists, and the performance of whole-row updates when a primary key conflict occurs.
- Generate the query.
echo "
\set O_ORDERKEY random(1,99999999)
INSERT INTO public.orders_row(o_orderkey,o_custkey,o_orderstatus,o_totalprice,o_orderdate,o_orderpriority,o_clerk,o_shippriority,o_comment) VALUES (:O_ORDERKEY,1,'demo',1.1,'2021-01-01','demo','demo',1,'demo') on conflict(o_orderkey) do update set (o_orderkey,o_custkey,o_orderstatus,o_totalprice,o_orderdate,o_orderpriority,o_clerk,o_shippriority,o_comment)= ROW(excluded.*);
" > /root/insert_on_conflict.sql
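The ON CONFLICT clause above turns the INSERT into an upsert: a new o_orderkey inserts a row, and a conflicting key replaces the entire existing row (ROW(excluded.*)). The following Python sketch is only an illustrative model of that semantics, not part of the test procedure.

```python
# In-memory model of the upsert: o_orderkey -> row.
table = {}

def upsert(row: dict) -> str:
    """Insert the row, or replace the whole existing row on a key conflict."""
    key = row["o_orderkey"]
    action = "update" if key in table else "insert"
    table[key] = row  # whole-row replacement, mirroring ROW(excluded.*)
    return action

print(upsert({"o_orderkey": 7, "o_comment": "demo"}))     # insert
print(upsert({"o_orderkey": 7, "o_comment": "demo v2"}))  # update
```

Because the pgbench script draws keys at random from a large range, each transaction exercises a mix of both branches.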
- Insert and update data. For more information about the parameters, see the parameter description above.
PGUSER=<AccessKey_ID> PGPASSWORD=<AccessKey_Secret> PGDATABASE=<Database> pgbench -h <Endpoint> -p <Port> -c <Client_Num> -T <Query_Seconds> -M prepared -n -f /root/insert_on_conflict.sql
- Example results
transaction type: Custom query
scaling factor: 1
query mode: prepared
number of clients: 249
number of threads: 1
duration: 60 s
number of transactions actually processed: 1923038
tps = 32005.850214 (including connections establishing)
tps = 36403.145722 (excluding connections establishing)
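In this scenario each pgbench transaction executes one INSERT ... ON CONFLICT statement, so the reported tps approximates the upsert rate. The two tps figures can be extracted from output like the example above with a short script (an optional convenience, not part of the test procedure):

```python
import re

# Example output copied from the run above.
pgbench_output = """\
number of clients: 249
number of threads: 1
duration: 60 s
number of transactions actually processed: 1923038
tps = 32005.850214 (including connections establishing)
tps = 36403.145722 (excluding connections establishing)
"""

# pgbench prints two tps lines: with and without connection-setup time.
tps_values = [float(v) for v in re.findall(r"tps = ([\d.]+)", pgbench_output)]
including, excluding = tps_values
print(round(including), round(excluding))  # 32006 36403
```

The "excluding connections establishing" figure is the one to compare across runs, because connection setup is a fixed cost unrelated to query performance.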
Flink real-time write scenario
This scenario tests the real-time data write capability.
- Hologres DDL
In this scenario, the Hologres table contains 10 columns, and the key column is the primary key. The Hologres DDL is as follows.
DROP TABLE IF EXISTS flink_insert;
BEGIN;
CREATE TABLE IF NOT EXISTS flink_insert (
    key INT PRIMARY KEY,
    value1 TEXT,
    value2 TEXT,
    value3 TEXT,
    value4 TEXT,
    value5 TEXT,
    value6 TEXT,
    value7 TEXT,
    value8 TEXT,
    value9 TEXT
);
CALL SET_TABLE_PROPERTY('flink_insert', 'orientation', 'row');
CALL SET_TABLE_PROPERTY('flink_insert', 'clustering_key', 'key');
CALL SET_TABLE_PROPERTY('flink_insert', 'distribution_key', 'key');
COMMIT;
- Flink deployment script
Use the random data generator provided by fully managed Flink to write data to Hologres. If a primary key conflict occurs, the entire row is updated. The data volume of a single row exceeds 512 B. The Flink deployment script is as follows.
CREATE TEMPORARY TABLE flink_case_1_source (
    key INT,
    value1 VARCHAR,
    value2 VARCHAR,
    value3 VARCHAR,
    value4 VARCHAR,
    value5 VARCHAR,
    value6 VARCHAR,
    value7 VARCHAR,
    value8 VARCHAR,
    value9 VARCHAR
) WITH (
    'connector' = 'datagen',
    -- optional options
    -- 'rows-per-second' = '1000000000',
    'fields.key.min' = '1',
    'fields.key.max' = '2147483647',
    'fields.value1.length' = '57',
    'fields.value2.length' = '57',
    'fields.value3.length' = '57',
    'fields.value4.length' = '57',
    'fields.value5.length' = '57',
    'fields.value6.length' = '57',
    'fields.value7.length' = '57',
    'fields.value8.length' = '57',
    'fields.value9.length' = '57'
);

-- Create a Hologres sink table.
CREATE TEMPORARY TABLE flink_case_1_sink (
    key INT,
    value1 VARCHAR,
    value2 VARCHAR,
    value3 VARCHAR,
    value4 VARCHAR,
    value5 VARCHAR,
    value6 VARCHAR,
    value7 VARCHAR,
    value8 VARCHAR,
    value9 VARCHAR
) WITH (
    'connector' = 'hologres',
    'dbname' = '<yourDbname>',               -- The name of the Hologres database.
    'tablename' = '<yourTablename>',         -- The name of the Hologres table that receives data.
    'username' = '<yourUsername>',           -- The AccessKey ID of your Alibaba Cloud account.
    'password' = '<yourPassword>',           -- The AccessKey secret of your Alibaba Cloud account.
    'endpoint' = '<yourEndpoint>',           -- The VPC endpoint of the Hologres instance.
    'connectionSize' = '10',                 -- The default value is 3.
    'jdbcWriteBatchSize' = '1024',           -- The default value is 256.
    'jdbcWriteBatchByteSize' = '2147483647', -- The default value is 20971520.
    'mutatetype' = 'insertorreplace'         -- Inserts data or replaces an entire existing row.
);

-- Perform ETL operations and write data.
insert into flink_case_1_sink
select key, value1, value2, value3, value4, value5, value6, value7, value8, value9
from flink_case_1_source;
For the parameter descriptions, see Hologres sink table.
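The statement that a single row exceeds 512 B follows directly from the datagen settings above: nine VARCHAR fields of 57 characters each already total 513 bytes of string payload before the INT key and any storage overhead are counted. A back-of-envelope check, assuming 1 byte per ASCII character:

```python
# Field sizes from the datagen source definition above.
n_fields = 9
chars_per_field = 57

value_bytes = n_fields * chars_per_field  # 513 bytes of string payload
row_bytes = value_bytes + 4               # plus the 4-byte INT key, before storage overhead

print(value_bytes, row_bytes)  # 513 517
assert row_bytes > 512
```

Actual on-disk row size is larger once encoding and storage overhead are included, so 512 B is a conservative lower bound.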
- Example results
On the Monitoring Information page of the Hologres console, you can view the RPS value.
The 22 TPC-H query statements
The 22 TPC-H query statements are as follows.
- Q1
select
    l_returnflag,
    l_linestatus,
    sum(l_quantity) as sum_qty,
    sum(l_extendedprice) as sum_base_price,
    sum(l_extendedprice * (1 - l_discount)) as sum_disc_price,
    sum(l_extendedprice * (1 - l_discount) * (1 + l_tax)) as sum_charge,
    avg(l_quantity) as avg_qty,
    avg(l_extendedprice) as avg_price,
    avg(l_discount) as avg_disc,
    count(*) as count_order
from
    lineitem
where
    l_shipdate <= date '1998-12-01' - interval '120' day
group by
    l_returnflag,
    l_linestatus
order by
    l_returnflag,
    l_linestatus;
- Q2
select
    s_acctbal,
    s_name,
    n_name,
    p_partkey,
    p_mfgr,
    s_address,
    s_phone,
    s_comment
from
    part,
    supplier,
    partsupp,
    nation,
    region
where
    p_partkey = ps_partkey
    and s_suppkey = ps_suppkey
    and p_size = 48
    and p_type like '%STEEL'
    and s_nationkey = n_nationkey
    and n_regionkey = r_regionkey
    and r_name = 'EUROPE'
    and ps_supplycost = (
        select
            min(ps_supplycost)
        from
            partsupp,
            supplier,
            nation,
            region
        where
            p_partkey = ps_partkey
            and s_suppkey = ps_suppkey
            and s_nationkey = n_nationkey
            and n_regionkey = r_regionkey
            and r_name = 'EUROPE'
    )
order by
    s_acctbal desc,
    n_name,
    s_name,
    p_partkey
limit 100;
- Q3
select
    l_orderkey,
    sum(l_extendedprice * (1 - l_discount)) as revenue,
    o_orderdate,
    o_shippriority
from
    customer,
    orders,
    lineitem
where
    c_mktsegment = 'MACHINERY'
    and c_custkey = o_custkey
    and l_orderkey = o_orderkey
    and o_orderdate < date '1995-03-23'
    and l_shipdate > date '1995-03-23'
group by
    l_orderkey,
    o_orderdate,
    o_shippriority
order by
    revenue desc,
    o_orderdate
limit 10;
- Q4
select
    o_orderpriority,
    count(*) as order_count
from
    orders
where
    o_orderdate >= date '1996-07-01'
    and o_orderdate < date '1996-07-01' + interval '3' month
    and exists (
        select
            *
        from
            lineitem
        where
            l_orderkey = o_orderkey
            and l_commitdate < l_receiptdate
    )
group by
    o_orderpriority
order by
    o_orderpriority;
- Q5
select
    n_name,
    sum(l_extendedprice * (1 - l_discount)) as revenue
from
    customer,
    orders,
    lineitem,
    supplier,
    nation,
    region
where
    c_custkey = o_custkey
    and l_orderkey = o_orderkey
    and l_suppkey = s_suppkey
    and c_nationkey = s_nationkey
    and s_nationkey = n_nationkey
    and n_regionkey = r_regionkey
    and r_name = 'EUROPE'
    and o_orderdate >= date '1996-01-01'
    and o_orderdate < date '1996-01-01' + interval '1' year
group by
    n_name
order by
    revenue desc;
- Q6
select
    sum(l_extendedprice * l_discount) as revenue
from
    lineitem
where
    l_shipdate >= date '1996-01-01'
    and l_shipdate < date '1996-01-01' + interval '1' year
    and l_discount between 0.02 - 0.01 and 0.02 + 0.01
    and l_quantity < 24;
- Q7
select
    supp_nation,
    cust_nation,
    l_year,
    sum(volume) as revenue
from
    (
        select
            n1.n_name as supp_nation,
            n2.n_name as cust_nation,
            extract(year from l_shipdate) as l_year,
            l_extendedprice * (1 - l_discount) as volume
        from
            supplier,
            lineitem,
            orders,
            customer,
            nation n1,
            nation n2
        where
            s_suppkey = l_suppkey
            and o_orderkey = l_orderkey
            and c_custkey = o_custkey
            and s_nationkey = n1.n_nationkey
            and c_nationkey = n2.n_nationkey
            and (
                (n1.n_name = 'CANADA' and n2.n_name = 'BRAZIL')
                or (n1.n_name = 'BRAZIL' and n2.n_name = 'CANADA')
            )
            and l_shipdate between date '1995-01-01' and date '1996-12-31'
    ) as shipping
group by
    supp_nation,
    cust_nation,
    l_year
order by
    supp_nation,
    cust_nation,
    l_year;
- Q8
select
    o_year,
    sum(case when nation = 'BRAZIL' then volume else 0 end) / sum(volume) as mkt_share
from
    (
        select
            extract(year from o_orderdate) as o_year,
            l_extendedprice * (1 - l_discount) as volume,
            n2.n_name as nation
        from
            part,
            supplier,
            lineitem,
            orders,
            customer,
            nation n1,
            nation n2,
            region
        where
            p_partkey = l_partkey
            and s_suppkey = l_suppkey
            and l_orderkey = o_orderkey
            and o_custkey = c_custkey
            and c_nationkey = n1.n_nationkey
            and n1.n_regionkey = r_regionkey
            and r_name = 'AMERICA'
            and s_nationkey = n2.n_nationkey
            and o_orderdate between date '1995-01-01' and date '1996-12-31'
            and p_type = 'LARGE ANODIZED COPPER'
    ) as all_nations
group by
    o_year
order by
    o_year;
- Q9
select
    nation,
    o_year,
    sum(amount) as sum_profit
from
    (
        select
            n_name as nation,
            extract(year from o_orderdate) as o_year,
            l_extendedprice * (1 - l_discount) - ps_supplycost * l_quantity as amount
        from
            part,
            supplier,
            lineitem,
            partsupp,
            orders,
            nation
        where
            s_suppkey = l_suppkey
            and ps_suppkey = l_suppkey
            and ps_partkey = l_partkey
            and p_partkey = l_partkey
            and o_orderkey = l_orderkey
            and s_nationkey = n_nationkey
            and p_name like '%maroon%'
    ) as profit
group by
    nation,
    o_year
order by
    nation,
    o_year desc;
- Q10
select
    c_custkey,
    c_name,
    sum(l_extendedprice * (1 - l_discount)) as revenue,
    c_acctbal,
    n_name,
    c_address,
    c_phone,
    c_comment
from
    customer,
    orders,
    lineitem,
    nation
where
    c_custkey = o_custkey
    and l_orderkey = o_orderkey
    and o_orderdate >= date '1993-02-01'
    and o_orderdate < date '1993-02-01' + interval '3' month
    and l_returnflag = 'R'
    and c_nationkey = n_nationkey
group by
    c_custkey,
    c_name,
    c_acctbal,
    c_phone,
    n_name,
    c_address,
    c_comment
order by
    revenue desc
limit 20;
- Q11
select
    ps_partkey,
    sum(ps_supplycost * ps_availqty) as value
from
    partsupp,
    supplier,
    nation
where
    ps_suppkey = s_suppkey
    and s_nationkey = n_nationkey
    and n_name = 'EGYPT'
group by
    ps_partkey
having
    sum(ps_supplycost * ps_availqty) > (
        select
            sum(ps_supplycost * ps_availqty) * 0.0001000000
        from
            partsupp,
            supplier,
            nation
        where
            ps_suppkey = s_suppkey
            and s_nationkey = n_nationkey
            and n_name = 'EGYPT'
    )
order by
    value desc;
- Q12
select
    l_shipmode,
    sum(case when o_orderpriority = '1-URGENT' or o_orderpriority = '2-HIGH' then 1 else 0 end) as high_line_count,
    sum(case when o_orderpriority <> '1-URGENT' and o_orderpriority <> '2-HIGH' then 1 else 0 end) as low_line_count
from
    orders,
    lineitem
where
    o_orderkey = l_orderkey
    and l_shipmode in ('FOB', 'AIR')
    and l_commitdate < l_receiptdate
    and l_shipdate < l_commitdate
    and l_receiptdate >= date '1997-01-01'
    and l_receiptdate < date '1997-01-01' + interval '1' year
group by
    l_shipmode
order by
    l_shipmode;
- Q13
select
    c_count,
    count(*) as custdist
from
    (
        select
            c_custkey,
            count(o_orderkey) as c_count
        from
            customer left outer join orders on
                c_custkey = o_custkey
                and o_comment not like '%special%deposits%'
        group by
            c_custkey
    ) c_orders
group by
    c_count
order by
    custdist desc,
    c_count desc;
- Q14
select
    100.00 * sum(case when p_type like 'PROMO%' then l_extendedprice * (1 - l_discount) else 0 end)
        / sum(l_extendedprice * (1 - l_discount)) as promo_revenue
from
    lineitem,
    part
where
    l_partkey = p_partkey
    and l_shipdate >= date '1997-06-01'
    and l_shipdate < date '1997-06-01' + interval '1' month;
- Q15
with revenue0(SUPPLIER_NO, TOTAL_REVENUE) as (
    select
        l_suppkey,
        sum(l_extendedprice * (1 - l_discount))
    from
        lineitem
    where
        l_shipdate >= date '1995-02-01'
        and l_shipdate < date '1995-02-01' + interval '3' month
    group by
        l_suppkey
)
select
    s_suppkey,
    s_name,
    s_address,
    s_phone,
    total_revenue
from
    supplier,
    revenue0
where
    s_suppkey = supplier_no
    and total_revenue = (
        select
            max(total_revenue)
        from
            revenue0
    )
order by
    s_suppkey;
- Q16
select
    p_brand,
    p_type,
    p_size,
    count(distinct ps_suppkey) as supplier_cnt
from
    partsupp,
    part
where
    p_partkey = ps_partkey
    and p_brand <> 'Brand#45'
    and p_type not like 'SMALL ANODIZED%'
    and p_size in (47, 15, 37, 30, 46, 16, 18, 6)
    and ps_suppkey not in (
        select
            s_suppkey
        from
            supplier
        where
            s_comment like '%Customer%Complaints%'
    )
group by
    p_brand,
    p_type,
    p_size
order by
    supplier_cnt desc,
    p_brand,
    p_type,
    p_size;
- Q17
select
    sum(l_extendedprice) / 7.0 as avg_yearly
from
    lineitem,
    part
where
    p_partkey = l_partkey
    and p_brand = 'Brand#51'
    and p_container = 'WRAP PACK'
    and l_quantity < (
        select
            0.2 * avg(l_quantity)
        from
            lineitem
        where
            l_partkey = p_partkey
    );
- Q18
select
    c_name,
    c_custkey,
    o_orderkey,
    o_orderdate,
    o_totalprice,
    sum(l_quantity)
from
    customer,
    orders,
    lineitem
where
    o_orderkey in (
        select
            l_orderkey
        from
            lineitem
        group by
            l_orderkey
        having
            sum(l_quantity) > 312
    )
    and c_custkey = o_custkey
    and o_orderkey = l_orderkey
group by
    c_name,
    c_custkey,
    o_orderkey,
    o_orderdate,
    o_totalprice
order by
    o_totalprice desc,
    o_orderdate
limit 100;
- Q19
select
    sum(l_extendedprice * (1 - l_discount)) as revenue
from
    lineitem,
    part
where
    (
        p_partkey = l_partkey
        and p_brand = 'Brand#52'
        and p_container in ('SM CASE', 'SM BOX', 'SM PACK', 'SM PKG')
        and l_quantity >= 3 and l_quantity <= 3 + 10
        and p_size between 1 and 5
        and l_shipmode in ('AIR', 'AIR REG')
        and l_shipinstruct = 'DELIVER IN PERSON'
    )
    or
    (
        p_partkey = l_partkey
        and p_brand = 'Brand#43'
        and p_container in ('MED BAG', 'MED BOX', 'MED PKG', 'MED PACK')
        and l_quantity >= 12 and l_quantity <= 12 + 10
        and p_size between 1 and 10
        and l_shipmode in ('AIR', 'AIR REG')
        and l_shipinstruct = 'DELIVER IN PERSON'
    )
    or
    (
        p_partkey = l_partkey
        and p_brand = 'Brand#52'
        and p_container in ('LG CASE', 'LG BOX', 'LG PACK', 'LG PKG')
        and l_quantity >= 21 and l_quantity <= 21 + 10
        and p_size between 1 and 15
        and l_shipmode in ('AIR', 'AIR REG')
        and l_shipinstruct = 'DELIVER IN PERSON'
    );
- Q20
select
    s_name,
    s_address
from
    supplier,
    nation
where
    s_suppkey in (
        select
            ps_suppkey
        from
            partsupp
        where
            ps_partkey in (
                select
                    p_partkey
                from
                    part
                where
                    p_name like 'drab%'
            )
            and ps_availqty > (
                select
                    0.5 * sum(l_quantity)
                from
                    lineitem
                where
                    l_partkey = ps_partkey
                    and l_suppkey = ps_suppkey
                    and l_shipdate >= date '1996-01-01'
                    and l_shipdate < date '1996-01-01' + interval '1' year
            )
    )
    and s_nationkey = n_nationkey
    and n_name = 'KENYA'
order by
    s_name;
- Q21
select
    s_name,
    count(*) as numwait
from
    supplier,
    lineitem l1,
    orders,
    nation
where
    s_suppkey = l1.l_suppkey
    and o_orderkey = l1.l_orderkey
    and o_orderstatus = 'F'
    and l1.l_receiptdate > l1.l_commitdate
    and exists (
        select
            *
        from
            lineitem l2
        where
            l2.l_orderkey = l1.l_orderkey
            and l2.l_suppkey <> l1.l_suppkey
    )
    and not exists (
        select
            *
        from
            lineitem l3
        where
            l3.l_orderkey = l1.l_orderkey
            and l3.l_suppkey <> l1.l_suppkey
            and l3.l_receiptdate > l3.l_commitdate
    )
    and s_nationkey = n_nationkey
    and n_name = 'PERU'
group by
    s_name
order by
    numwait desc,
    s_name
limit 100;
- Q22
select
    cntrycode,
    count(*) as numcust,
    sum(c_acctbal) as totacctbal
from
    (
        select
            substring(c_phone from 1 for 2) as cntrycode,
            c_acctbal
        from
            customer
        where
            substring(c_phone from 1 for 2) in ('24', '32', '17', '18', '12', '14', '22')
            and c_acctbal > (
                select
                    avg(c_acctbal)
                from
                    customer
                where
                    c_acctbal > 0.00
                    and substring(c_phone from 1 for 2) in ('24', '32', '17', '18', '12', '14', '22')
            )
            and not exists (
                select
                    *
                from
                    orders
                where
                    o_custkey = c_custkey
            )
    ) as custsale
group by
    cntrycode
order by
    cntrycode;