This report details the performance of the PolarDB-X fully encrypted database under various encryption scenarios. The results can help you evaluate and choose the right encryption strategy for your needs.
Test conclusion summary
| Test dimension | Key finding |
| --- | --- |
| Overall performance | Under different instance types and concurrency levels, the performance overhead of enabling 100% column encryption (encrypting all columns in a table) is stable at about 10%. |
| EncJDBC driver overhead | Even without encryption, replacing the standard Java Database Connectivity (JDBC) driver with the EncJDBC driver increases client CPU consumption by 30% to 50%. |
| Client resource consumption | After you enable 100% column encryption (encrypting all columns in a table), the client consumes about 55% to 150% more CPU than in the plaintext scenario to achieve optimal performance. |
| Impact of the encryption ratio | Performance overhead is positively correlated with the number of encrypted columns: the more columns you encrypt, the greater the performance overhead and the higher the client CPU consumption. Encrypt only sensitive columns as needed. |
Test design
Test model and metrics
Test model: We use OLTPBench, an industry-standard OLTP benchmark tool, with its TPC-C model.
Data volume: The test is based on 1,000 TPC-C warehouses, with 300 million rows in the core bmsql_order_line table and 100 million rows in bmsql_stock.
Performance metric: Transactions per second (TPS).
The results in this report are based on a TPC-C implementation and are not comparable to officially published TPC-C benchmark results, as this test does not meet all TPC-C benchmark requirements.
Test environment
The test client (ECS instance) and the PolarDB-X instance are deployed in the same VPC and region to minimize network latency.
| Configuration | ECS instance (Client) | PolarDB-X instance (Database) |
| --- | --- | --- |
| Region | Beijing Zone I/H | Beijing Zone I/H |
| Specifications | 2 × ecs.c7.4xlarge (16 cores, 32 GB) | 2 × (4 cores, 32 GB) |
| | 4 × ecs.c7.4xlarge (16 cores, 32 GB) | 4 × (8 cores, 64 GB) |
| Image/Version | Alibaba Cloud Linux 3.2104 LTS 64-bit | polardb-2.4.0_5.4.19-20240927_xcluster5.4.19-20241010 |
Test scenarios
To simulate varying levels of data sensitivity, the tests cover different percentages of encrypted columns:
20% column encryption: Encrypts key identifier fields, such as IDs and order numbers.
50% column encryption: In addition to the 20% scenario, encrypts business information, such as prices, dates, and quantities.
100% column encryption: Encrypts all fields in all tables.
Detailed test results
Overall performance (100% column encryption vs. plaintext)
With all columns encrypted, the performance (TPS) degradation is consistently around 10% compared to plaintext.
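The performance overhead columns in the following tables can be read as the relative TPS drop versus the plaintext baseline. The following minimal sketch illustrates that calculation, assuming overhead = (plaintext TPS − encrypted TPS) / plaintext TPS and using the 64-concurrency row of the first table below:

```java
public class OverheadCalc {
    // Relative TPS drop versus the plaintext baseline, as a percentage.
    static double overheadPercent(double plaintextTps, double encryptedTps) {
        return (plaintextTps - encryptedTps) / plaintextTps * 100.0;
    }

    public static void main(String[] args) {
        // 64-concurrency row of the 2 x (4-core, 32 GB) table:
        // (80,223 - 72,248) / 80,223 = 9.9%, reported as 10%.
        System.out.printf("%.1f%%%n", overheadPercent(80_223, 72_248));
    }
}
```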
Test results
2 × (4-core, 32 GB)
| Concurrency | Plaintext TPS | 100% encrypted TPS | Performance overhead |
| --- | --- | --- | --- |
| 64 | 80,223 | 72,248 | 10% |
| 128 | 99,019 | 88,469 | 11% |
| 256 | 105,309 | 94,756 | 10% |
| 512 | 104,313 | 95,962 | 8% |
| 1024 | 98,990 | 95,182 | 4% |
4 × (8-core, 64 GB)
| Concurrency | Plaintext TPS | 100% encrypted TPS | Performance overhead |
| --- | --- | --- | --- |
| 64 | 108,581 | 96,828 | 11% |
| 128 | 184,293 | 167,380 | 9% |
| 256 | 263,538 | 239,913 | 9% |
| 512 | 292,481 | 252,741 | 13% |
| 1024 | 284,561 | 252,432 | 11% |
Overhead of the EncJDBC driver (plaintext test)
Using the EncJDBC driver, even without encryption, introduces a CPU overhead on the client side.
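In this context, "using the EncJDBC driver" means that the application loads com.aliyun.encdb.mysql.jdbc.EncDriver and connects through a jdbc:mysql:encdb:// URL instead of the standard MySQL driver and URL. The following minimal sketch illustrates the swap; the host, credentials, and MEK are placeholders, and the URL format follows the props.mysql example in the appendix:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class EncJdbcDemo {
    public static void main(String[] args) throws Exception {
        // A standard JDBC connection would use: jdbc:mysql://{HOST}:{PORT}/tpcc_1000
        // EncJDBC uses the encdb URL and carries the master encryption key (MEK)
        // and encryption algorithm as connection parameters (placeholders below).
        String url = "jdbc:mysql:encdb://{HOST}:{PORT}/tpcc_1000_enc"
                + "?/MEK={MEK}&ENC_ALGO=SM4_128_GCM"
                + "&useSSL=false&useServerPrepStmts=false";

        // Load the EncJDBC driver class instead of the standard MySQL driver.
        Class.forName("com.aliyun.encdb.mysql.jdbc.EncDriver");

        try (Connection conn = DriverManager.getConnection(url, "{USER}", "{PASSWORD}");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT 1")) {
            while (rs.next()) {
                System.out.println(rs.getInt(1));
            }
        }
    }
}
```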
Test conditions
The client instance type (ECS instance) is ecs.c7.4xlarge (16-core, 32 GB), and the concurrency is 1024.
Test results
| Specifications | CPU utilization (Standard JDBC) | CPU utilization (EncJDBC) | CPU change | Memory usage (Standard JDBC, MB) | Memory usage (EncJDBC, MB) | Memory change |
| --- | --- | --- | --- | --- | --- | --- |
| 2 × (4 cores, 32 GB) | 21.37% | 28.00% | +31% | 1077 | 1001 | -7.06% |
| 4 × (8 cores, 64 GB) | 47.09% | 69.90% | +48% | 1048 | 1024 | -2.29% |
Client resource consumption after enabling encryption
Enabling 100% column encryption significantly increases client-side CPU usage. Ensure your application has sufficient CPU resources to handle this overhead.
Test conditions
The concurrency is 1024.
Test results
| Specifications | CPU utilization (before encryption) | CPU utilization (100% encryption) | CPU increase |
| --- | --- | --- | --- |
| 2 × (4-core, 32 GB) | 13.01% | 33.45% | +157% |
| 4 × (4-core, 32 GB) | 20.60% | 32.07% | +56% |
| 2 × (8-core, 64 GB) | 19.93% | 36.77% | +85% |
| 4 × (8-core, 64 GB) | 17.73% | 43.59% | +146% |
Conclusion
Enabling 100% column encryption increases client-side CPU consumption by 56% to 157% compared to the plaintext scenario, an increase of more than 1.5 times in the worst case.
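This report does not describe how client CPU utilization was collected. If you want to observe your own application's CPU usage while encryption is enabled, the following sketch shows one possible approach that polls the JVM's process CPU load; it is illustrative only and not the measurement method used in this test:

```java
import com.sun.management.OperatingSystemMXBean;
import java.lang.management.ManagementFactory;

public class ClientCpuSampler {
    public static void main(String[] args) throws InterruptedException {
        // Available on HotSpot-based JVMs; other JVMs may not expose this bean.
        OperatingSystemMXBean os =
                (OperatingSystemMXBean) ManagementFactory.getOperatingSystemMXBean();

        // Sample the CPU load of this JVM process once per second for one minute.
        for (int i = 0; i < 60; i++) {
            double processLoad = os.getProcessCpuLoad(); // 0.0-1.0, negative if unavailable
            System.out.printf("process CPU: %.1f%%%n", processLoad * 100);
            Thread.sleep(1000);
        }
    }
}
```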
Impact of different encryption ratios on performance and resources
The number of encrypted columns directly impacts performance and client CPU consumption. The following tables show how performance overhead and client CPU usage change as the percentage of encrypted columns increases.
Test conditions
The concurrency is 1024.
Test results
2 × (4-core, 32 GB)
| Encryption ratio | Client CPU utilization | TPS | Performance overhead |
| --- | --- | --- | --- |
| 20% | 20.01% | 96,792 | 2% |
| 50% | 22.23% | 96,639 | 2% |
| 100% | 33.45% | 95,182 | 4% |
4 × (8-core, 64 GB)
| Encryption ratio | Client CPU utilization | TPS | Performance overhead |
| --- | --- | --- | --- |
| 20% | 22.50% | 276,640 | 3% |
| 50% | 28.80% | 276,640 | 8% |
| 100% | 43.59% | 276,640 | 11% |
Conclusion
To achieve the best balance between security and performance, we recommend encrypting only the necessary sensitive data columns.
Appendix: Test steps
Step 1: Prepare the ECS client
Prepare an ECS instance. This ECS instance is used for subsequent operations, such as importing data and running tests.
Recommended architectures and operating systems for the ECS instance:
x86:
CentOS 7.9 or later.
Alibaba Cloud Linux 3.
Ubuntu 18.04 or later.
ARM: CentOS 8.0 or later.
Deploy the ECS instance used for testing in a VPC. Take note of the name and ID of this VPC because all subsequent instances will be deployed in it.
Make sure the ECS instance has a public IP address so that you can log on to it and operate the testing tool.
Step 2: Prepare the PolarDB-X instance
Note: When you create the PolarDB-X instance, deploy it in the same VPC as the ECS instance.
Add the internal IP address of the ECS instance to the whitelist of the PolarDB-X instance.
In the instance, create a database for the test. This example uses `tpcc_1000`.

```sql
CREATE DATABASE tpcc_1000 MODE = 'auto';
```
Step 3: Prepare the benchmark data
Prepare the testing tool
On the ECS instance, download and decompress the testing tool package benchmarksql.tar.gz.

```shell
tar xzvf benchmarksql.tar.gz
```

Download the always-confidential driver package aliyun-encdb-mysql-jdbc-1.0.9-2-20240910.094626-1.jar and move it to the `benchmarksql/lib` folder.
Configure testing parameters
Go to the `benchmarksql/run` folder and edit the `props.mysql` configuration file.

```shell
cd benchmarksql/run
vi props.mysql
```

Modify the configuration to match your environment. The following is a configuration example and a description of key parameters:

```properties
db=mysql
driver=com.aliyun.encdb.mysql.jdbc.EncDriver
conn=jdbc:mysql:encdb://{HOST}:{PORT}/tpcc_1000_enc?/MEK={MEK}&ENC_ALGO={ENC_ALGO}&useSSL=false&useServerPrepStmts=false&useConfigs=maxPerformance&rewriteBatchedStatements=true
user={USER}
password={PASSWORD}

warehouses=1000
loadWorkers=100

terminals=128
//To run specified transactions per terminal- runMins must equal zero
runTxnsPerTerminal=0
//To run for specified minutes- runTxnsPerTerminal must equal zero
runMins=5
//Number of total transactions per minute
limitTxnsPerMin=0

//Set to true to run in 4.x compatible mode. Set to false to use the
//entire configured database evenly.
terminalWarehouseFixed=true

//The following five values must add up to 100
//The default percentages of 45, 43, 4, 4 & 4 match the TPC-C spec
newOrderWeight=45
paymentWeight=43
orderStatusWeight=4
deliveryWeight=4
stockLevelWeight=4

// Directory name to create for collecting detailed result data.
// Comment this out to suppress.
resultDirectory=my_result_%tY-%tm-%td_%tH%tM%tS
// osCollectorScript=./misc/os_collector_linux.py
// osCollectorInterval=1
// osCollectorSSHAddr=user@dbhost
// osCollectorDevices=net_eth0 blk_sda
```

Parameter description

`conn`: The database connection string. Replace `{HOST}` and `{PORT}`.
`{MEK}`: The master encryption key (MEK), a 32-character hexadecimal string. For one way to generate such a value, see the sketch after this list.
`{ENC_ALGO}`: The symmetric encryption algorithm, such as `SM4_128_GCM`.
`user`: The database username.
`password`: The database password.
`warehouses`: The number of TPC-C warehouses, which determines the total data volume.
`loadWorkers`: The number of concurrent threads for data import.
`terminals`: The number of concurrent threads for testing.
`runMins`: The duration of the test in minutes.
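This report does not specify how the MEK is generated. Assuming any 32-character hexadecimal string is acceptable, as described in the parameter list above, the following hypothetical helper shows one way to produce a random value to substitute for {MEK}. Store the generated value securely.

```java
import java.security.SecureRandom;

public class MekGenerator {
    public static void main(String[] args) {
        byte[] bytes = new byte[16];              // 16 random bytes -> 32 hex characters
        new SecureRandom().nextBytes(bytes);

        StringBuilder hex = new StringBuilder();
        for (byte b : bytes) {
            hex.append(String.format("%02x", b & 0xff));
        }
        // Use the printed value for {MEK} in props.mysql.
        System.out.println(hex);
    }
}
```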
Import data

In the `benchmarksql/run` folder, execute the following commands to start importing data.

```shell
cd benchmarksql/run/sql.common
cp tableCreates.sql.auto tableCreates.sql
cd ..
nohup ./runDatabaseBuild.sh props.mysql &
```

Verify data integrity

After the data is imported, connect to the PolarDB-X instance from the command line and execute the following SQL statements to verify data integrity. If all queries return an empty result set, the data was imported successfully and is complete.

```sql
SELECT a.*
FROM (SELECT w_id, w_ytd FROM bmsql_warehouse) a
LEFT JOIN (SELECT d_w_id, sum(d_ytd) AS d_ytd_sum FROM bmsql_district GROUP BY d_w_id) b
  ON a.w_id = b.d_w_id AND a.w_ytd = b.d_ytd_sum
WHERE b.d_w_id IS NULL;

SELECT a.*
FROM (SELECT d_w_id, d_id, D_NEXT_O_ID - 1 AS d_n_o_id FROM bmsql_district) a
LEFT JOIN (SELECT o_w_id, o_d_id, max(o_id) AS o_id_max FROM bmsql_oorder GROUP BY o_w_id, o_d_id) b
  ON a.d_w_id = b.o_w_id AND a.d_id = b.o_d_id AND a.d_n_o_id = b.o_id_max
WHERE b.o_w_id IS NULL;

SELECT a.*
FROM (SELECT d_w_id, d_id, D_NEXT_O_ID - 1 AS d_n_o_id FROM bmsql_district) a
LEFT JOIN (SELECT no_w_id, no_d_id, max(no_o_id) AS no_id_max FROM bmsql_new_order GROUP BY no_w_id, no_d_id) b
  ON a.d_w_id = b.no_w_id AND a.d_id = b.no_d_id AND a.d_n_o_id = b.no_id_max
WHERE b.no_id_max IS NULL;

SELECT *
FROM (SELECT (count(no_o_id) - (max(no_o_id) - min(no_o_id) + 1)) AS diff FROM bmsql_new_order GROUP BY no_w_id, no_d_id) a
WHERE diff != 0;

SELECT a.*
FROM (SELECT o_w_id, o_d_id, sum(o_ol_cnt) AS o_ol_cnt_cnt FROM bmsql_oorder GROUP BY o_w_id, o_d_id) a
LEFT JOIN (SELECT ol_w_id, ol_d_id, count(ol_o_id) AS ol_o_id_cnt FROM bmsql_order_line GROUP BY ol_w_id, ol_d_id) b
  ON a.o_w_id = b.ol_w_id AND a.o_d_id = b.ol_d_id AND a.o_ol_cnt_cnt = b.ol_o_id_cnt
WHERE b.ol_w_id IS NULL;

SELECT a.*
FROM (SELECT d_w_id, sum(d_ytd) AS d_ytd_sum FROM bmsql_district GROUP BY d_w_id) a
LEFT JOIN (SELECT w_id, w_ytd FROM bmsql_warehouse) b
  ON a.d_w_id = b.w_id AND a.d_ytd_sum = b.w_ytd
WHERE b.w_id IS NULL;
```
Step 4: Execute the test
In the benchmarksql/run folder, execute the following command to run the TPC-C test. The terminal displays the live transactions per minute (tpmC) and outputs the final average value after the test is complete.
```shell
cd benchmarksql/run
./runBenchmark.sh props.mysql
```