
PolarDB: Performance test report

Last Updated: Mar 28, 2026

This report quantifies the performance impact of enabling column encryption in PolarDB-X always-confidential databases across different instance sizes, concurrency levels, and encryption ratios. Use the results to choose an encryption strategy that fits your throughput and resource requirements.

The results in this report are based on a TPC-C implementation and are not comparable to officially published TPC-C benchmark results. This test does not meet all TPC-C benchmark requirements.

Key findings

Test dimension | Finding
Overall performance overhead | 100% column encryption reduces transactions per second (TPS) by approximately 10% across different instance sizes and concurrency levels.
EncJDBC driver overhead | Switching from the standard Java Database Connectivity (JDBC) driver to EncJDBC increases client CPU usage by 30%–50%, even without encryption enabled.
Client CPU consumption | Enabling 100% column encryption increases client CPU usage by 56%–157% compared to plaintext.
Encryption ratio impact | Performance overhead and CPU usage increase as more columns are encrypted. Encrypt only sensitive columns to minimize overhead.

Test design

Test model and metrics

  • Benchmark tool: OLTPBench, using the TPC-C workload model

  • Data volume: 1,000 TPC-C warehouses; 300 million rows in the bmsql_order_line table, 100 million rows in bmsql_stock

  • Performance metric: Transactions per second (TPS)

Test environment

The test client (Elastic Compute Service (ECS) instance) and the PolarDB-X instance run in the same virtual private cloud (VPC) and region to minimize network latency.

Configuration | ECS instance (client) | PolarDB-X instance (database)
Region | Beijing Zone I/H | Beijing Zone I/H
Specifications | 2 × ecs.c7.4xlarge (16 cores, 32 GB) | 2 × 4 cores, 32 GB
 | 4 × ecs.c7.4xlarge (16 cores, 32 GB) | 4 × 8 cores, 64 GB
Image/Version | Alibaba Cloud Linux 3.2104 LTS 64-bit | polardb-2.4.0_5.4.19-20240927_xcluster5.4.19-20241010

This test configuration is designed for benchmarking only. It is not a production deployment recommendation.

Test scenarios

The tests cover three encryption ratios to simulate different data sensitivity requirements:

  • 20% column encryption: Encrypts key identifier fields, such as IDs and order numbers.

  • 50% column encryption: In addition to the 20% scenario, encrypts business data such as prices, dates, and quantities.

  • 100% column encryption: Encrypts all fields in all tables.

Detailed test results

Overall performance (100% column encryption vs. plaintext)

With all columns encrypted, TPS degrades by approximately 10% compared to plaintext across all tested configurations.

2 × (4-core, 32 GB)

Concurrency | Plaintext TPS | 100% encrypted TPS | Performance overhead
64 | 80,223 | 72,248 | 10%
128 | 99,019 | 88,469 | 11%
256 | 105,309 | 94,756 | 10%
512 | 104,313 | 95,962 | 8%
1,024 | 98,990 | 95,182 | 4%

4 × (8-core, 64 GB)

Concurrency | Plaintext TPS | 100% encrypted TPS | Performance overhead
64 | 108,581 | 96,828 | 11%
128 | 184,293 | 167,380 | 9%
256 | 263,538 | 239,913 | 9%
512 | 292,481 | 252,741 | 13%
1,024 | 284,561 | 252,432 | 11%

Observations

At high concurrency (1,024 threads), overhead on the 2 × (4-core, 32 GB) instance drops to 4%—lower than at lower concurrency levels. At very high concurrency, the bottleneck shifts from encryption computation to contention and scheduling within the database and client. As a result, encryption overhead becomes a smaller fraction of total latency, and the measured TPS gap narrows.
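The overhead column follows the usual formula: overhead = (plaintext TPS − encrypted TPS) / plaintext TPS. A short Python check over the 2 × (4-core, 32 GB) numbers above reproduces both the rounded percentages and the narrowing at high concurrency:

```python
# Recompute the encryption overhead column from the 2 x (4-core, 32 GB) table.
# overhead = (plaintext_tps - encrypted_tps) / plaintext_tps
results = {  # concurrency: (plaintext TPS, 100% encrypted TPS)
    64:   (80_223, 72_248),
    128:  (99_019, 88_469),
    256:  (105_309, 94_756),
    512:  (104_313, 95_962),
    1024: (98_990, 95_182),
}

for concurrency, (plain, enc) in results.items():
    overhead = (plain - enc) / plain * 100
    print(f"{concurrency:>4} threads: {overhead:.1f}% overhead")
```

The computed values round to the 10%, 11%, 10%, 8%, and 4% shown in the table, with the 1,024-thread run clearly the smallest gap.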

EncJDBC driver overhead (plaintext)

Switching to the EncJDBC driver introduces additional client CPU consumption even when no encryption is active.

Test conditions: Client instance type is ecs.c7.4xlarge (16 cores, 32 GB); concurrency is 1,024.

Specifications | Standard JDBC CPU | EncJDBC CPU | CPU increase | Standard JDBC memory (MB) | EncJDBC memory (MB) | Memory change
2 × 4 cores, 32 GB | 21.37% | 28.00% | +31% | 1,077 | 1,001 | −7.06%
4 × 8 cores, 64 GB | 47.09% | 69.90% | +48% | 1,048 | 1,024 | −2.29%

Plan for this baseline CPU increase when sizing your application instances, even before enabling encryption.

Client resource consumption with 100% encryption

Enabling 100% column encryption significantly increases client-side CPU usage. Make sure your application has enough CPU headroom to absorb this increase.

Test conditions: Concurrency is 1,024.

Specifications | CPU before encryption | CPU with 100% encryption | CPU increase
2 × (4-core, 32 GB) | 13.01% | 33.45% | +157%
4 × (4-core, 32 GB) | 20.60% | 32.07% | +56%
2 × (8-core, 64 GB) | 19.93% | 36.77% | +85%
4 × (8-core, 64 GB) | 17.73% | 43.59% | +146%

Full encryption increases client CPU consumption by 56%–157% compared to plaintext; at the peak, the increase exceeds 1.5 times the baseline usage.
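These increase factors can be applied directly when sizing application instances. A rough headroom check as a sketch; the 80% CPU ceiling is an assumed safety margin for illustration, not a product limit:

```python
# Project client CPU after enabling 100% column encryption and check headroom.
# The 80% ceiling is an assumed safety margin for this sketch.
CEILING = 80.0

def projected_cpu(baseline_pct: float, increase_pct: float) -> float:
    """CPU usage after applying a relative increase, e.g. +146%."""
    return baseline_pct * (1 + increase_pct / 100)

# Example: the 4 x (8-core, 64 GB) row above: 17.73% baseline, +146% increase.
projected = projected_cpu(17.73, 146)
print(f"Projected client CPU: {projected:.1f}%")
print("headroom OK" if projected < CEILING else "scale out the client")
```

The projection lands near the 43.59% measured in the table, so this simple multiplicative model is a reasonable first-pass sizing tool for these results.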

Impact of encryption ratio on performance and resources

The number of encrypted columns directly affects both TPS and client CPU usage.

Test conditions: Concurrency is 1,024.

2 × (4-core, 32 GB)

Encryption ratio | Client CPU | TPS | Performance overhead
20% | 20.01% | 96,792 | 2%
50% | 22.23% | 96,639 | 2%
100% | 33.45% | 95,182 | 4%

4 × (8-core, 64 GB)

Encryption ratio | Client CPU | TPS | Performance overhead
20% | 22.50% | 276,640 | 3%
50% | 28.80% | 276,640 | 8%
100% | 43.59% | 276,640 | 11%

Encrypt only the columns that contain sensitive data. Encrypting 20%–50% of columns delivers strong data protection at 2%–8% TPS overhead, which is significantly lower than the 4%–11% overhead of 100% encryption.

Appendix: test steps

Applicable configurations

This appendix describes how to reproduce the benchmark results in this report. The test uses the following configurations:

Instance size | Warehouses | Data volume | Concurrency
2 × (4-core, 32 GB) | 1,000 | ~300 million rows (order_line) | 64–1,024
4 × (8-core, 64 GB) | 1,000 | ~300 million rows (order_line) | 64–1,024

If your instance size or workload differs significantly, the absolute TPS values will not match, but the relative overhead percentages should remain representative.

Step 1: Prepare the ECS client

Prepare an ECS instance. This instance is used for data import and running the benchmark.

Supported architectures and operating systems:

  • x86: CentOS 7.9 or later, Alibaba Cloud Linux 3, Ubuntu 18.04 or later

  • ARM: CentOS 8.0 or later

Deploy the ECS instance in a VPC. Note the VPC name and ID; all subsequent resources must use the same VPC.

Assign a public IP address to the ECS instance so that it can download the benchmark tool and driver.

Step 2: Prepare the PolarDB-X instance

  1. Create a PolarDB-X instance. Deploy it in the same VPC as the ECS instance.

  2. Add the internal IP address of the ECS instance to the whitelist of the PolarDB-X instance.

  3. Create the test database:

    CREATE DATABASE tpcc_1000_enc MODE = 'auto';

Step 3: Prepare the benchmark data

  1. Download and extract the benchmark tool. On the ECS instance, download the benchmarksql.tar.gz package and extract it:

    tar xzvf benchmarksql.tar.gz
  2. Download the EncJDBC driver. Download aliyun-encdb-mysql-jdbc-1.0.9-2-20240910.094626-1.jar and move it to the benchmarksql/lib folder.

  3. Configure the test parameters. Go to the benchmarksql/run folder and edit props.mysql:

    cd benchmarksql/run
    vi props.mysql

    Replace the following placeholders with your values:

    Placeholder | Description | Example
    {HOST} | PolarDB-X instance endpoint | pc-xxx.polardbx.rds.aliyuncs.com
    {PORT} | Connection port | 3306
    {MEK} | Master encryption key (MEK): a 32-character hexadecimal string | a1b2c3d4e5f6...
    {ENC_ALGO} | Symmetric encryption algorithm | SM4_128_GCM
    {USER} | Database username | tpcc_user
    {PASSWORD} | Database password |

    Other key parameters:

    Parameter | Description
    warehouses | Number of TPC-C warehouses, which determines data volume
    loadWorkers | Number of concurrent threads for data import
    terminals | Number of concurrent threads for the test run
    runMins | Test duration in minutes

    Update the configuration to match your environment:

    db=mysql
    driver=com.aliyun.encdb.mysql.jdbc.EncDriver
    conn=jdbc:mysql:encdb://{HOST}:{PORT}/tpcc_1000_enc?/MEK={MEK}&ENC_ALGO={ENC_ALGO}&useSSL=false&useServerPrepStmts=false&useConfigs=maxPerformance&rewriteBatchedStatements=true
    user={USER}
    password={PASSWORD}
    
    warehouses=1000
    loadWorkers=100
    
    terminals=128
    //To run specified transactions per terminal- runMins must equal zero
    runTxnsPerTerminal=0
    //To run for specified minutes- runTxnsPerTerminal must equal zero
    runMins=5
    //Number of total transactions per minute
    limitTxnsPerMin=0
    
    //Set to true to run in 4.x compatible mode. Set to false to use the
    //entire configured database evenly.
    terminalWarehouseFixed=true
    
    //The following five values must add up to 100
    //The default percentages of 45, 43, 4, 4 & 4 match the TPC-C spec
    newOrderWeight=45
    paymentWeight=43
    orderStatusWeight=4
    deliveryWeight=4
    stockLevelWeight=4
    
    // Directory name to create for collecting detailed result data.
    // Comment this out to suppress.
    resultDirectory=my_result_%tY-%tm-%td_%tH%tM%tS
    
    // osCollectorScript=./misc/os_collector_linux.py
    // osCollectorInterval=1
    // osCollectorSSHAddr=user@dbhost
    // osCollectorDevices=net_eth0 blk_sda


  4. Import the test data.

    cd benchmarksql/run/sql.common
    cp tableCreates.sql.auto tableCreates.sql
    cd ..
    nohup ./runDatabaseBuild.sh props.mysql &
  5. Verify data integrity. After the import completes, connect to the PolarDB-X instance and run the following SQL queries. If all queries return an empty result set, the data was imported successfully.

    SELECT a.* FROM (SELECT w_id, w_ytd FROM bmsql_warehouse) a LEFT JOIN (SELECT d_w_id, sum(d_ytd) AS d_ytd_sum FROM bmsql_district GROUP BY d_w_id) b ON a.w_id = b.d_w_id AND a.w_ytd = b.d_ytd_sum WHERE b.d_w_id IS NULL;
    
    SELECT a.* FROM (SELECT d_w_id, d_id, D_NEXT_O_ID - 1 AS d_n_o_id FROM bmsql_district) a LEFT JOIN (SELECT o_w_id, o_d_id, max(o_id) AS o_id_max FROM bmsql_oorder GROUP BY  o_w_id, o_d_id) b ON a.d_w_id = b.o_w_id AND a.d_id = b.o_d_id AND a.d_n_o_id = b.o_id_max WHERE b.o_w_id IS NULL;
    
    SELECT a.* FROM (SELECT d_w_id, d_id, D_NEXT_O_ID - 1 AS d_n_o_id FROM bmsql_district) a LEFT JOIN (SELECT no_w_id, no_d_id, max(no_o_id) AS no_id_max FROM bmsql_new_order GROUP BY no_w_id, no_d_id) b ON a.d_w_id = b.no_w_id AND a.d_id = b.no_d_id AND a.d_n_o_id = b.no_id_max WHERE b.no_id_max IS NULL;
    
    SELECT * FROM (SELECT (count(no_o_id)-(max(no_o_id)-min(no_o_id)+1)) AS diff FROM bmsql_new_order GROUP BY no_w_id, no_d_id) a WHERE diff != 0;
    
    SELECT a.* FROM (SELECT o_w_id, o_d_id, sum(o_ol_cnt) AS o_ol_cnt_cnt FROM bmsql_oorder  GROUP BY o_w_id, o_d_id) a LEFT JOIN (SELECT ol_w_id, ol_d_id, count(ol_o_id) AS ol_o_id_cnt FROM bmsql_order_line GROUP BY ol_w_id, ol_d_id) b ON a.o_w_id = b.ol_w_id AND a.o_d_id = b.ol_d_id AND a.o_ol_cnt_cnt = b.ol_o_id_cnt WHERE b.ol_w_id IS NULL;
    
    SELECT a.* FROM (SELECT d_w_id, sum(d_ytd) AS d_ytd_sum FROM bmsql_district GROUP BY d_w_id) a LEFT JOIN (SELECT w_id, w_ytd FROM bmsql_warehouse) b ON a.d_w_id = b.w_id AND a.d_ytd_sum = b.w_ytd WHERE b.w_id IS NULL;
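Before launching a long run, the constraints stated in the props.mysql comments (exactly one of runMins and runTxnsPerTerminal may be nonzero, and the five transaction weights must sum to 100) can be verified with a short script. This is a minimal sketch: it assumes the simple key=value format shown above and skips the //-style comment lines.

```python
# Sanity-check a BenchmarkSQL properties file before a long benchmark run.
def load_props(path: str) -> dict:
    """Parse key=value lines, skipping blanks and //, #, ! comment lines."""
    props = {}
    with open(path, encoding="utf-8") as f:
        for raw in f:
            line = raw.strip()
            if not line or line.startswith(("//", "#", "!")):
                continue
            if "=" in line:
                key, _, value = line.partition("=")
                props[key.strip()] = value.strip()
    return props

def validate(props: dict) -> list:
    """Return a list of constraint violations (empty list means OK)."""
    errors = []
    # Exactly one of runMins / runTxnsPerTerminal must be nonzero.
    mins_zero = int(props.get("runMins", 0)) == 0
    txns_zero = int(props.get("runTxnsPerTerminal", 0)) == 0
    if mins_zero == txns_zero:
        errors.append("exactly one of runMins and runTxnsPerTerminal must be nonzero")
    # The five transaction weights must add up to 100.
    weights = ("newOrderWeight", "paymentWeight", "orderStatusWeight",
               "deliveryWeight", "stockLevelWeight")
    if sum(int(props.get(w, 0)) for w in weights) != 100:
        errors.append("transaction weights must sum to 100")
    return errors

# Usage: for problem in validate(load_props("props.mysql")): print("WARNING:", problem)
```

For the configuration shown above (runMins=5, runTxnsPerTerminal=0, weights 45/43/4/4/4), validate returns an empty list.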

Step 4: Run the test

Run the TPC-C benchmark:

cd benchmarksql/run
./runBenchmark.sh props.mysql

During the test, the terminal shows live throughput in transactions per minute (tpmTOTAL). After the 5-minute test completes, the tool prints a summary with the measured tpmC (new-order transactions per minute) and tpmTotal values. To convert a per-minute value to TPS, divide by 60.
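The conversion is a single division; for example, using the illustrative tpmTotal value from the sample output below:

```python
# Convert a per-minute throughput figure (tpmC or tpmTotal) to TPS.
def tpm_to_tps(tpm: float) -> float:
    return tpm / 60

# A summary line reporting tpmTotal = 3,574,200 corresponds to 59,570 TPS.
print(f"{tpm_to_tps(3_574_200):,.0f} TPS")
```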

The output looks similar to the following. Live progress lines scroll continuously during the run; the summary line appears after the test exits.

Running for 300 seconds. Progress is logged every 60 seconds.

Term-00, Running Average tpmTOTAL: 5,400,000.00    Current tpmTOTAL: 5,400,000    Memory Usage: 1001MB / 1877MB
...
Term-00, Measured tpmC (NewOrders) = 1,609,920.00
Term-00, Measured tpmTotal = 3,574,200.00
Term-00, Session Start     = yyyy-MM-dd HH:mm:ss
Term-00, Session End       = yyyy-MM-dd HH:mm:ss
Term-00, Transaction Count = nnnnnnn