Hot rows are database rows updated frequently by many concurrent transactions. Each update locks the row until the transaction commits or rolls back, so requests queue up sequentially — table sharding and hardware upgrades provide little relief. In high-concurrency workloads like flash sales and inventory deduction, this lock contention causes latency to spike and throughput to plateau.
PolarDB addresses this at the kernel level by automatically grouping concurrent updates to the same row into batches and processing those batches through a pipeline. This approach eliminates redundant lock waits and B-tree traversals, increasing throughput by up to 50x under high concurrency.
When to use hot row optimization
Hot row optimization is most effective when all of the following conditions apply:
Multiple concurrent transactions repeatedly update the same row — for example, inventory deduction during a flash sale
The workload uses autocommit mode or can be adapted to use the COMMIT_ON_SUCCESS hint (see the sketch after this list)
The database runs on a multi-core CPU where pipeline parallelism provides benefit
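For example, a short explicit transaction can usually be collapsed into a single hinted statement, which makes it eligible for batching. A minimal sketch, assuming an illustrative inventory table with a stock column:
-- Original pattern: the row lock is held until the explicit COMMIT
BEGIN;
UPDATE inventory SET stock = stock - 1 WHERE id = 1;
COMMIT;

-- Adapted pattern: COMMIT_ON_SUCCESS commits automatically when the update succeeds
UPDATE /*+ COMMIT_ON_SUCCESS */ inventory SET stock = stock - 1 WHERE id = 1;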
The feature does not apply in the following scenarios:
The table containing the hot row is a partitioned table
A trigger is defined on the table
The statement queue feature is already applied to the hot row
Global binary logging is enabled but session-level binary logging is disabled (UPDATE statements in this configuration bypass hot row optimization)
How it works
PolarDB identifies UPDATE statements marked with autocommit or the COMMIT_ON_SUCCESS hint. Within a short time window, it collects matching updates and hashes them into buckets by primary key or unique key. Updates that hash to the same bucket form a Group Update.
Pipeline processing — from sequential to parallel
Two execution units alternate: while Unit A's group commits, Unit B collects the next batch of updates. This eliminates idle time between commits and makes full use of multi-core CPUs.
For example: 200 concurrent updates to inventory WHERE id = 1 arrive within a 100 µs window. Instead of 200 sequential lock-acquire → update → commit cycles, PolarDB batches them into two groups of 100 and pipelines the groups, so per-statement latency no longer grows linearly with the depth of the lock queue.
Instant lock acquisition for Followers
Within each group, the first update operation is the Leader — it reads the target row and acquires a row lock. All subsequent operations in the group are Followers. When a Follower requests the lock and detects that the Leader already holds it, it acquires the lock immediately without waiting. This eliminates lock-wait overhead and deadlock detection for the entire group.
Fewer B-tree traversals via Row Cache
Only the Leader traverses the B-tree index to locate the target row. The row is then stored in a Row Cache in memory. Followers in the same group retrieve the row directly from Row Cache, skipping the index traversal entirely.
Prerequisites
Before you enable hot row optimization, ensure that:
Your PolarDB cluster runs one of the following versions:
PolarDB for MySQL 5.6, minor engine version 20200601 or later
PolarDB for MySQL 5.7, minor engine version 5.7.1.0.17 or later
PolarDB for MySQL 8.0, minor engine version 8.0.1.1.10 or later
Binary logging is enabled on the cluster
The cluster parameter rds_ic_reduce_hint_enable is set to OFF:
PolarDB for MySQL 5.6 and 8.0: OFF by default — no action needed
PolarDB for MySQL 5.7: ON by default — change the parameter value to OFF before proceeding
All cluster parameters in the PolarDB console carry a loose_ prefix for MySQL configuration compatibility. To modify rds_ic_reduce_hint_enable, select loose_rds_ic_reduce_hint_enable in the console.
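You can sanity-check the last two prerequisites from a SQL session. A minimal check, assuming the variables are visible to your account:
SHOW VARIABLES LIKE 'log_bin';                    -- expected: ON (binary logging enabled)
SHOW VARIABLES LIKE 'rds_ic_reduce_hint_enable';  -- expected: OFF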
Enable hot row optimization
Step 1: Enable the feature parameter
In the PolarDB console, configure the cluster parameter hotspot (console name: loose_hotspot):
| Parameter | Description |
|---|---|
| hotspot | Enables or disables hot row optimization. Valid values: ON (enabled) / OFF (disabled, default) |
To modify hotspot in the console, select the parameter with the loose_ prefix: loose_hotspot.
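After the change is applied, you can confirm that it took effect from a SQL session:
SHOW VARIABLES LIKE 'hotspot';  -- expected: ON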
Step 2: Add hint syntax to your UPDATE statements
Add the following hints to UPDATE statements that target hot rows. The hinted statement must be the last SQL statement of the transaction, because COMMIT_ON_SUCCESS automatically commits on success.
| Hint | Required | Description |
|---|---|---|
| COMMIT_ON_SUCCESS | Required | Commits the transaction if the update succeeds |
| ROLLBACK_ON_FAIL | Optional | Rolls back the transaction if the update fails |
| TARGET_AFFECT_ROW(1) | Optional | Fails the update if the number of affected rows is not 1 |
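For instance, a flash-sale deduction can combine all three hints with a stock guard; the inventory table and stock column below are illustrative:
-- Fails (and rolls back) rather than oversell when stock is exhausted
UPDATE /*+ COMMIT_ON_SUCCESS ROLLBACK_ON_FAIL TARGET_AFFECT_ROW(1) */ inventory SET stock = stock - 1 WHERE id = 1 AND stock > 0;
If the stock is exhausted, the WHERE clause matches no rows, TARGET_AFFECT_ROW(1) marks the statement as failed, and ROLLBACK_ON_FAIL rolls the transaction back, so the stock value can never go negative.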
Example — increment a counter column on a specific row:
UPDATE /*+ COMMIT_ON_SUCCESS ROLLBACK_ON_FAIL TARGET_AFFECT_ROW(1) */ sbtest SET c = c + 1 WHERE id = 1;
Step 3: Verify the feature is active
Run the following query to confirm hot row optimization is processing Group Updates:
SHOW GLOBAL STATUS LIKE 'Group_update%';
A non-zero count in the result confirms the feature is active. If the counters remain zero under load, confirm that hotspot is set to ON and that the hinted UPDATE statements actually target the same row.
Tune parameters
View the current hot row parameter values:
SHOW VARIABLES LIKE 'hotspot%';
Sample output:
+------------------------------+-------+
| Variable_name | Value |
+------------------------------+-------+
| hotspot | OFF |
| hotspot_for_autocommit | OFF |
| hotspot_lock_type | OFF |
| hotspot_update_max_wait_time | 100 |
+------------------------------+-------+
The following parameters cannot be modified in the PolarDB console. To change them, go to Quota Center, find the quota PolarDB hotspot row parameter adjustment, and click Apply in the Actions column.
| Parameter | Description | Default |
|---|---|---|
| hotspot_for_autocommit | Applies hot row optimization to UPDATE statements in autocommit mode. Valid values: ON / OFF | OFF |
| hotspot_update_max_wait_time | Maximum time the Leader waits for Followers to join the group. Unit: microseconds (µs) | 100 |
| hotspot_lock_type | Enables the new row lock type during Group Updates. When ON, Followers that request a lock on the same hot row acquire it immediately without waiting. Valid values: ON / OFF | OFF |
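With hotspot_for_autocommit set to ON, plain autocommit updates to the hot row are batched without any hint. For example, the following statement, issued in the default autocommit mode, would then qualify:
UPDATE sbtest SET c = c + 1 WHERE id = 1;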
Performance testing
Test environment
Tool: Sysbench — an open-source, multi-threaded benchmarking tool supporting Lua-based test scripts
Test scenario: One hot row, 8-core CPU
Table definition:
CREATE TABLE sbtest (id INT UNSIGNED NOT NULL, c BIGINT UNSIGNED NOT NULL, PRIMARY KEY (id));
Test statement:
UPDATE /*+ COMMIT_ON_SUCCESS ROLLBACK_ON_FAIL TARGET_AFFECT_ROW(1) */ sbtest SET c = c + 1 WHERE id = 1;
Results summary
| Version | Peak improvement (high concurrency) |
|---|---|
| PolarDB for MySQL 5.6 | ~50x QPS improvement |
| PolarDB for MySQL 5.7 | ~35x QPS improvement |
| PolarDB for MySQL 8.0 | ~26x QPS improvement |
PolarDB for MySQL 5.6
QPS

| Concurrency | 1 | 8 | 16 | 32 | 64 | 128 | 256 | 512 | 1024 |
|---|---|---|---|---|---|---|---|---|---|
| Optimization disabled | 1365.31 | 1863.94 | 1866.6 | 1862.64 | 1867.32 | 1832.51 | 1838.31 | 1819.52 | 1833.2 |
| Optimization enabled | 1114.79 | 7000.19 | 12717.32 | 22029.48 | 43096.06 | 61349.7 | 83098.69 | 90860.94 | 87689 |
PolarDB for MySQL 5.7
QPS
| Concurrency | 1 | 8 | 16 | 32 | 64 | 128 | 256 | 512 | 1024 |
|---|---|---|---|---|---|---|---|---|---|
| Optimization disabled | 1348.49 | 1892.29 | 1889.77 | 1895.86 | 1875.2 | 1850.26 | 1843.62 | 1849.92 | 1835.68 |
| Optimization enabled | 1104.9 | 6886.89 | 12485.17 | 16003.23 | 16460.31 | 16548.86 | 27920.89 | 47893.96 | 66500.92 |
lat95th (95th percentile latency, ms)
| Concurrency | 1 | 8 | 16 | 32 | 64 | 128 | 256 | 512 | 1024 |
|---|---|---|---|---|---|---|---|---|---|
| Optimization disabled | 0.9 | 5.47 | 9.91 | 18.95 | 36.89 | 73.13 | 164.45 | 297.92 | 590.56 |
| Optimization enabled | 1.08 | 1.44 | 1.58 | 3.25 | 5.28 | 9.56 | 12.08 | 13.22 | 18.28 |
PolarDB for MySQL 8.0
QPS
| Concurrency | 1 | 8 | 16 | 32 | 64 | 128 | 256 | 512 | 1024 |
|---|---|---|---|---|---|---|---|---|---|
| Optimization disabled | 1559.14 | 2103.82 | 2116.4 | 2082.1 | 2079.74 | 2031.64 | 1993.09 | 1977.6 | 1983.61 |
| Optimization enabled | 1237.28 | 7443.04 | 12244.19 | 15529.52 | 23041.15 | 33931.18 | 53924.24 | 54598.6 | 50988.22 |
lat95th (95th percentile latency, ms)
| Concurrency | 1 | 8 | 16 | 32 | 64 | 128 | 256 | 512 | 1024 |
|---|---|---|---|---|---|---|---|---|---|
| Optimization disabled | 0.8 | 5 | 8.9 | 17.32 | 33.12 | 66.84 | 153.02 | 287.38 | 549.52 |
| Optimization enabled | 0.97 | 1.34 | 1.89 | 3.19 | 4.82 | 5.88 | 7.17 | 13.46 | 28.16 |
Appendix: Run the performance test
Prepare an ECS instance and install Sysbench.
Create a file named oltp_inventory.lua in the src/lua folder of the Sysbench source code:
#!/usr/bin/env sysbench
-- Benchmark script for hot row (inventory hotspot) performance testing

sysbench.cmdline.options = {
    inventory_hotspot = {"enable ali inventory hotspot", 'off'},
    tables = {"table number", 1},
    table_size = {"table size", 1},
    oltp_skip_trx = {'skip trx', true},
    hotspot_rows = {'hotspot row number', 1}
}

function cleanup()
    drv = sysbench.sql.driver()
    con = drv:connect()
    for i = 1, sysbench.opt.tables do
        print(string.format("drop table sbtest%d ...", i))
        drop_table(drv, con, i)
    end
end

function drop_table(drv, con, table_id)
    local query
    query = string.format("drop table if exists sbtest%d ", table_id)
    con:query(query)
end

function create_table(drv, con, table_id)
    local query
    query = string.format("CREATE TABLE sbtest%d (id INT UNSIGNED NOT NULL, c BIGINT UNSIGNED NOT NULL, PRIMARY KEY (id))", table_id)
    con:query(query)
    for i = 1, sysbench.opt.table_size do
        con:query("INSERT INTO sbtest" .. table_id .. "(id, c) values (" .. i .. ", 1)")
    end
end

function prepare()
    drv = sysbench.sql.driver()
    con = drv:connect()
    for i = 1, sysbench.opt.tables do
        print(string.format("Creating table sbtest%d ...", i))
        create_table(drv, con, i)
    end
end

function thread_init()
    drv = sysbench.sql.driver()
    con = drv:connect()
    begin_query = 'BEGIN'
    commit_query = 'COMMIT'
end

function event()
    local table_name
    table_name = "sbtest" .. sysbench.rand.uniform(1, sysbench.opt.tables)
    local min_line = math.min(sysbench.opt.table_size, sysbench.opt.hotspot_rows)
    local row_id = sysbench.rand.uniform(1, min_line)
    if not sysbench.opt.oltp_skip_trx then
        con:query(begin_query)
    end
    if (sysbench.opt.inventory_hotspot == "on") then
        -- Hinted statement: COMMIT_ON_SUCCESS commits automatically on success
        con:query("UPDATE COMMIT_ON_SUCCESS ROLLBACK_ON_FAIL TARGET_AFFECT_ROW 1 " .. table_name .. " SET c=c+1 WHERE id =" .. row_id)
    else
        con:query("UPDATE " .. table_name .. " SET c=c+1 WHERE id = " .. row_id)
    end
    if not sysbench.opt.oltp_skip_trx then
        -- Needed for the non-hint path; after a COMMIT_ON_SUCCESS statement it is a harmless no-op
        con:query(commit_query)
    end
end

function thread_done()
    con:disconnect()
end
Prepare the test data:
sysbench --hotspot_rows=1 --histogram=on --mysql-user=<user> --inventory_hotspot=on --mysql-host=<host> --threads=1 --report-interval=1 --mysql-password=<password> --tables=1 --table-size=1 --oltp_skip_trx=true --db-driver=mysql --percentile=95 --time=300 --mysql-port=<port> --events=0 --mysql-db=<database> oltp_inventory prepare
Run the test:
sysbench --hotspot_rows=1 --histogram=on --mysql-user=<user> --inventory_hotspot=on --mysql-host=<host> --threads=<threads> --report-interval=1 --mysql-password=<password> --tables=1 --table-size=1 --oltp_skip_trx=true --db-driver=mysql --percentile=95 --time=300 --mysql-port=<port> --events=0 --mysql-db=<database> oltp_inventory run
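To reproduce the baseline (optimization disabled) rows in the results above, repeat the run with the hint path switched off, keeping every other option identical:
sysbench --hotspot_rows=1 --histogram=on --mysql-user=<user> --inventory_hotspot=off --mysql-host=<host> --threads=<threads> --report-interval=1 --mysql-password=<password> --tables=1 --table-size=1 --oltp_skip_trx=true --db-driver=mysql --percentile=95 --time=300 --mysql-port=<port> --events=0 --mysql-db=<database> oltp_inventory run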
Input parameters
| Parameter | Description |
|---|---|
| mysql-host | Cluster endpoint |
| mysql-port | Port of the cluster endpoint |
| mysql-user | Username for the cluster |
| mysql-password | Password for the cluster user |
| mysql-db | Database name |
Output parameters
| Parameter | Displayed content | Description |
|---|---|---|
| tables | Number of data tables | Total number of tables used in the test |
| table_size | Number of rows per table | Number of records in each table |
| Data size | Table data size (MB or GB) | Size of the data in the table |
| threads | Number of concurrent threads | Number of configured threads |
| Thread status | Real-time thread status | Running status of threads during the test |