
PolarDB:Hot row optimization

Last Updated: Mar 28, 2026

Hot rows are database rows updated frequently by many concurrent transactions. Each update locks the row until the transaction commits or rolls back, so requests queue up sequentially — table sharding and hardware upgrades provide little relief. In high-concurrency workloads like flash sales and inventory deduction, this lock contention causes latency to spike and throughput to plateau.

PolarDB addresses this at the kernel level by automatically grouping concurrent updates to the same row into batches and processing those batches through a pipeline. This approach eliminates redundant lock waits and B-tree traversals, increasing throughput by up to 50x under high concurrency.

When to use hot row optimization

Hot row optimization is most effective when all of the following conditions apply:

  • Multiple concurrent transactions repeatedly update the same row — for example, inventory deduction during a flash sale

  • The workload uses autocommit mode or can be adapted to use the COMMIT_ON_SUCCESS hint

  • The database runs on a multi-core CPU where pipeline parallelism provides benefit

The feature does not apply in the following scenarios:

  • The table containing the hot row is a partitioned table

  • A trigger is defined on the table

  • The statement queue is used for the hot row

  • Global binary logging is enabled but session-level binary logging is disabled. UPDATE statements in this configuration bypass hot row optimization

How it works

PolarDB identifies UPDATE statements marked with autocommit or the COMMIT_ON_SUCCESS hint. Within a short time window, it collects matching updates and hashes them into buckets by primary key or unique key. Updates that hash to the same bucket form a Group Update.
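The grouping step can be sketched as follows. This is a toy illustration only: `Update` and `group_updates` are hypothetical names, not PolarDB internals, and the real kernel performs this inside the storage engine.

```python
# Illustrative sketch: concurrent UPDATEs collected in a short time window
# are hashed into buckets by primary (or unique) key; the updates that land
# in the same bucket form one Group Update.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Update:
    pk: int      # primary or unique key of the target row
    delta: int   # e.g. "SET c = c + delta"

def group_updates(window, num_buckets=64):
    """Hash each update by its key; same bucket => same Group Update."""
    buckets = defaultdict(list)
    for u in window:
        buckets[hash(u.pk) % num_buckets].append(u)
    return list(buckets.values())

# Five updates to row 1 and two updates to row 9 arrive in one window:
window = [Update(1, 1)] * 5 + [Update(9, 1)] * 2
groups = group_updates(window)   # two groups: one of 5, one of 2
```

Updates to different hot rows fall into different groups, so each group holds contention for exactly one row.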


Pipeline processing — from sequential to parallel

Two execution units alternate: while Unit A's group commits, Unit B collects the next batch of updates. This eliminates idle time between commits and makes full use of multi-core CPUs.

For example: 200 concurrent updates to inventory WHERE id = 1 arrive within a 100 µs window. Instead of 200 sequential lock-acquire → update → commit cycles, PolarDB batches them into two groups of 100 and pipelines the groups, cutting the number of commit cycles on the critical path from 200 to 2.
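A toy latency model makes the pipelining benefit concrete. The timings below are assumed round numbers, not measurements; the point is that with two alternating units, collecting batch i+1 overlaps committing batch i, so only the group commits remain on the critical path.

```python
# Hypothetical timing model of the two-unit pipeline described above.
def sequential_time(n_updates, t_cycle):
    """n lock-acquire -> update -> commit cycles, one after another."""
    return n_updates * t_cycle

def pipelined_time(n_batches, t_collect, t_commit):
    """After the first collection, each commit overlaps the next collection,
    so each pipeline stage costs max(t_collect, t_commit)."""
    return t_collect + n_batches * max(t_collect, t_commit)

# 200 updates, 100 per group; assumed 250 us per sequential cycle,
# 100 us to collect a group, 150 us to commit it:
seq = sequential_time(200, t_cycle=250)              # 50000 us
pipe = pipelined_time(2, t_collect=100, t_commit=150)  # 400 us
```

Even with generous per-group overhead, batching plus pipelining shrinks the critical path by orders of magnitude.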

Instant lock acquisition for Followers

Within each group, the first update operation is the Leader — it reads the target row and acquires a row lock. All subsequent operations in the group are Followers. When a Follower requests the lock and detects that the Leader already holds it, it acquires the lock immediately without waiting. This eliminates lock-wait overhead and deadlock detection for the entire group.
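The Leader/Follower rule can be modeled with a few lines of Python. This is a minimal sketch of the granting logic, not PolarDB source; the class and group identifiers are invented for illustration.

```python
# Toy model of group-aware row locking: the first update in a group becomes
# the Leader and takes the row lock; later updates from the same group are
# granted the lock instantly, while other groups wait as usual.
class GroupRowLock:
    def __init__(self):
        self.holder_group = None

    def acquire(self, group_id):
        if self.holder_group is None:
            self.holder_group = group_id
            return "leader"        # acquired the row lock normally
        if self.holder_group == group_id:
            return "follower"      # same group: granted without waiting
        return "wait"              # different group: must queue as usual

lock = GroupRowLock()
results = [lock.acquire("g1") for _ in range(3)] + [lock.acquire("g2")]
# -> ["leader", "follower", "follower", "wait"]
```

Because Followers never block on the Leader, there is nothing for the deadlock detector to examine within a group.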

Fewer B-tree traversals via Row Cache

Only the Leader traverses the B-tree index to locate the target row. The row is then stored in a Row Cache in memory. Followers in the same group retrieve the row directly from Row Cache, skipping the index traversal entirely.
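The cost saving can be sketched as follows. The counter is instrumentation for this toy model only, not a real PolarDB metric, and the dictionary lookup stands in for a full B-tree traversal.

```python
# Sketch of the Row Cache idea: only the group Leader pays for the index
# traversal; every update in the group then reads the row from memory.
btree_lookups = 0

def btree_lookup(table, pk):
    """Stand-in for a full B-tree index traversal."""
    global btree_lookups
    btree_lookups += 1
    return table[pk]

def apply_group(table, pk, deltas):
    row_cache = {}
    row_cache[pk] = btree_lookup(table, pk)  # Leader: the only traversal
    for delta in deltas:                     # all updates hit Row Cache
        row_cache[pk]["c"] += delta

table = {1: {"c": 0}}
apply_group(table, 1, [1] * 100)  # 100 updates, 1 B-tree traversal
```

For a group of n updates, n − 1 index traversals are avoided, which matters when the index is deep or the buffer pool is under pressure.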

Prerequisites

Before you enable hot row optimization, ensure that:

  • Your PolarDB cluster runs one of the following versions:

    • PolarDB for MySQL 5.6, minor engine version 20200601 or later

    • PolarDB for MySQL 5.7, minor engine version 5.7.1.0.17 or later

    • PolarDB for MySQL 8.0, minor engine version 8.0.1.1.10 or later

  • Binary logging is enabled on the cluster

  • The cluster parameter rds_ic_reduce_hint_enable is set to OFF:

    • PolarDB for MySQL 5.6 and 8.0: OFF by default — no action needed

    • PolarDB for MySQL 5.7: ON by default — change the parameter value to OFF before proceeding

All cluster parameters in the PolarDB console carry a loose_ prefix for MySQL configuration compatibility. To modify rds_ic_reduce_hint_enable, select loose_rds_ic_reduce_hint_enable in the console.

Enable hot row optimization

Step 1: Enable the feature parameter

In the PolarDB console, configure the cluster parameter hotspot (console name: loose_hotspot):

Parameter   Description
hotspot     Enables or disables hot row optimization. Valid values: ON (enabled) / OFF (disabled, default)

To modify hotspot in the console, select the parameter with the loose_ prefix: loose_hotspot.

Step 2: Add hint syntax to your UPDATE statements

Add the following hints to UPDATE statements that target hot rows. The hint must appear in the last SQL statement of the transaction, because COMMIT_ON_SUCCESS automatically commits on success.

Hint                  Required    Description
COMMIT_ON_SUCCESS     Required    Commits the transaction if the update succeeds
ROLLBACK_ON_FAIL      Optional    Rolls back the transaction if the update fails
TARGET_AFFECT_ROW(1)  Optional    Fails the update if it affects more than one row

Example — increment a counter column on a specific row:

UPDATE /*+ COMMIT_ON_SUCCESS ROLLBACK_ON_FAIL TARGET_AFFECT_ROW(1) */ sbtest SET c = c + 1 WHERE id = 1;

Step 3: Verify the feature is active

Run the following query to confirm hot row optimization is processing Group Updates:

SHOW GLOBAL STATUS LIKE 'Group_update%';

A non-zero count in the result confirms the feature is active.

Tune parameters

View the current hot row parameter values:

SHOW VARIABLES LIKE "hotspot%";

Sample output:

+------------------------------+-------+
| Variable_name                | Value |
+------------------------------+-------+
| hotspot                      | OFF   |
| hotspot_for_autocommit       | OFF   |
| hotspot_lock_type            | OFF   |
| hotspot_update_max_wait_time | 100   |
+------------------------------+-------+

The following parameters cannot be modified in the PolarDB console. To change them, go to Quota Center, find the quota PolarDB hotspot row parameter adjustment, and click Apply in the Actions column.

  • hotspot_for_autocommit: Applies hot row optimization to UPDATE statements in autocommit mode. Valid values: ON / OFF. Default: OFF

  • hotspot_update_max_wait_time: Maximum time the Leader waits for Followers to join the group. Unit: microseconds (µs). Default: 100

  • hotspot_lock_type: Enables the new row lock type during Group Updates. When ON, Followers that request a lock on the same hot row acquire it immediately without waiting. Valid values: ON / OFF. Default: OFF

Performance testing

Test environment

  • Tool: Sysbench — an open-source, multi-threaded benchmarking tool supporting Lua-based test scripts

  • Test scenario: One hot row, 8-core CPU

  • Table definition:

    CREATE TABLE sbtest (id INT UNSIGNED NOT NULL, c BIGINT UNSIGNED NOT NULL, PRIMARY KEY (id));
  • Test statement:

    UPDATE /*+ COMMIT_ON_SUCCESS ROLLBACK_ON_FAIL TARGET_AFFECT_ROW(1) */ sbtest SET c = c + 1 WHERE id = 1;

Results summary

Version                Peak improvement (high concurrency)
PolarDB for MySQL 5.6  ~50x QPS improvement
PolarDB for MySQL 5.7  ~35x QPS improvement
PolarDB for MySQL 8.0  ~26x QPS improvement

PolarDB for MySQL 5.6

QPS

Concurrency            1        8        16        32        64        128      256       512       1024
Optimization disabled  1365.31  1863.94  1866.6    1862.64   1867.32   1832.51  1838.31   1819.52   1833.2
Optimization enabled   1114.79  7000.19  12717.32  22029.48  43096.06  61349.7  83098.69  90860.94  87689

PolarDB for MySQL 5.7

QPS

Concurrency            1        8        16        32        64        128       256       512       1024
Optimization disabled  1348.49  1892.29  1889.77   1895.86   1875.2    1850.26   1843.62   1849.92   1835.68
Optimization enabled   1104.9   6886.89  12485.17  16003.23  16460.31  16548.86  27920.89  47893.96  66500.92

95th percentile latency (ms)

Concurrency            1     8     16    32     64     128    256     512     1024
Optimization disabled  0.9   5.47  9.91  18.95  36.89  73.13  164.45  297.92  590.56
Optimization enabled   1.08  1.44  1.58  3.25   5.28   9.56   12.08   13.22   18.28

PolarDB for MySQL 8.0

QPS

Concurrency            1        8        16        32        64        128       256       512      1024
Optimization disabled  1559.14  2103.82  2116.4    2082.1    2079.74   2031.64   1993.09   1977.6   1983.61
Optimization enabled   1237.28  7443.04  12244.19  15529.52  23041.15  33931.18  53924.24  54598.6  50988.22

95th percentile latency (ms)

Concurrency            1     8     16    32     64     128    256     512     1024
Optimization disabled  0.85  -     8.9   17.32  33.12  66.84  153.02  287.38  549.52
Optimization enabled   0.97  1.34  1.89  3.19   4.82   5.88   7.17    13.46   28.16

Appendix: Run the performance test

  1. Prepare an ECS instance and install Sysbench.

  2. Create a file named oltp_inventory.lua in the src/lua folder of the Sysbench source code:

    #!/usr/bin/env sysbench
    -- Benchmarks hot row (inventory_hotspot) UPDATE performance
    
    sysbench.cmdline.options= {
        inventory_hotspot = {"enable ali inventory hotspot", 'off'},
        tables = {"table number", 1},
        table_size = {"table size", 1},
        oltp_skip_trx = {'skip trx', true},
        hotspot_rows = {'hotspot row number', 1}
    }
    
    
    function cleanup()
        drv = sysbench.sql.driver()
        con = drv:connect()
        for i = 1, sysbench.opt.tables do
            print(string.format("drop table sbtest%d ...", i))
            drop_table(drv, con, i)
        end
    end
    
    function drop_table(drv, con, table_id)
        local query
        query = string.format("drop table if exists sbtest%d ", table_id)
        con:query(query)
    end
    
    function create_table(drv, con, table_id)
        local query
        query = string.format("CREATE TABLE sbtest%d (id INT UNSIGNED NOT NULL, c BIGINT UNSIGNED NOT NULL, PRIMARY KEY (id))", table_id)
        con:query(query)
        for i=1, sysbench.opt.table_size do
            con:query("INSERT INTO sbtest" .. table_id .. "(id, c) values (" ..i.. ", 1)")
        end
    end
    
    function prepare()
        drv = sysbench.sql.driver()
        con = drv:connect()
        for i = 1, sysbench.opt.tables do
            print(string.format("Creating table sbtest%d ...", i))
            create_table(drv, con, i)
        end
    end
    
    function thread_init()
        drv = sysbench.sql.driver()
        con = drv:connect()
        begin_query = 'BEGIN'
        commit_query = 'COMMIT'
    end
    
    function event()
        local table_name
        table_name = "sbtest" .. sysbench.rand.uniform(1, sysbench.opt.tables)
        local min_line = math.min(sysbench.opt.table_size, sysbench.opt.hotspot_rows)
        local row_id = sysbench.rand.uniform(1, min_line)
    
        if not sysbench.opt.oltp_skip_trx then
            con:query(begin_query)
        end
    
        if (sysbench.opt.inventory_hotspot == "on") then
            con:query("UPDATE /*+ COMMIT_ON_SUCCESS ROLLBACK_ON_FAIL TARGET_AFFECT_ROW(1) */ " .. table_name .. " SET c=c+1 WHERE id = " .. row_id)
        else
            con:query("UPDATE " .. table_name .. " SET c=c+1 WHERE id = " .. row_id)
        end
    
        if not sysbench.opt.oltp_skip_trx then
            -- COMMIT_ON_SUCCESS already commits the hotspot statement,
            -- so an explicit COMMIT is only needed when the hint is off
            if (sysbench.opt.inventory_hotspot ~= "on") then
                con:query(commit_query)
            end
        end
    end
    
    function thread_done()
       con:disconnect()
    end
  3. Connect to the cluster.

  4. Prepare the test data:

    sysbench --hotspot_rows=1 --histogram=on --mysql-user=<user> --inventory_hotspot=on --mysql-host=<host> --threads=1 --report-interval=1 --mysql-password=<password> --tables=1 --table-size=1 --oltp_skip_trx=true --db-driver=mysql --percentile=95 --time=300 --mysql-port=<port> --events=0 --mysql-db=<database> oltp_inventory prepare
  5. Run the test:

    sysbench --hotspot_rows=1 --histogram=on --mysql-user=<user> --inventory_hotspot=on --mysql-host=<host> --threads=<threads> --report-interval=1 --mysql-password=<password> --tables=1 --table-size=1 --oltp_skip_trx=true --db-driver=mysql --percentile=95 --time=600 --mysql-port=<port> --events=0 --mysql-db=<database> oltp_inventory run

Input parameters

Parameter       Description
mysql-host      Cluster endpoint
mysql-port      Port of the cluster endpoint
mysql-user      Username for the cluster
mysql-password  Password for the cluster user
mysql-db        Database name

Output parameters

Parameter      Displayed content          Description
tables         Number of data tables      Total number of tables used in the test
table_size     Number of rows per table   Number of records in each table
Data size      Table data size (MB or GB) Size of the data in the table
threads        Number of concurrent threads  Number of configured threads
Thread status  Real-time thread status    Running status of threads during the test