
PolarDB: BRIN indexes

Last Updated: Mar 28, 2026

BRIN (Block Range Index) stores summary statistics for ranges of data blocks rather than indexing individual row IDs like a B-tree. This makes BRIN indexes extremely compact and imposes minimal overhead on writes, updates, and deletes — at the cost of less precise row location.

How it works

A BRIN index divides the table into contiguous block ranges and records summary statistics for each range. At query time, the planner skips ranges whose statistics cannot satisfy the query condition and scans only the candidate ranges as a bitmap heap scan.

Because the index stores summaries rather than exact row locations, it is lossy: the executor must recheck every row in a qualifying block range against the query condition. This is why execution plans for BRIN queries show Recheck Cond and Rows Removed by Index Recheck — both are expected behavior, not errors.
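If you want to see the per-range summaries directly, the pageinspect extension can decode BRIN pages (verify that the extension is available on your instance). A minimal sketch, assuming the idx_t_brin_1 index created later in this topic, and assuming page 2 is the first regular data page after the metapage and range map:

```sql
-- Requires the pageinspect extension (standard PostgreSQL contrib module).
CREATE EXTENSION IF NOT EXISTS pageinspect;

-- Decode the summary tuples stored on one BRIN data page: each row shows a
-- block range (blknum) and the min/max summary stored for it (value).
SELECT itemoffset, blknum, value
FROM brin_page_items(get_raw_page('idx_t_brin_1', 2), 'idx_t_brin_1')
LIMIT 3;
```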

The pages_per_range storage parameter controls the trade-off between index size and precision:

pages_per_range value | Index size | Precision | Blocks skipped
Small (e.g., 1)       | Larger     | Higher    | More
Large                 | Smaller    | Lower     | Fewer
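Note that pages_per_range is fixed when the summaries are built; changing it requires regenerating the index. A sketch with illustrative table and index names (events, idx_events_ts), assuming standard PostgreSQL syntax:

```sql
-- Coarser ranges: a smaller index, but less precise range pruning.
-- 128 is the PostgreSQL default.
CREATE INDEX idx_events_ts ON events USING brin (ts) WITH (pages_per_range = 128);

-- Altering the parameter does not rewrite existing summaries;
-- rebuild the index so the new granularity takes effect.
ALTER INDEX idx_events_ts SET (pages_per_range = 16);
REINDEX INDEX idx_events_ts;
```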

Supported operators

BRIN indexes support the following comparison operators: <, <=, =, >=, >.
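In practice this means bounded range and equality predicates can use a BRIN index, while operators outside this list cannot. A sketch against the t_brin table used later in this topic (the literal dates are illustrative):

```sql
-- Can use the BRIN index: a bounded range on the indexed column.
SELECT count(*) FROM t_brin
WHERE crt_time >= '2024-11-06' AND crt_time < '2024-11-07';

-- Cannot use the BRIN index: <> is not among the supported operators,
-- so the planner falls back to a sequential scan.
SELECT count(*) FROM t_brin WHERE crt_time <> '2024-11-06 09:26:57';
```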

When to use BRIN indexes

BRIN works best when the column values are naturally correlated with their physical storage order — that is, rows inserted in sequence tend to have sequentially increasing column values. Common examples:

  • Append-only time-series data: log entries, sensor readings, or event streams inserted in timestamp order

  • Auto-incrementing IDs: rows inserted sequentially so that the physical order matches the ID order

BRIN is not a replacement for B-tree. For point lookups or queries on randomly distributed data, use a B-tree index instead.
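Before choosing BRIN, you can measure how well a column's values track physical storage order using the planner's correlation statistic in the standard pg_stats view; values near 1 or -1 indicate near-perfect ordering. A sketch against the example table:

```sql
-- Refresh planner statistics first.
ANALYZE t_brin;

-- correlation close to 1 (or -1) marks a good BRIN candidate;
-- values near 0 indicate randomly distributed data.
SELECT attname, correlation
FROM pg_stats
WHERE tablename = 't_brin' AND attname IN ('id', 'crt_time');
```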

Example: index size and query performance

The following example creates a 1,000,000-row table and compares BRIN against B-tree indexes in terms of storage size and query execution.

Set up

Create the table and insert 1,000,000 rows:

CREATE TABLE t_brin (id int, info text, crt_time timestamp);
INSERT INTO t_brin SELECT generate_series(1,1000000), md5(random()::text), clock_timestamp();

Verify the physical row order (the ctid column reflects the heap location):

SELECT ctid, * FROM t_brin LIMIT 3;

Output:

ctid  | id | info                             | crt_time
(0,1) |  1 | 81c3f4f603c0c17e45778b2dd2d72f4d | 2024-11-06 09:26:57.549121
(0,2) |  2 | b4b77e95a1580480107b038776b3cc9c | 2024-11-06 09:26:57.551548
(0,3) |  3 | 6ebf5ebdd3df3428c279de2d5c7aab9f | 2024-11-06 09:26:57.551558

The id and crt_time columns both increase monotonically with physical row order, which makes this table a good candidate for BRIN indexes.

Create two BRIN indexes and two B-tree indexes for comparison. Both BRIN indexes use pages_per_range=1 for maximum precision:

-- BRIN indexes
CREATE INDEX idx_t_brin_1 ON t_brin USING brin (id) WITH (pages_per_range=1);
CREATE INDEX idx_t_brin_2 ON t_brin USING brin (crt_time) WITH (pages_per_range=1);

-- B-tree indexes
CREATE INDEX idx_t_brin_3 ON t_brin(id);
CREATE INDEX idx_t_brin_4 ON t_brin(crt_time);

Check the index sizes:

\di+

Output:

Schema   | Name         | Type  | Owner    | Table  | Size   | Description
wangjian | idx_t_brin_1 | index | wangjian | t_brin | 272 kB |
wangjian | idx_t_brin_2 | index | wangjian | t_brin | 352 kB |
wangjian | idx_t_brin_3 | index | wangjian | t_brin | 21 MB  |
wangjian | idx_t_brin_4 | index | wangjian | t_brin | 21 MB  |

Both BRIN indexes are under 400 kB, while each B-tree index is 21 MB: the BRIN indexes are roughly 60 to 80 times smaller.
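The same size comparison can be scripted instead of read from \di+ output, using the standard catalog and size functions:

```sql
-- List every index on t_brin with its on-disk size, smallest first.
SELECT indexrelid::regclass AS index_name,
       pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
FROM pg_index
WHERE indrelid = 't_brin'::regclass
ORDER BY pg_relation_size(indexrelid);
```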

Query on id

With the B-tree index idx_t_brin_3 present, the planner chooses a precise index scan:

EXPLAIN (analyze, verbose, timing, costs, buffers)
SELECT * FROM t_brin WHERE id BETWEEN 100 AND 200;

Output:

Index Scan using idx_t_brin_3 on public.t_brin  (cost=0.42..5.79 rows=107 width=45) (actual time=0.006..0.020 rows=101 loops=1)
  Output: id, info, crt_time
  Index Cond: ((t_brin.id >= 100) AND (t_brin.id <= 200))
  Buffers: shared hit=5 (main=5 vm=0 fsm=0)
Query Identifier: -5761088690410512151
Planning:
  Buffers: shared hit=40 (main=38 vm=2 fsm=0)
Planning Time: 0.232 ms
Execution Time: 0.045 ms

Drop the B-tree index and run the same query. The planner falls back to the BRIN index and uses a bitmap heap scan:

DROP INDEX idx_t_brin_3;

EXPLAIN (analyze, verbose, timing, costs, buffers)
SELECT * FROM t_brin WHERE id BETWEEN 100 AND 200;

Output:

Bitmap Heap Scan on public.t_brin  (cost=37.83..241.69 rows=107 width=45) (actual time=1.333..1.351 rows=101 loops=1)
  Output: id, info, crt_time
  Recheck Cond: ((t_brin.id >= 100) AND (t_brin.id <= 200))
  Rows Removed by Index Recheck: 93
  Heap Blocks: lossy=2
  Buffers: shared hit=51 (main=51 vm=0 fsm=0)
  ->  Bitmap Index Scan on idx_t_brin_1  (cost=0.00..37.80 rows=186 width=0) (actual time=1.327..1.327 rows=20 loops=1)
        Index Cond: ((t_brin.id >= 100) AND (t_brin.id <= 200))
        Buffers: shared hit=49 (main=49 vm=0 fsm=0)
Query Identifier: -5761088690410512151
Planning:
  Buffers: shared hit=5 (main=5 vm=0 fsm=0) dirtied=1 (main=0 vm=0 fsm=0)
Planning Time: 0.054 ms
Execution Time: 1.381 ms

Rows Removed by Index Recheck: 93 and Heap Blocks: lossy=2 confirm expected BRIN behavior: the index identified two candidate block ranges, and the executor rechecked all rows in those ranges, discarding 93 that did not match.

Query on crt_time

With the B-tree index idx_t_brin_4 present:

EXPLAIN (analyze, verbose, timing, costs, buffers)
SELECT * FROM t_brin WHERE crt_time BETWEEN '2017-06-27 22:50:19.172224' AND '2017-06-27 22:50:19.182224';

Output:

Index Scan using idx_t_brin_4 on public.t_brin  (cost=0.42..2.64 rows=1 width=45) (actual time=0.003..0.003 rows=0 loops=1)
  Output: id, info, crt_time
  Index Cond: ((t_brin.crt_time >= '2017-06-27 22:50:19.172224'::timestamp without time zone) AND (t_brin.crt_time <= '2017-06-27 22:50:19.182224'::timestamp without time zone))
  Buffers: shared hit=3 (main=3 vm=0 fsm=0)
Query Identifier: 2646955540723493075
Planning:
  Buffers: shared hit=9 (main=7 vm=2 fsm=0)
Planning Time: 0.061 ms
Execution Time: 0.019 ms

Drop the B-tree index and run the same query using the BRIN index:

DROP INDEX idx_t_brin_4;

EXPLAIN (analyze, verbose, timing, costs, buffers)
SELECT * FROM t_brin WHERE crt_time BETWEEN '2017-06-27 22:50:19.172224' AND '2017-06-27 22:50:19.182224';

Output:

Bitmap Heap Scan on public.t_brin  (cost=49.90..152.73 rows=1 width=45) (actual time=1.449..1.449 rows=0 loops=1)
  Output: id, info, crt_time
  Recheck Cond: ((t_brin.crt_time >= '2017-06-27 22:50:19.172224'::timestamp without time zone) AND (t_brin.crt_time <= '2017-06-27 22:50:19.182224'::timestamp without time zone))
  Buffers: shared hit=65 (main=65 vm=0 fsm=0)
  ->  Bitmap Index Scan on idx_t_brin_2  (cost=0.00..49.90 rows=93 width=0) (actual time=1.447..1.447 rows=0 loops=1)
        Index Cond: ((t_brin.crt_time >= '2017-06-27 22:50:19.172224'::timestamp without time zone) AND (t_brin.crt_time <= '2017-06-27 22:50:19.182224'::timestamp without time zone))
        Buffers: shared hit=65 (main=65 vm=0 fsm=0)
Query Identifier: 2646955540723493075
Planning:
  Buffers: shared hit=5 (main=5 vm=0 fsm=0) dirtied=1 (main=0 vm=0 fsm=0)
Planning Time: 0.058 ms
Execution Time: 1.477 ms

In this run the queried time range predates every row in the table, so the BRIN summaries rule out all block ranges: the bitmap index scan returns zero candidate blocks, the executor has nothing to recheck, and the query returns no rows after reading only the index summary pages (65 shared buffers). When the range does match data, the executor performs the same recheck pass over the candidate rows as shown in the id query above.
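One operational note: BRIN summaries for newly inserted pages are normally built by VACUUM, or automatically when the index is created WITH (autosummarize = on). After a large append you can summarize the pending ranges by hand with a standard PostgreSQL function:

```sql
-- Summarize any block ranges not yet covered by the BRIN index;
-- returns the number of ranges that were summarized.
SELECT brin_summarize_new_values('idx_t_brin_1');
```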