The query planner assigns a numeric cost to every execution plan and picks the lowest one. This topic explains the cost model parameters and the formulas the planner uses for full table scans and index scans, so you can verify the numbers in EXPLAIN output and tune the planner for your workload.
Cost values are dimensionless units: they represent relative effort, not wall-clock time. Only the ratio between costs matters. By default, seq_page_cost is set to 1.0, and all other parameters are expressed relative to it.
Cost model parameters
| Parameter | Default | Description |
|---|---|---|
| seq_page_cost | 1.0 | Cost of reading one page sequentially. This is the baseline; all other parameters are relative to it. |
| random_page_cost | 4.0 | Cost of reading one page at a random location. Lower this value (for example, to 1.1) if your data is entirely cached in RAM, since random access is no longer penalized. |
| cpu_tuple_cost | 0.01 | Cost of processing one row. |
| cpu_index_tuple_cost | 0.005 | Cost of processing one index entry. |
| cpu_operator_cost | 0.0025 | Cost of evaluating one operator or function. |
Reducing random_page_cost relative to seq_page_cost makes index scans look cheaper and causes the planner to prefer them. Raising random_page_cost has the opposite effect.
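To make that effect concrete, here is a minimal sketch (plain Python, not planner source code) comparing the I/O cost of a sequential scan against a worst-case index scan under two random_page_cost settings. The table size and number of fetched rows are illustrative assumptions, not taken from the examples below.

```python
# Toy comparison: I/O cost of reading every page sequentially vs. fetching
# each matching row with a random page read, under two random_page_cost values.

def seq_scan_io(pages, seq_page_cost=1.0):
    # Sequential scan reads every page at seq_page_cost each.
    return pages * seq_page_cost

def index_scan_io(rows_fetched, random_page_cost):
    # Worst case: every fetched row lands on a different page.
    return rows_fetched * random_page_cost

pages, rows_fetched = 443, 300  # assumed table size and matching rows

for rpc in (4.0, 1.1):
    seq, idx = seq_scan_io(pages), index_scan_io(rows_fetched, rpc)
    winner = "index scan" if idx < seq else "seq scan"
    print(f"random_page_cost={rpc}: seq={seq:.0f}, index={idx:.0f} -> {winner}")
```

With the default 4.0 the sequential scan wins; at 1.1 the same index scan becomes cheaper, which is exactly the shift described above.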
Reading EXPLAIN output
Every node in an EXPLAIN plan reports costs in this format:
(cost=<startup>..<total> rows=<n> width=<w>)

| Field | Meaning |
|---|---|
| startup | Estimated cost before the first row is returned (for example, sorting overhead). |
| total | Estimated cost to return all rows, assuming the node runs to completion. |
| rows | Estimated number of rows returned. |
| width | Estimated average row width in bytes. |
The sections below show how startup and total are derived for each scan type.
Full table scan
A full table scan reads every page sequentially — this is reported as Seq Scan in EXPLAIN output.
postgres=# EXPLAIN SELECT * FROM test WHERE id < 1000;
QUERY PLAN
----------------------------------------------------------
Seq Scan on test (cost=0.00..1693.00 rows=5680 width=4)
Filter: (id < 1000)
(2 rows)

To verify cost=0.00..1693.00, first look up the table statistics:
postgres=# SELECT relpages, reltuples FROM pg_class WHERE relname = 'test';
relpages | reltuples
----------+-----------
443 | 100000
(1 row)The total cost is the sum of I/O cost and CPU cost:
I/O cost = seq_page_cost × relpages
         = 1.0 × 443 = 443

CPU cost = (cpu_tuple_cost + cpu_operator_cost) × reltuples
         = (0.01 + 0.0025) × 100,000 = 1,250

TOTAL_COST = 443 + 1,250 = 1,693
This matches the ..1693.00 in the EXPLAIN output. The startup cost is 0.00 because a sequential scan returns rows immediately.
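Spelling out the same arithmetic in plain Python (a verification sketch, not planner code), using the default parameters and the pg_class statistics shown above:

```python
# Reproduce the Seq Scan total cost from the EXPLAIN output:
# relpages=443, reltuples=100000, default cost parameters.

seq_page_cost = 1.0
cpu_tuple_cost = 0.01
cpu_operator_cost = 0.0025

relpages, reltuples = 443, 100_000

io_cost = seq_page_cost * relpages                              # 443
cpu_cost = (cpu_tuple_cost + cpu_operator_cost) * reltuples     # 1250
total_cost = io_cost + cpu_cost

print(round(total_cost, 2))  # 1693.0, matching cost=0.00..1693.00
```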
Index scan
An index scan looks up matching index entries and fetches the corresponding table rows. This is reported as Index Only Scan (or Index Scan) in EXPLAIN output.
postgres=# EXPLAIN SELECT * FROM test WHERE id = 1414;
QUERY PLAN
-------------------------------------------------------------------------------
Index Only Scan using test_id_idx on test (cost=0.29..72.66 rows=30 width=4)
Index Cond: (id = 1414)
(2 rows)

The total cost is:
TOTAL_COST = START_COST + RUN_COST
RUN_COST = CPU cost + I/O cost

Calculate the CPU cost
CPU cost has two components — one for traversing the index and one for fetching rows from the table:
CPU cost = (index scan CPU cost) + (table scan CPU cost)
= Selectivity × index entries × (cpu_index_tuple_cost + cpu_operator_cost)
+ Selectivity × table rows × cpu_tuple_cost

Get the selectivity for value 1414 from pg_stats:
postgres=# SELECT most_common_freqs[array_position((most_common_vals::text)::real[], 1414::real)]
FROM pg_stats WHERE tablename = 'test';
most_common_freqs
-------------------
0.0003
(1 row)

Get the index statistics:
postgres=# SELECT relpages, reltuples FROM pg_class WHERE relname = 'test_id_idx';
relpages | reltuples
----------+-----------
276 | 100000
(1 row)

Substituting the values:
Index scan CPU cost = 0.0003 × 100,000 × (0.005 + 0.0025) = 0.225
Table scan CPU cost = 0.0003 × 100,000 × 0.01 = 0.300
CPU cost = 0.225 + 0.300 = 0.525
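The same CPU-cost arithmetic, written out in Python (a verification sketch using the selectivity and statistics queried above):

```python
# Index-scan CPU cost: selectivity=0.0003, 100,000 index entries and table rows.

cpu_index_tuple_cost = 0.005
cpu_operator_cost = 0.0025
cpu_tuple_cost = 0.01

selectivity = 0.0003
index_entries = table_rows = 100_000

index_cpu = selectivity * index_entries * (cpu_index_tuple_cost + cpu_operator_cost)
table_cpu = selectivity * table_rows * cpu_tuple_cost
cpu_cost = index_cpu + table_cpu

print(round(cpu_cost, 3))  # 0.525
```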
Calculate the I/O cost
I/O cost also has two components — one for reading the index pages and one for fetching the table pages.
Index I/O cost:
I/O cost of the index scan = ceil(Selectivity × index pages) × random_page_cost
= ceil(0.0003 × 276) × 4.0
= ceil(0.0828) × 4.0
= 1 × 4.0 = 4.0Table I/O cost depends on how correlated the physical row order is with the index order. Get the correlation from pg_stats:
postgres=# SELECT correlation FROM pg_stats WHERE tablename = 'test';
correlation
-------------
0.670535
(1 row)

The table I/O cost is interpolated between a worst case and a best case:
max_io_cost = Selectivity × table rows × random_page_cost
= 0.0003 × 100,000 × 4.0 = 120
min_io_cost = 1 × random_page_cost + (ceil(Selectivity × table pages) − 1) × seq_page_cost
= 1 × 4.0 + (ceil(0.0003 × 443) − 1) × 1.0
= 4.0 + 0 = 4.0
I/O cost of the table scan = max_io_cost + correlation² × (min_io_cost − max_io_cost)
= 120 + 0.670535² × (4.0 − 120)
= 120 + 0.449617 × (−116)
= 120 − 52.156 = 67.844

Total I/O cost:
I/O cost = index I/O cost + table I/O cost = 4.0 + 67.844 = 71.844

Verify the result
Combining CPU cost and I/O cost:
RUN_COST = 71.844 + 0.525 = 72.369
START_COST = 0.29 (cost of descending the B-tree index)
TOTAL_COST = 72.369 + 0.29 = 72.659 ≈ 72.66
This matches cost=0.29..72.66 in the EXPLAIN output.
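The whole index-scan derivation can be replayed end to end in Python (a verification sketch, not planner source code). All inputs come from the statistics queried in this section; the start cost of 0.29 is taken from the EXPLAIN output rather than derived here.

```python
import math

# End-to-end check of the Index Only Scan cost computed in this section.
seq_page_cost, random_page_cost = 1.0, 4.0
cpu_tuple_cost, cpu_index_tuple_cost, cpu_operator_cost = 0.01, 0.005, 0.0025

selectivity = 0.0003
table_pages, table_rows = 443, 100_000
index_pages, index_entries = 276, 100_000
correlation = 0.670535
start_cost = 0.29  # from the EXPLAIN output, not derived here

# CPU cost: index traversal plus row processing.
cpu_cost = (selectivity * index_entries * (cpu_index_tuple_cost + cpu_operator_cost)
            + selectivity * table_rows * cpu_tuple_cost)

# I/O cost: index pages, then table pages interpolated by correlation².
index_io = math.ceil(selectivity * index_pages) * random_page_cost
max_io = selectivity * table_rows * random_page_cost
min_io = random_page_cost + (math.ceil(selectivity * table_pages) - 1) * seq_page_cost
table_io = max_io + correlation**2 * (min_io - max_io)

total = start_cost + cpu_cost + index_io + table_io
print(round(total, 2))  # 72.66, matching cost=0.29..72.66
```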