PolarDB supports multiple policies to control the degree of parallelism (DOP) globally or per query, and to fall back to sequential execution gracefully when resource loads are high.
All cluster parameters in the PolarDB console include the loose_ prefix for compatibility with the MySQL configuration file format. When you modify parameters in the console, use the parameter names that include the loose_ prefix.
How it works
When a parallel query arrives, PolarDB applies three layers of control:
1. Queuing policy: If the total number of running parallel workers reaches loose_max_parallel_workers, new parallel queries enter a first-in, first-out (FIFO) queue instead of running immediately.
2. Queue capacity limit: If the total DOP of queued queries reaches loose_queuing_parallel_degree_limit, the queue is considered full and subsequent queries fall back to sequential execution.
3. Queue timeout: If a query waits in the queue longer than loose_pq_max_queuing_time, it is removed from the queue and falls back to sequential execution.
Whether PolarDB attempts parallel execution at all depends on the DOP policy set by loose_parallel_degree_policy.
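The three layers above can be sketched as a small admission-control routine. This is an illustrative sketch, not PolarDB source code; the constants mirror the console parameters described below, and the default worker limit of 8 assumes a hypothetical 4-core node:

```python
from collections import deque

MAX_PARALLEL_WORKERS = 8             # loose_max_parallel_workers (2 × 4 cores, assumed)
QUEUING_PARALLEL_DEGREE_LIMIT = 64   # loose_queuing_parallel_degree_limit
PQ_MAX_QUEUING_TIME_MS = 200         # loose_pq_max_queuing_time

def admit(query_dop, running_workers, queue):
    """Decide how a new parallel query with DOP `query_dop` is handled.

    Returns "run", "queue", or "sequential".
    """
    # Layer 1: while the worker limit has not been reached, run in parallel.
    if running_workers < MAX_PARALLEL_WORKERS:
        return "run"
    # Layer 2: if the queue's total DOP has reached its cap, the queue is
    # full and the query falls back to sequential execution immediately.
    if sum(queue) >= QUEUING_PARALLEL_DEGREE_LIMIT:
        return "sequential"
    queue.append(query_dop)  # FIFO: queries are served in arrival order
    return "queue"

def on_wait(queue, waited_ms):
    """Layer 3: a queued query that waits past the timeout runs sequentially."""
    if waited_ms > PQ_MAX_QUEUING_TIME_MS and queue:
        queue.popleft()
        return "sequential"
    return "still queued"
```

For example, a query arriving while all worker slots are busy is queued, but once the queued DOP total reaches the cap, later arrivals skip the queue and run sequentially.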
DOP parameters
All parameters listed below require the loose_ prefix when configured in the PolarDB console.
Queuing and worker limits
| Parameter | Scope | Default | Description |
|---|---|---|---|
| loose_max_parallel_workers | Global | 2 × CPU cores | Maximum number of parallel workers across all concurrent parallel queries on a single node. When this limit is reached, new parallel queries enter a FIFO queue. In a serverless cluster, this value is adjusted automatically as node specifications scale. Valid values: 1–10,000. |
| loose_queuing_parallel_degree_limit | Global | 64 | Maximum total DOP for queries waiting in the queue. When this limit is reached, the queue is full and additional parallel queries fall back to sequential execution. Valid values: 0–10,000. |
| loose_pq_max_queuing_time | Global, session | 200 ms | Maximum time a query can wait in the queue before falling back to sequential execution. Valid values: 0–18,446,744,073,709,551,615 ms. |
DOP policy
| Parameter | Scope | Default | Description |
|---|---|---|---|
| loose_parallel_degree_policy | Global | REPLICA_AUTO | Controls how PolarDB selects the DOP for each query. Valid values: TYPICAL, AUTO, REPLICA_AUTO. |
Policy comparison:
| Value | Who runs parallel queries | How DOP is selected |
|---|---|---|
| TYPICAL | Primary node and read-only nodes | Fixed: uses the value of loose_max_parallel_degree, regardless of current CPU usage |
| AUTO | Primary node and read-only nodes | Adaptive: PolarDB enables or disables parallel execution based on CPU usage, memory usage, and input/output operations per second (IOPS), and selects DOP based on query cost |
| REPLICA_AUTO (default) | Read-only nodes only | Adaptive: same as AUTO, but the primary node always uses sequential execution |
Use REPLICA_AUTO when you want to protect the primary node from parallel query overhead while allowing read-only nodes to scale query performance adaptively.
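The policy comparison above reduces to a small decision function. This is an illustrative sketch rather than PolarDB internals; the node_overloaded flag stands in for the adaptive CPU, memory, and IOPS checks used by AUTO and REPLICA_AUTO:

```python
def attempts_parallel(policy, is_primary, node_overloaded=False):
    """Whether a node attempts parallel execution under a given DOP policy."""
    if policy == "TYPICAL":
        # Fixed DOP on all nodes, regardless of current resource usage.
        return True
    if policy == "AUTO":
        # Adaptive on all nodes: skip parallelism when the node is loaded.
        return not node_overloaded
    if policy == "REPLICA_AUTO":
        # The primary node always runs sequentially; read-only nodes
        # behave the same as under AUTO.
        return (not is_primary) and (not node_overloaded)
    raise ValueError(f"unknown policy: {policy}")
```

Under the default REPLICA_AUTO, attempts_parallel always returns False on the primary node, which is exactly the protection described above.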
Resource-based thresholds
These parameters apply when loose_parallel_degree_policy is set to AUTO or REPLICA_AUTO. PolarDB disables parallel queries on a node when any threshold is exceeded.
| Parameter | Scope | Default | Threshold behavior |
|---|---|---|---|
| loose_auto_dop_cpu_pct_hwm | Global | 70 | Disables parallel queries when CPU usage exceeds this percentage. Valid values: 0–100. |
| loose_auto_dop_mem_pct_hwm | Global | 90 | Disables parallel queries when memory usage exceeds this percentage. Valid values: 0–100. |
| loose_auto_dop_iops_pct_hwm | Global | 80 | Disables parallel queries when IOPS usage exceeds this percentage. Valid values: 0–100. |
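The threshold semantics can be illustrated with a short sketch. Parallel queries stay enabled only while every usage figure is at or below its high-water mark; exceeding any one threshold disables them. The constants below are the documented defaults:

```python
AUTO_DOP_CPU_PCT_HWM = 70    # loose_auto_dop_cpu_pct_hwm
AUTO_DOP_MEM_PCT_HWM = 90    # loose_auto_dop_mem_pct_hwm
AUTO_DOP_IOPS_PCT_HWM = 80   # loose_auto_dop_iops_pct_hwm

def parallel_enabled(cpu_pct, mem_pct, iops_pct):
    """Under AUTO or REPLICA_AUTO, parallel queries are disabled on a node
    as soon as any resource usage exceeds its high-water mark."""
    return (cpu_pct <= AUTO_DOP_CPU_PCT_HWM
            and mem_pct <= AUTO_DOP_MEM_PCT_HWM
            and iops_pct <= AUTO_DOP_IOPS_PCT_HWM)
```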
Monitor parallel query execution
Use the following status variables to track how often parallel queries fall back to sequential execution.
| Variable | Scope | Description |
|---|---|---|
| PQ_refused_over_total_workers | Global, session | Number of queries that fell back to sequential execution because loose_max_parallel_workers was reached. A rising count indicates the worker limit may be too low for your workload. Consider increasing loose_max_parallel_workers. |
| PQ_refused_over_max_queuing_time | Global, session | Number of queries that fell back to sequential execution due to queuing timeout. A rising count indicates queries are waiting too long. Consider increasing loose_max_parallel_workers or loose_queuing_parallel_degree_limit, or increasing loose_pq_max_queuing_time. |
| Total_running_parallel_workers | Global | Current number of active parallel workers. Use this to gauge how close you are to the loose_max_parallel_workers limit. |
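The tuning advice in the table can be automated against periodic snapshots of these counters. This is a hypothetical helper, not a PolarDB tool; it takes two snapshots (dicts mapping variable name to value) and returns the suggestions the table gives for each rising counter:

```python
def fallback_diagnosis(prev, curr):
    """Compare two snapshots of the parallel-query status variables and
    suggest which parameters to revisit when fallbacks are increasing."""
    hints = []
    if curr["PQ_refused_over_total_workers"] > prev["PQ_refused_over_total_workers"]:
        hints.append("worker limit reached: consider increasing "
                     "loose_max_parallel_workers")
    if curr["PQ_refused_over_max_queuing_time"] > prev["PQ_refused_over_max_queuing_time"]:
        hints.append("queued queries timing out: consider increasing "
                     "loose_max_parallel_workers, "
                     "loose_queuing_parallel_degree_limit, or "
                     "loose_pq_max_queuing_time")
    return hints
```

An unchanged pair of snapshots yields no hints, which is the healthy steady state.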