
PolarDB:Configure parallel resource control policies

Last Updated:Mar 28, 2026

PolarDB provides multiple policies that control the degree of parallelism (DOP) globally or per query, and that let queries fall back gracefully to sequential execution when resource load is high.

Note

For compatibility with MySQL configuration files, all cluster parameters in the PolarDB console carry the loose_ prefix. When you modify parameters in the console, use the parameter names that include the loose_ prefix.

How it works

When a parallel query arrives, PolarDB applies three layers of control:

  1. Queuing policy — If the total number of running parallel workers reaches loose_max_parallel_workers, new parallel queries enter a first-in, first-out (FIFO) queue instead of running immediately.

  2. Queue capacity limit — If the total DOP of queued queries reaches loose_queuing_parallel_degree_limit, the queue is considered full and subsequent queries fall back to sequential execution.

  3. Queue timeout — If a query waits in the queue longer than loose_pq_max_queuing_time, it is removed from the queue and falls back to sequential execution.
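The three layers can be sketched as a small admission-control model. This is an illustration only, not PolarDB's actual implementation: the class and method names (`ParallelAdmission`, `try_admit`, `expire_timeouts`) are hypothetical, and the real scheduler is internal to the database.

```python
import collections
import time

class ParallelAdmission:
    """Illustrative model of PolarDB's three-layer parallel admission control."""

    def __init__(self, max_parallel_workers=64, queuing_parallel_degree_limit=64,
                 pq_max_queuing_time_ms=200):
        self.max_parallel_workers = max_parallel_workers      # loose_max_parallel_workers
        self.queue_dop_limit = queuing_parallel_degree_limit  # loose_queuing_parallel_degree_limit
        self.max_queuing_time_ms = pq_max_queuing_time_ms     # loose_pq_max_queuing_time
        self.running_workers = 0
        self.queue = collections.deque()                      # FIFO queue of (dop, enqueue_time)

    def try_admit(self, dop):
        """Return 'run', 'queued', or 'sequential' for a query requesting `dop` workers."""
        # Layer 1: run immediately while the worker limit is not reached.
        if self.running_workers + dop <= self.max_parallel_workers:
            self.running_workers += dop
            return "run"
        # Layer 2: the queue is full once the total queued DOP hits the limit,
        # so the query falls back to sequential execution.
        queued_dop = sum(d for d, _ in self.queue)
        if queued_dop + dop > self.queue_dop_limit:
            return "sequential"
        self.queue.append((dop, time.monotonic()))
        return "queued"

    def expire_timeouts(self):
        """Layer 3: queued queries that waited too long fall back to sequential."""
        now = time.monotonic()
        expired = [entry for entry in self.queue
                   if (now - entry[1]) * 1000 > self.max_queuing_time_ms]
        for entry in expired:
            self.queue.remove(entry)
        return len(expired)
```

For example, with `max_parallel_workers=4`, a first query with DOP 4 runs immediately, a second query is queued, and further queries fall back to sequential execution once the queued DOP exceeds `queuing_parallel_degree_limit`.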

Whether PolarDB attempts parallel execution at all depends on the DOP policy set by loose_parallel_degree_policy.

DOP parameters

Note

All parameters listed below require the loose_ prefix when configured in the PolarDB console.

Queuing and worker limits

| Parameter | Scope | Default | Description |
| --- | --- | --- | --- |
| loose_max_parallel_workers | Global | 2 × CPU cores | Maximum number of parallel workers across all concurrent parallel queries on a single node. When this limit is reached, new parallel queries enter a FIFO queue. In a serverless cluster, this value is adjusted automatically as node specifications scale. Valid values: 1–10,000. |
| loose_queuing_parallel_degree_limit | Global | 64 | Maximum total DOP of queries waiting in the queue. When this limit is reached, the queue is full and additional parallel queries fall back to sequential execution. Valid values: 0–10,000. |
| loose_pq_max_queuing_time | Global, session | 200 ms | Maximum time a query can wait in the queue before it falls back to sequential execution. Valid values: 0–18,446,744,073,709,551,615 ms. |

DOP policy

| Parameter | Scope | Default | Description |
| --- | --- | --- | --- |
| loose_parallel_degree_policy | Global | REPLICA_AUTO | Controls how PolarDB selects the DOP for each query. Valid values: TYPICAL, AUTO, REPLICA_AUTO. |

Policy comparison:

| Value | Nodes that run parallel queries | How DOP is selected |
| --- | --- | --- |
| TYPICAL | Primary node and read-only nodes | Fixed: uses the value of loose_max_parallel_degree, regardless of current CPU usage |
| AUTO | Primary node and read-only nodes | Adaptive: PolarDB enables or disables parallel execution based on CPU usage, memory usage, and input/output operations per second (IOPS), and selects the DOP based on query cost |
| REPLICA_AUTO (default) | Read-only nodes only | Adaptive: same as AUTO, but the primary node always uses sequential execution |

Use REPLICA_AUTO when you want to protect the primary node from parallel query overhead while allowing read-only nodes to scale query performance adaptively.

Resource-based thresholds

These parameters apply when loose_parallel_degree_policy is set to AUTO or REPLICA_AUTO. PolarDB disables parallel queries on a node when any threshold is exceeded.

| Parameter | Scope | Default | Threshold behavior |
| --- | --- | --- | --- |
| loose_auto_dop_cpu_pct_hwm | Global | 70 | Disables parallel queries when CPU usage exceeds this percentage. Valid values: 0–100. |
| loose_auto_dop_mem_pct_hwm | Global | 90 | Disables parallel queries when memory usage exceeds this percentage. Valid values: 0–100. |
| loose_auto_dop_iops_pct_hwm | Global | 80 | Disables parallel queries when IOPS usage exceeds this percentage. Valid values: 0–100. |
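Combining the policy table with these thresholds, the adaptive decision can be sketched as follows. This is a simplified illustration under stated assumptions: `choose_dop` and `cost_based_dop` are hypothetical names, and PolarDB's actual cost model is internal and not described by this document.

```python
def choose_dop(policy, is_primary, cpu_pct, mem_pct, iops_pct,
               max_parallel_degree, cost_based_dop,
               cpu_hwm=70, mem_hwm=90, iops_hwm=80):
    """Illustrative DOP selection for loose_parallel_degree_policy.

    Returns 0 for sequential execution, otherwise a DOP greater than 0.
    The *_hwm defaults mirror loose_auto_dop_{cpu,mem,iops}_pct_hwm.
    """
    if policy == "TYPICAL":
        # Fixed DOP on any node, regardless of current resource usage.
        return max_parallel_degree
    if policy == "REPLICA_AUTO" and is_primary:
        # Under REPLICA_AUTO, the primary node always executes sequentially.
        return 0
    # AUTO (any node) or REPLICA_AUTO on a read-only node: adaptive.
    if cpu_pct > cpu_hwm or mem_pct > mem_hwm or iops_pct > iops_hwm:
        return 0              # a resource threshold is exceeded: disable parallel queries
    return cost_based_dop     # otherwise pick a DOP from the query's cost estimate
```

For example, under REPLICA_AUTO a query on the primary node returns 0 (sequential) even when the node is idle, while the same query on a lightly loaded read-only node gets the cost-based DOP.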

Monitor parallel query execution

Use the following status variables to track how often parallel queries fall back to sequential execution.

| Variable | Scope | Description |
| --- | --- | --- |
| PQ_refused_over_total_workers | Global, session | Number of queries that fell back to sequential execution because loose_max_parallel_workers was reached. A rising count indicates that the worker limit may be too low for your workload; consider increasing loose_max_parallel_workers. |
| PQ_refused_over_max_queuing_time | Global, session | Number of queries that fell back to sequential execution because of a queuing timeout. A rising count indicates that queries wait too long; consider increasing loose_max_parallel_workers, loose_queuing_parallel_degree_limit, or loose_pq_max_queuing_time. |
| Total_running_parallel_workers | Global | Current number of active parallel workers. Use this value to gauge how close you are to the loose_max_parallel_workers limit. |
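As a sketch, a monitoring script might compare successive snapshots of these counters and apply the guidance above. The helper names are hypothetical, and the snapshots are shown as plain dicts rather than live `SHOW GLOBAL STATUS` output.

```python
def fallback_report(prev, curr):
    """Compare two snapshots of the status variables and flag rising fallback counters.

    `prev` and `curr` map variable names to integer values, for example as
    collected periodically from SHOW GLOBAL STATUS LIKE 'PQ_refused%'.
    """
    advice = []
    if curr["PQ_refused_over_total_workers"] > prev["PQ_refused_over_total_workers"]:
        advice.append("worker limit hit: consider raising loose_max_parallel_workers")
    if curr["PQ_refused_over_max_queuing_time"] > prev["PQ_refused_over_max_queuing_time"]:
        advice.append("queuing timeouts: consider raising loose_pq_max_queuing_time "
                      "or loose_max_parallel_workers")
    return advice

def worker_headroom(total_running, max_workers):
    """Fraction of the loose_max_parallel_workers budget still unused."""
    return 1.0 - total_running / max_workers
```

For example, if `Total_running_parallel_workers` is 48 and `loose_max_parallel_workers` is 64, a quarter of the worker budget remains before new parallel queries start to queue.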