AnalyticDB: Performance parameter tuning

Last Updated: Mar 30, 2026

AnalyticDB for PostgreSQL works out of the box for most use cases. For scenarios with specific performance requirements — high connection concurrency, memory-intensive queries, or mixed-priority workloads — tuning database parameters can significantly improve throughput and resource utilization.

Parameter levels

Parameters apply at four levels. A parameter set at a more specific level (lower in the table below) overrides the same parameter set at a broader level for that scope.

Level Scope How to set
System All users across all databases in an instance Submit a ticket to operations and maintenance (O&M) personnel
Database All sessions in the specified database ALTER DATABASE name SET parameter { TO | = } { value | DEFAULT };
Role All sessions for the specified user ALTER ROLE name SET parameter { TO | = } { value | DEFAULT };
Session The current session only SET parameter { TO | = } { value | DEFAULT };
Start with session-level tuning. Modify higher-level parameters only after confirming the change does not affect other users or workloads.
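For example, the statements below sketch a change at each directly modifiable level, using statement_mem with a hypothetical role app_user and a hypothetical database analytics_db:

```sql
-- Session level: affects only the current connection
SET statement_mem TO '256MB';

-- Role level: applies to future sessions of the hypothetical user app_user
ALTER ROLE app_user SET statement_mem TO '256MB';

-- Database level: applies to future sessions in the hypothetical database analytics_db
ALTER DATABASE analytics_db SET statement_mem TO '256MB';

-- Revert the session to the value inherited from higher levels
SET statement_mem TO DEFAULT;
```

Note that role- and database-level settings take effect only for sessions opened after the change; existing sessions keep their current values.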

Check the current value of a parameter

Before changing a parameter, check its current value:

-- Check a specific parameter in the current session
SHOW statement_mem;

-- Check a parameter's value, unit, and context from the pg_settings view
SELECT name, setting, unit, context FROM pg_settings WHERE name = 'statement_mem';
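To see which database- and role-level overrides are already in place before adding new ones, you can query the pg_db_role_setting system catalog, which records settings applied with ALTER DATABASE and ALTER ROLE:

```sql
-- List existing database- and role-level parameter overrides.
-- A NULL database or role means the override applies at the other level only.
SELECT d.datname  AS database,
       r.rolname  AS role,
       s.setconfig AS overrides
FROM pg_db_role_setting s
LEFT JOIN pg_database d ON d.oid = s.setdatabase
LEFT JOIN pg_roles    r ON r.oid = s.setrole;
```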

System-level parameters

The following parameters require a ticket to modify. For descriptions, see the Greenplum configuration parameter reference.

Parameter Default Type/Unit Valid values
gp_autostats_mode ON_NO_STATS STRING NONE, ON_CHANGE, ON_NO_STATS
gp_autostats_mode_in_functions NONE STRING NONE, ON_CHANGE, ON_NO_STATS
gp_max_slices 50 INT 0–1000
log_rotation_size 102400 KB 0–4194304
master.rds_enable_vmem_protect on STRING on, off
master.rds_max_non_super_conns 500 INT 10–1000
max_stack_depth 2048 KB 100–2048000
max_statement_mem 2048000 KB 32768–2147483647
optimizer on STRING on, off
random_page_cost 4 DOUBLE 0–1000
rds.rds_enable_aliyun_oss_endpoint on STRING on, off
rds.rds_enable_oss_endpoint_whitelist_check on STRING on, off
rds_max_super_conns 50 INT 10–100
segment.rds_enable_vmem_protect off STRING on, off
segment.rds_max_non_super_conns 1000 INT 10–3000
statement_mem 2047000 KB 50–2147483647
statement_timeout 10800000 INT 0–2147483647

Connection parameters

These parameters control how many concurrent connections the instance accepts. Misconfiguration can cause connection errors in high-concurrency scenarios.

Parameter Takes effect Dependency Description
max_connections Instance restart Compute node value must be 3–5x the coordinator node value Maximum concurrent connections per node. Rather than raising max_connections directly, adjust rds_max_non_super_conns instead — it takes effect without a restart. For high-concurrency short-lived connections, set rds_max_non_super_conns to a value between 500 and the max_connections value.
max_prepared_transactions Instance restart Set to the same value as max_connections. 2 × max_prepared_transactions must exceed rds_max_non_super_conns + rds_max_super_conns. Maximum number of transactions in the two-phase commit (2PC) state. If errors occur under high concurrency, increase this value.
rds_max_non_super_conns Reload configuration Must be less than max_connections. rds_max_non_super_conns + rds_max_super_conns must not exceed max_connections. Compute node value must be 3–5x the coordinator node value. Maximum connections for non-superusers. Adjusting this parameter is the preferred way to manage connection limits.
rds_max_super_conns Reload configuration Same constraints as rds_max_non_super_conns. Maximum connections for superusers. No adjustment is needed in most cases.

When you hit connection limits: Increase rds_max_non_super_conns (takes effect immediately via reload). If the limit is already at max_connections, increase max_connections first (requires restart), then raise rds_max_non_super_conns.
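Before raising any limit, it helps to compare current connection usage against the configured maximums. The sketch below assumes rds_max_non_super_conns is readable through SHOW, like other server parameters:

```sql
-- Current connections per user, highest consumers first
SELECT usename, count(*) AS connections
FROM pg_stat_activity
GROUP BY usename
ORDER BY connections DESC;

-- Configured limits to compare against
SHOW max_connections;
SHOW rds_max_non_super_conns;
```

If the per-user totals are close to rds_max_non_super_conns, raise that parameter first; only touch max_connections (and accept a restart) when the reload-level limit has no headroom left.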

Memory parameters

These parameters control how memory is allocated to queries and shared across sessions. Tune them when queries fail due to out-of-memory (OOM) errors or when memory is underutilized.

Parameter Unit Takes effect Dependency Description
shared_buffers KB Instance restart For row-oriented table workloads, set to 25% of instance memory or higher on compute nodes. Size of the shared buffer pool. Set separately for coordinator and compute nodes. The coordinator node cache holds data dictionary tables; compute nodes cache row-oriented tables. Append-optimized column-oriented (AOCO) tables are not affected by this setting. If the instance has more than 100,000 tables, increase this value on both coordinator and compute nodes.
temp_buffers KB Reload configuration Size of the temporary buffer used for session-level access to temporary tables. Default: 8 MB. Increase this at the session level to improve performance on large temporary tables.
gp_vmem_protect_limit MB Instance restart gp_vmem_protect_limit = gp_vmem / acting_primary_segments Maximum memory all postgres processes on a compute node can use. A query fails if it requires more memory than this limit. Set separately for primary and secondary compute nodes. Calculate the appropriate value: gp_vmem = ((SWAP + RAM) - (7.5 GB + 0.05 × RAM)) / 1.7 then gp_vmem_protect_limit = gp_vmem / acting_primary_segments. If queries fail due to memory errors, increase this value. To set per-query memory limits, use resource queue parameters instead — see Use resource queues for workload management.
gp_vmem_protect_segworker_cache_limit KB Instance restart Maximum memory cached per query executor process on compute nodes. If the instance has many connections or idle processes, decrease this value to free memory for active queries. Set separately for each compute node.
gp_resqueue_memory_policy Reload configuration Memory management policy. eager_free releases memory at the end of each query plan stage, making it available to the next stage — reduces OOM risk but adds overhead. auto distributes memory based on statement_mem and the resource queue memory limit.
statement_mem KB Reload configuration Active when gp_resqueue_memory_policy is set to auto. Cannot exceed max_statement_mem. Memory allocated per query on the host. If a query needs more memory than this limit, it spills to temporary disk files. Calculate the appropriate value: statement_mem = (gp_vmem_protect_limit GB × 0.9) / max_expected_concurrent_queries. For example, with gp_vmem_protect_limit = 8 GB and 40 expected concurrent queries: (8 GB × 0.9) / 40 = 184 MB.
max_statement_mem KB Reload configuration (seghost_physical_memory) / (average_number_concurrent_queries) Upper bound on per-query memory allocation. Prevents OOM errors on compute nodes when statement_mem is set too high. Calculate the appropriate value: max_statement_mem = seghost_physical_memory / average_number_concurrent_queries.
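The memory formulas above can be verified with plain SELECT arithmetic before applying them. The figures below assume a hypothetical compute node with 64 GB RAM, 64 GB swap, and 8 acting primary segments:

```sql
-- gp_vmem = ((SWAP + RAM) - (7.5 GB + 0.05 * RAM)) / 1.7
-- With SWAP = 64 GB and RAM = 64 GB:
SELECT round((((64 + 64) - (7.5 + 0.05 * 64)) / 1.7)::numeric, 1) AS gp_vmem_gb;
-- 117.3 / 1.7 = 69.0 GB

-- gp_vmem_protect_limit = gp_vmem / acting_primary_segments, expressed in MB
SELECT round((69.0 / 8 * 1024)::numeric, 0) AS gp_vmem_protect_limit_mb;
-- 8.625 GB per segment = 8832 MB
```

Plugging the resulting per-segment limit into the statement_mem formula then gives the per-query budget, as in the 184 MB example above.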

Workload parameters

These parameters control CPU allocation and query prioritization across concurrent workloads.

Parameter Valid values Takes effect Dependency Description
gp_resqueue_priority BOOLEAN Instance restart Enables or disables query prioritization. When disabled, priority settings on resource queues have no effect at runtime. Disable prioritization in latency-sensitive scenarios where all queries should compete equally for CPU.
gp_resqueue_priority_cpucores_per_segment 0.1–512.0 Instance restart Number of CPU cores allocated to each compute node. Set this to match the actual CPU count of the instance when it was created. An incorrect value causes CPU underutilization and makes query prioritization unreliable.
gp_resqueue_priority_sweeper_interval 500–15000 ms Instance restart gp_resqueue_priority must be enabled Interval at which CPU utilization is recalculated for active queries. A shorter interval improves priority enforcement accuracy but increases CPU overhead.

Trade-off for gp_resqueue_priority: Disabling query prioritization removes scheduling overhead and can improve throughput for uniform workloads. If your workload mixes latency-sensitive and batch queries, keep prioritization enabled and assign priorities through resource queues. For more information, see Use resource queues for workload management.
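As a sketch of the mixed-workload setup described above, the statements below create two hypothetical resource queues and assign hypothetical roles to them; queue names, limits, and role names are illustrative only:

```sql
-- High-priority queue for latency-sensitive interactive queries
CREATE RESOURCE QUEUE interactive_q WITH (ACTIVE_STATEMENTS=20, PRIORITY=HIGH);

-- Low-priority queue for batch jobs, capped with a memory limit
CREATE RESOURCE QUEUE batch_q WITH (ACTIVE_STATEMENTS=5, PRIORITY=LOW, MEMORY_LIMIT='4GB');

-- Route the hypothetical roles to their queues
ALTER ROLE dashboard_user RESOURCE QUEUE interactive_q;
ALTER ROLE etl_user RESOURCE QUEUE batch_q;
```

With gp_resqueue_priority enabled, queries submitted by dashboard_user then receive a larger CPU share than etl_user's batch jobs whenever the two compete.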
