PolarDB for PostgreSQL Enterprise Edition offers two compute node specification families: Dedicated and General-purpose. Use this page to select the specification that matches your vCPU, memory, IOPS, and connection requirements.
Specification families
| Specification family | Resource model | Best for |
|---|---|---|
| Dedicated | Each cluster exclusively uses its allocated CPUs — no sharing with other clusters on the same host | Production workloads that require stable, predictable performance |
| General-purpose | Idle CPUs are shared among clusters on the same host | Cost-sensitive workloads or dev/test environments where occasional CPU contention is acceptable |
Quick comparison
| | Dedicated | General-purpose |
|---|---|---|
| vCPU range | 2–120 vCPUs | 2–16 vCPUs |
| Memory-to-vCPU ratio | 4 GB or 8 GB per vCPU | 2 GB, 4 GB, or 8 GB per vCPU |
| Maximum storage | Up to 500 TB | Up to 100 TB |
| CPU isolation | Guaranteed | Shared (best-effort) |
General-purpose nodes share CPU resources. Under sustained high CPU load, performance may vary because idle capacity from neighboring clusters is not guaranteed. If your workload consistently runs near peak CPU, choose Dedicated instead.
Terminology
| Term | Definition |
|---|---|
| PSL4 / PSL5 | PolarDB storage performance levels. PSL5 provides higher IOPS limits than PSL4 for the same compute node. Choose your storage class alongside your compute node to ensure the IOPS limit meets your workload requirements. |
| Internal bandwidth | Network bandwidth between the compute node and the storage layer within the cluster. |
| I/O bandwidth | Maximum data transfer rate between the compute node and storage. |
| Maximum connections | The upper limit for concurrent connections, controlled by the max_connections parameter. The actual number of connections a node can sustain depends on per-connection memory usage, which varies by workload. |
Compute node specifications
Dedicated
| Node specifications | vCPUs and memory | Maximum storage capacity | Maximum connections¹ | Internal bandwidth | Maximum IOPS for PSL4 | Maximum IOPS for PSL5 | I/O bandwidth |
|---|---|---|---|---|---|---|---|
| polar.pg.x4.medium | 2 vCPUs, 8 GB | 50 TB | 800 | 1 Gbps | 8,000 | 16,000 | 1 Gbps |
| polar.pg.x8.medium | 2 vCPUs, 16 GB | 100 TB | 1,600 | 5 Gbps | 8,000 | 16,000 | 1 Gbps |
| polar.pg.x4.large | 4 vCPUs, 16 GB | 100 TB | 1,600 | 10 Gbps | 32,000 | 64,000 | 4 Gbps |
| polar.pg.x8.large | 4 vCPUs, 32 GB | 100 TB | 3,200 | 10 Gbps | 32,000 | 64,000 | 4 Gbps |
| polar.pg.x4.xlarge | 8 vCPUs, 32 GB | 100 TB | 3,200 | 10 Gbps | 50,000 | 128,000 | 8 Gbps |
| polar.pg.x8.xlarge | 8 vCPUs, 64 GB | 100 TB | 3,200 | 10 Gbps | 50,000 | 160,000 | 10 Gbps |
| polar.pg.x4.2xlarge | 16 vCPUs, 64 GB | 100 TB | 3,200 | 10 Gbps | 64,000 | 256,000 | 16 Gbps |
| polar.pg.x8.2xlarge | 16 vCPUs, 128 GB | 100 TB | 12,800 | 10 Gbps | 64,000 | 256,000 | 16 Gbps |
| polar.pg.x4.4xlarge | 32 vCPUs, 128 GB | 100 TB | 12,800 | 10 Gbps | 80,000 | 256,000 | 16 Gbps |
| polar.pg.x8.4xlarge | 32 vCPUs, 256 GB | 300 TB | 25,600 | 10 Gbps | 80,000 | 384,000 | 24 Gbps |
| polar.pg.x4.6xlarge | 48 vCPUs, 192 GB | 100 TB | 12,800 | 10 Gbps | 100,000 | 256,000 | 16 Gbps |
| polar.pg.x8.6xlarge | 48 vCPUs, 384 GB | 300 TB | 25,600 | 10 Gbps | 100,000 | 384,000 | 24 Gbps |
| polar.pg.x4.8xlarge | 64 vCPUs, 256 GB | 300 TB | 25,600 | 10 Gbps | 120,000 | 384,000 | 24 Gbps |
| polar.pg.x8.8xlarge | 64 vCPUs, 512 GB | 500 TB | 36,000 | 10 Gbps | 120,800 | 409,600 | 24 Gbps |
| polar.pg.x8.12xlarge | 88 vCPUs, 710 GB | 500 TB | 36,000 | 25 Gbps | 150,000 | 512,000 | 32 Gbps |
| polar.pg.x8.15xlarge | 120 vCPUs, 920 GB | 500 TB | 36,000 | 25 Gbps | 150,000 | 512,000 | 32 Gbps |
General-purpose
| Node specifications | vCPUs and memory | Maximum storage capacity | Maximum connections¹ | Internal bandwidth | Maximum IOPS for PSL4 | Maximum IOPS for PSL5 | I/O bandwidth |
|---|---|---|---|---|---|---|---|
| polar.pg.g2.medium | 2 vCPUs, 4 GB | 20 TB | 500 | 1 Gbps | 5,000 | 10,000 | 1 Gbps |
| polar.pg.g4.medium | 2 vCPUs, 8 GB | 50 TB | 800 | 1 Gbps | 5,000 | 16,000 | 1 Gbps |
| polar.pg.g2.large | 4 vCPUs, 8 GB | 50 TB | 1,000 | 10 Gbps | 16,000 | 32,000 | 2 Gbps |
| polar.pg.g4.large | 4 vCPUs, 16 GB | 100 TB | 1,600 | 10 Gbps | 16,000 | 64,000 | 4 Gbps |
| polar.pg.g2.xlarge | 8 vCPUs, 16 GB | 100 TB | 2,000 | 10 Gbps | 32,000 | 96,000 | 4 Gbps |
| polar.pg.g4.xlarge | 8 vCPUs, 32 GB | 100 TB | 3,200 | 10 Gbps | 32,000 | 128,000 | 8 Gbps |
| polar.pg.g8.xlarge | 8 vCPUs, 64 GB | 100 TB | 3,200 | 10 Gbps | 42,000 | 160,000 | 10 Gbps |
| polar.pg.g2.2xlarge | 16 vCPUs, 32 GB | 100 TB | 3,200 | 10 Gbps | 48,000 | 192,000 | 10 Gbps |
¹ Maximum connections reflects the max_connections parameter value. In minor engine version V1.1.7 (December 2020), the maximum connections for some specifications were adjusted to the values shown above. These adjusted values apply automatically to clusters created after this release. For existing clusters, change the node specifications to update the maximum connections.
IOPS and I/O bandwidth limits apply per node. Each node in a cluster operates independently, so adding nodes scales total cluster throughput linearly. For example, a cluster of four Dedicated polar.pg.x4.xlarge nodes (one read/write node and three read-only nodes) on PSL5 delivers a combined maximum of 4 × 128,000 = 512,000 IOPS and 4 × 8 = 32 Gbps of I/O bandwidth.
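The linear scaling above amounts to simple per-node arithmetic. The sketch below (hypothetical helper name; the constants are the Dedicated polar.pg.x4.xlarge PSL5 figures from the table) shows how a cluster-level ceiling is derived:

```python
# Per-node limits for Dedicated polar.pg.x4.xlarge on PSL5 (from the table above).
PER_NODE_MAX_IOPS = 128_000
PER_NODE_IO_BANDWIDTH_GBPS = 8

def cluster_throughput(node_count: int) -> tuple[int, int]:
    """Combined (max IOPS, max I/O bandwidth in Gbps) for the whole cluster.

    Each node operates independently, so per-node limits scale linearly.
    """
    return (node_count * PER_NODE_MAX_IOPS,
            node_count * PER_NODE_IO_BANDWIDTH_GBPS)

# One read/write node plus three read-only nodes:
iops, gbps = cluster_throughput(4)
print(iops, gbps)  # 512000 32
```

Swap in the per-node values for your chosen specification and storage class to estimate other cluster sizes.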
Maximum connections
The maximum connections value is the upper limit for concurrent connections to a PolarDB for PostgreSQL cluster. When this limit is reached, new connections are rejected or time out.
The actual number of connections a cluster can sustain depends on the memory consumed per connection, which varies by application. To check and manage connections:
Query the configured limit:

```sql
SHOW max_connections;
```

Query the current number of active connections:

```sql
SELECT count(1) FROM pg_stat_activity;
```
Keep the number of active connections below the recommended ceiling: `LEAST({DBInstanceClassMemory/11MB}, 5000)`.
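To get a feel for the ceiling formula, the helper below (a hypothetical name, assuming the instance memory is supplied in GB and converted to MB) evaluates LEAST({DBInstanceClassMemory/11MB}, 5000) for two example node sizes:

```python
# Hypothetical helper: evaluates LEAST({DBInstanceClassMemory/11MB}, 5000),
# assuming the instance memory is given in GB.
def recommended_connection_ceiling(memory_gb: float) -> int:
    memory_mb = memory_gb * 1024
    return int(min(memory_mb / 11, 5000))

# A 16 GB node vs. a 512 GB node:
print(recommended_connection_ceiling(16))   # 1489
print(recommended_connection_ceiling(512))  # 5000 (capped)
```

The 5,000 cap means that beyond roughly 54 GB of memory, the recommended ceiling stops growing even though max_connections may be set higher.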
If your application needs more connections than the current limit allows, switch to a node specification with more memory.