In multi-tenant deployments, a single high-traffic tenant can monopolize the CPU, memory, and I/O resources of a data node (DN), degrading query performance for every other tenant on that node. PolarDB for PostgreSQL Distributed Edition lets you migrate a hot spot tenant's data to a dedicated node online, without blocking reads or writes, so all tenants get predictable performance.
## How it works
By default, PolarDB for PostgreSQL Distributed Edition spreads tenant data evenly across all DNs in the cluster so that nodes share compute and storage. When one tenant grows disproportionately large, that shared model creates three problems:
- **Resource contention**: The hot spot tenant's queries consume excessive CPU, memory, and I/O, causing a significant drop in query performance for co-located tenants.
- **Performance fluctuation**: Burst traffic from the hot spot tenant spikes query latency for other tenants.
- **Storage imbalance**: The hot spot tenant's large data volume causes uneven storage usage across nodes, increasing management complexity.
Tenant isolation resolves these problems by physically moving the hot spot tenant's data to a dedicated node with sufficient resources. The migration runs online and does not block DML or DQL operations.
Isolation works differently depending on how your tenants are partitioned:
| Partitioning model | Isolation approach |
|---|---|
| Horizontal splitting (sharded by `tenant_id`) | Create a dedicated shard for the tenant, then migrate that shard to the target node |
| Vertical partitioning (independent schemas) | Migrate the entire schema to the target node |
## Before you begin
- Migration increases CPU, I/O, and network load on both the source and destination nodes. Run migrations during off-peak hours.
- Confirm that the destination node has enough storage space for the tenant's data before migrating.
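As a quick way to gauge headroom, you can total the existing shard sizes per node with the `polar_cluster_shards` view used later in this topic (a sketch; column names are assumed from that later example):

```sql
-- Approximate distributed-shard storage used on each node
SELECT nodename, nodeport,
       pg_size_pretty(SUM(shard_size)) AS total_shard_size
FROM polar_cluster_shards
GROUP BY nodename, nodeport
ORDER BY SUM(shard_size) DESC;
```

Compare the result for the destination node against its available disk space before starting the migration.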
## Horizontal splitting (based on shards)
For distributed tables partitioned by a `tenant_id` column, isolation is a two-step process:
1. Create a dedicated shard for the hot spot tenant using `isolate_tenant_to_new_shard`. After this step, the new shard exists but still resides on the original node.
2. Move the shard to the destination node using `polar_cluster_move_shard_placement`.
### Step 1: Create a dedicated shard
`isolate_tenant_to_new_shard` creates a new, separate shard for the data of a specific tenant.
#### Syntax

```sql
SELECT isolate_tenant_to_new_shard('<distributed_table_name>', '<tenant_ID>', 'CASCADE');
```

#### Example
Create and populate a distributed table:
```sql
CREATE TABLE orders (
    order_id     BIGSERIAL,
    store_id     INT NOT NULL,          -- Distribution key (tenant ID)
    product_name VARCHAR(255),
    amount       DECIMAL(10, 2),
    order_time   TIMESTAMPTZ DEFAULT NOW(),
    PRIMARY KEY (order_id, store_id)    -- Distribution key must be part of the primary key
);

-- Distribute the table by store_id
SELECT create_distributed_table('orders', 'store_id');

-- Insert sample data for two tenants
INSERT INTO orders (store_id, product_name, amount)
SELECT 101, 'Laptop - ' || i, 4999.00 + i FROM generate_series(1, 5) i;
INSERT INTO orders (store_id, product_name, amount)
SELECT 102, 'Mechanical Keyboard - ' || i, 899.00 + i FROM generate_series(1, 5) i;
```

Create a dedicated shard for tenant `102`:

```sql
SELECT isolate_tenant_to_new_shard('orders', '102', 'CASCADE');
```

The function returns the ID of the new shard:

```
 isolate_tenant_to_new_shard
-----------------------------
                      102108
```

Shard `102108` is now created but still resides on the original node. Proceed to Step 2 to move it.
### Step 2: Move the shard to the destination node
`polar_cluster_move_shard_placement` migrates a shard from a source node to a destination node.
#### Syntax

```sql
SELECT polar_cluster_move_shard_placement(
    <shard_ID>,
    '<source_node_IP>',
    <source_node_port>,
    '<destination_node_IP>',
    <destination_node_port>
);
```

Choose a migration strategy based on your goal:
#### Migrate the hot spot tenant's shard
Move only the hot spot tenant's dedicated shard to the destination node, leaving all other tenants on the original node.
1. Find the current location of the shard:

   ```sql
   -- Replace 102108 with the shard ID returned in Step 1
   SELECT nodename, nodeport
   FROM pg_dist_shard_placement
   WHERE shardid = 102108;
   ```

   Output:

   ```
    nodename | nodeport
   ----------+----------
    10.0.0.1 |     5432
   ```

2. Move the shard to the destination node:

   ```sql
   SELECT polar_cluster_move_shard_placement(102108, '10.0.0.1', 5432, '10.0.0.2', 5432);
   ```
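After the move completes, you can re-run the placement query to verify the new location. The placement should now report the destination node:

```sql
SELECT nodename, nodeport
FROM pg_dist_shard_placement
WHERE shardid = 102108;
-- Expected: the destination node, for example 10.0.0.2 | 5432
```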
#### Migrate non-hot spot tenant shards off the node
To dedicate an entire node to the hot spot tenant, migrate all other shards off that node instead.
1. List the shards on the hot spot node that belong to other tenants:

   ```sql
   -- Replace 'orders', 102108, and the node address with your own
   -- table name, hot spot shard ID, nodename, and nodeport
   SELECT shardid, shard_size
   FROM polar_cluster_shards
   WHERE table_name::text = 'orders'
     AND shardid <> 102108
     AND nodename = '10.0.0.1'
     AND nodeport = 5432
   ORDER BY shard_size ASC, shardid ASC;
   ```

   Output:

   ```
    shardid | shard_size
   ---------+------------
     102104 |       8192
     102105 |      16384
     102106 |   83820544
     102107 |  256344064
   ```

2. Move each shard to the destination node:

   ```sql
   SELECT polar_cluster_move_shard_placement(102104, '10.0.0.1', 5432, '10.0.0.2', 5432);
   SELECT polar_cluster_move_shard_placement(102105, '10.0.0.1', 5432, '10.0.0.2', 5432);
   -- Continue for each shard in the list
   ```
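If the node hosts many shards, writing each call by hand is tedious. As a sketch, you can generate the move statements with PostgreSQL's `format()` function from the same shard listing (the table name `orders` and node addresses below match this topic's example; substitute your own), then review and run them one by one:

```sql
-- Generate one move statement per shard to relocate (review before running)
SELECT format(
    'SELECT polar_cluster_move_shard_placement(%s, %L, %s, %L, %s);',
    shardid, nodename, nodeport, '10.0.0.2', 5432
)
FROM polar_cluster_shards
WHERE table_name::text = 'orders'
  AND shardid <> 102108
  AND nodename = '10.0.0.1'
  AND nodeport = 5432
ORDER BY shard_size ASC, shardid ASC;
```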
## Vertical partitioning (based on schemas)
When each tenant already has its own schema, isolation is a single step: move the entire schema to the destination node.
#### Syntax

```sql
SELECT polar_cluster_schema_move(
    '<schema_name>',
    '<destination_node_IP>',
    <destination_node_port>
);
```

#### Example
Create and distribute a schema:
```sql
CREATE SCHEMA company1;
SELECT polar_cluster_schema_distribute('company1');
CREATE TABLE company1.users (id SERIAL, name TEXT);
```

Confirm that queries for this tenant route to a single node:

```sql
EXPLAIN SELECT * FROM company1.users WHERE id = 1;
```

Output:

```
                                   QUERY PLAN
--------------------------------------------------------------------------------
 Custom Scan (PolarCluster Adaptive)  (cost=0.00..0.00 rows=0 width=0)
   Task Count: 1
   Tasks Shown: All
   ->  Task
         Node: host=10.0.0.1 port=5432 dbname=postgres
         ->  Seq Scan on users_102010 users  (cost=0.00..25.88 rows=6 width=36)
               Filter: (id = 1)
```

The query plan shows `Task Count: 1`, confirming that all queries for `company1` hit only `10.0.0.1:5432`.

Migrate the schema to the destination node:

```sql
SELECT polar_cluster_schema_move('company1', '10.0.0.2', 5432);
```
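As a quick check after the migration, re-run the `EXPLAIN` from the previous step; the task node in the plan should now point at the destination:

```sql
EXPLAIN SELECT * FROM company1.users WHERE id = 1;
-- The plan's "Node:" line should now read host=10.0.0.2 port=5432
```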
## What's next
- **Multi-tenant management**: Learn how PolarDB for PostgreSQL Distributed Edition distributes tenant data across nodes, and how to set up horizontal splitting and vertical partitioning.