Point-in-time restoration (PITR) restores all data from a PolarDB cluster to a new cluster at any point within the last 7 days. Use this method to recover from data corruption or accidental deletion. After you verify the restored data, migrate it back to the source cluster.
Full restoration supports two methods: restoring from a backup set or restoring to a point in time. This topic covers point-in-time restoration only.
Usage notes
The restored cluster inherits the data and account information from the source cluster, but not the parameter settings.
Prerequisites
Before you begin, ensure that you have:
A PolarDB cluster with backup enabled
(Optional) Cross-region backup enabled, if you want to restore data to a different region
Restore a cluster to a point in time
Log on to the PolarDB console. In the left navigation pane, click Clusters. Select the region where your cluster is located, then click the cluster ID.
In the left navigation pane, choose Settings and Management > Backup and Restoration.
Start point-in-time restoration:
Same-region restoration: On the Backup and Restoration page, click Point-in-time Restoration.

Cross-region restoration:

On the Backup and Restoration page, select the region that contains the backup data.
Click Point-in-time Restoration.
On the Clone Instance page, select a Product Type for the new cluster.
Subscription: Pay for compute nodes upfront. Storage is billed hourly based on actual data volume.
Pay-as-you-go: No upfront payment. Both compute and storage are billed hourly. Select this option if you need the cluster temporarily — release it when done to reduce costs.
Configure the following parameters, then complete payment.
| Parameter | Description |
|---|---|
| Operation type | Select Restore to Point in Time. |
| Region | Select the destination region. If cross-region backup is enabled, choose the source or destination region. If not enabled, this defaults to the source cluster's region. |
| Backup point in time | Select the point in time to restore to. Any time within the last 7 days is supported. |
| Primary zone | Select the primary zone. In regions with two or more zones, PolarDB automatically replicates data to the secondary zone for disaster recovery. |
| Network type | Fixed to VPC. No action needed. |
| VPC network and vSwitch | Select the Virtual Private Cloud (VPC) and vSwitch. Use the same VPC and vSwitch as the source cluster to enable internal network communication with connected ECS instances. |
| Compatibility | Automatically set to match the source cluster (for example, PostgreSQL 14). No action needed. |
| Product version | Automatically set to match the source cluster's product edition. No action needed. |
| Series | Automatically set to match the source cluster's edition. No action needed. |
| Subseries | Select a specification type: Dedicated (exclusively allocated compute resources, better performance stability) or General-purpose (shared idle resources across clusters, better cost-effectiveness). |
| CPU architecture | Automatically set to match the source cluster. No action needed. |
| Node specifications | Select node specifications. To ensure the restored cluster runs properly, select specifications that are higher than those of the source cluster. See Product specifications. |
| Number of nodes | Defaults to 2 (one primary node and one secondary node). After creation, you can add up to 15 additional secondary nodes. See Add or remove nodes. |
| Database proxy type | Fixed to Dedicated Enterprise Edition. No action needed. |
| Enable hot standby cluster | Enabled: Deploys a primary cluster and a hot standby storage cluster in the same region, with 6 data replicas in total (3 per cluster) and a higher SLA. Disabled: Deploys only the primary cluster with 3 data replicas. The storage unit price is half that of the enabled option, but the SLA is lower. |
| Storage type | Enterprise Edition: Choose PSL5 (high performance, reliability, and availability) or PSL4 (uses smart-SSD technology to compress and decompress data at the physical SSD level, lowering storage costs while keeping the performance impact manageable). You cannot change the storage type after cluster creation. Standard Edition: Choose from five ESSD types: PL0 ESSD (performance level 0), PL1 ESSD (5× the IOPS and about 2× the throughput of PL0), PL2 ESSD (about 2× the IOPS and throughput of PL1), PL3 ESSD (up to 10× the IOPS and 5× the throughput of PL2, suitable for highly concurrent I/O workloads), or ESSD AutoPL disk (decouples IOPS from capacity for flexible configuration and lower TCO). For a comparison, see How to choose between PSL4 and PSL5 and ESSD cloud disks. Important: When a cloud disk reaches full capacity, it becomes read-only. |
| Storage billing type | Pay-by-capacity (Pay-as-you-go): Serverless. Storage scales automatically, and you are charged only for the actual data volume. Pay-by-space (Subscription): Prepay for a fixed amount of storage. Available only when Product Type is Subscription. See Storage billing methods. |
| Storage space | The prepaid storage capacity (50 GB to 500 TB for Enterprise Edition; minimum increment: 10 GB). Available only when Product Type is Subscription and Storage billing type is Pay-by-space (Subscription). |
| Cluster name | Enter a name of 2 to 128 characters that starts with a letter or a Chinese character. Digits, periods, underscores, and hyphens are allowed. Leave this blank to auto-generate a name. |
| Subscription duration | Select the subscription term. Available only when Product Type is Subscription. |
| Quantity | Select the number of clusters to create. |

Complete payment based on your selected Product Type:
Pay-as-you-go: Click Buy Now.
Subscription: Click Buy Now, then confirm the order and payment method on the Payment page, and click Pay.
Cluster creation takes 10–15 minutes. After creation, find the new cluster in the Clusters list.
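The cluster-name rule in the parameter table (2 to 128 characters, starting with a letter or a Chinese character, then letters, digits, periods, underscores, or hyphens) can be pre-checked before submitting the form. The regular expression below is our own reading of that rule, not an official validator; the console performs its own validation.

```python
import re

# Hypothetical validator for the cluster-name rule described above:
# 2-128 characters, starting with a letter or a Chinese character;
# digits, periods, underscores, and hyphens are allowed afterwards.
NAME_RE = re.compile(r"^[A-Za-z\u4e00-\u9fff][A-Za-z0-9\u4e00-\u9fff._-]{1,127}$")

def is_valid_cluster_name(name: str) -> bool:
    return NAME_RE.fullmatch(name) is not None

print(is_valid_cluster_name("restore-test_01"))  # True
print(is_valid_cluster_name("1cluster"))         # False: must not start with a digit
print(is_valid_cluster_name("a"))                # False: shorter than 2 characters
```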
How long restoration takes
Total restoration time = time to restore from backup set (snapshot) + time to apply physical logs.
If significant time has passed since the last backup and the write workload was heavy, applying physical logs takes longer. For more information, see Redo log backup.
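The formula above can be sketched with a short calculation. The throughput rates below are invented placeholders for illustration only; actual restore speed depends on cluster specifications and workload.

```python
def estimate_restore_minutes(snapshot_gb: float,
                             redo_log_gb: float,
                             snapshot_gb_per_min: float = 20.0,   # assumed rate
                             log_apply_gb_per_min: float = 5.0):  # assumed rate
    """Total restoration time = snapshot restore time + physical-log apply time."""
    snapshot_minutes = snapshot_gb / snapshot_gb_per_min
    log_minutes = redo_log_gb / log_apply_gb_per_min
    return snapshot_minutes + log_minutes

# A restore point far from the last backup means more redo log to apply:
print(estimate_restore_minutes(500, 10))   # 27.0 minutes
print(estimate_restore_minutes(500, 200))  # 65.0 minutes
```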
API reference
| API | Description |
|---|---|
| CreateDBCluster | Restores data from the source cluster to a new PolarDB cluster. Set CreationOption to CloneFromPolarDB. |
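For automation, the same restore can be issued through the CreateDBCluster API. The sketch below only assembles the request parameters as a plain dict and does not call the API; parameter names other than CreationOption (such as SourceResourceId and CloneDataPoint) reflect our reading of the OpenAPI reference and should be verified against the current API documentation before use.

```python
# Hedged sketch: assemble CreateDBCluster parameters for a point-in-time clone.
# Values below (cluster ID, region, node class) are illustrative placeholders.
def build_pitr_request(source_cluster_id: str, restore_time_utc: str) -> dict:
    return {
        "Action": "CreateDBCluster",
        "CreationOption": "CloneFromPolarDB",   # restore/clone from a PolarDB cluster
        "SourceResourceId": source_cluster_id,  # assumed name: the source cluster ID
        "CloneDataPoint": restore_time_utc,     # assumed name: point in time (UTC)
        "DBType": "PostgreSQL",                 # must match the source cluster
        "DBVersion": "14",
        "DBNodeClass": "polar.pg.x4.medium",    # illustrative node specification
        "PayType": "Postpaid",                  # pay-as-you-go
        "RegionId": "cn-hangzhou",
        "VPCId": "vpc-bp1example",
        "VSwitchId": "vsw-bp1example",
    }

params = build_pitr_request("pc-bp1example", "2024-05-20T02:00Z")
print(params["CreationOption"])  # CloneFromPolarDB
```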