
PolarDB: Overview of upgrading an ApsaraDB RDS for PostgreSQL instance to a PolarDB for PostgreSQL cluster

Last Updated: Mar 28, 2026

PolarDB supports upgrading an ApsaraDB RDS for PostgreSQL instance to a PolarDB for PostgreSQL cluster using logical migration through Data Transmission Service (DTS). The destination cluster inherits the accounts, databases, and IP address whitelists of the source instance. Cross-version upgrades are supported — for example, you can upgrade an ApsaraDB RDS for PostgreSQL 11 instance to a PolarDB for PostgreSQL 14 cluster.

How the upgrade works

The upgrade uses DTS logical migration:

  1. DTS creates a data synchronization task that migrates the schema and full data from the source ApsaraDB RDS for PostgreSQL instance to the destination PolarDB for PostgreSQL cluster.

  2. DTS continuously synchronizes incremental data to keep the destination cluster in sync.

  3. When you are ready to switch over, the system switches traffic from the source instance to the destination cluster.

Benefits

  • No application changes: The switchover exchanges endpoints between the source instance and the destination cluster, so applications connect to PolarDB without updating connection settings.

  • No migration fees: The migration itself is free. You are charged only for the destination PolarDB cluster.

  • No data loss: All data is fully synchronized before the switchover.

  • Minimal downtime: Incremental data synchronization keeps service downtime under 10 minutes.

  • Hot migration: Only one transient disconnection occurs, at the moment traffic switches from the source instance to the destination cluster.

  • Rollback support: If a migration fails, you can roll it back within 10 minutes.

Prerequisites

Before starting an upgrade, make sure the following conditions are met:

  • The source ApsaraDB RDS for PostgreSQL instance version is the same as or earlier than the destination PolarDB for PostgreSQL cluster version. Downgrading is not supported. For example, you cannot upgrade an ApsaraDB RDS for PostgreSQL 14 instance to a PolarDB for PostgreSQL 11 cluster.

  • The source instance has no triggers. If a trigger exists, delete it and click Continue, or click Cancel and manually create a DTS data synchronization task. For more information, see Synchronize data from an ApsaraDB RDS for PostgreSQL instance to a PolarDB for PostgreSQL cluster.

  • SSL and transparent data encryption (TDE) are both disabled on the source instance endpoints. Instances with SSL or TDE enabled cannot be upgraded.

  • The source ApsaraDB RDS for PostgreSQL instance must also meet the following conditions. If a condition is not met, it is displayed on the PolarDB upgrade page:

    • Databases exist in the source ApsaraDB RDS for PostgreSQL instance.

    • Account names in the source instance use formats supported by PolarDB for PostgreSQL.

    • The AliyunServiceRoleForPolarDB service-linked role has been created.

    • The values of max_replication_slots and max_wal_senders each exceed the total number of two-way DTS links required, which equals the number of databases in the instance.

    • The instance has no more than 30 databases. Up to 30 two-way DTS links can be created.

    • The wal_level kernel parameter is set to logical.
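You can check the parameter-related prerequisites directly on the source instance. The queries below are a sketch; the expected values follow the conditions listed above.

    -- Logical replication must be enabled.
    SHOW wal_level;                  -- expected: logical

    -- Slot and sender capacity must each exceed the number of
    -- databases (one two-way DTS link is created per database).
    SHOW max_replication_slots;
    SHOW max_wal_senders;

    -- Count the databases in the instance (at most 30 are supported;
    -- template databases are excluded).
    SELECT count(*) FROM pg_database WHERE datistemplate = false;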

Limitations

General limitations

  • Cross-region migration is not supported.

  • Source instance parameters cannot be changed during migration.

  • During migration, DTS automatically creates an account in the destination cluster with a name in the dts_clone% format and a randomly generated password. Do not change or delete this account until migration is complete.
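As a sketch, you can list the account that DTS creates in the destination cluster to confirm it exists before the migration completes (the pattern matches the dts_clone% naming described above):

    -- List accounts created by DTS in the destination cluster.
    SELECT rolname FROM pg_roles WHERE rolname LIKE 'dts_clone%';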

Source instance requirements

Tables that you want to synchronize must have a PRIMARY KEY or UNIQUE constraint, and the constrained columns must contain unique values. Otherwise, the destination database may contain duplicate records.

If you select individual tables as the sync scope and need to rename them or their columns in the destination, a single task can sync up to 5,000 tables. For more than 5,000 tables, configure multiple tasks or sync the entire database instead.
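To find tables that would violate the constraint requirement, you can run a query along these lines in the source database (a sketch; adjust the excluded schemas as needed):

    -- List user tables that have neither a PRIMARY KEY
    -- nor a UNIQUE constraint.
    SELECT n.nspname AS schema_name, c.relname AS table_name
    FROM pg_class c
    JOIN pg_namespace n ON n.oid = c.relnamespace
    WHERE c.relkind = 'r'
      AND n.nspname NOT IN ('pg_catalog', 'information_schema')
      AND NOT EXISTS (
        SELECT 1 FROM pg_constraint con
        WHERE con.conrelid = c.oid AND con.contype IN ('p', 'u')
      );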

Write-ahead logging (WAL) requirements:

  • Incremental sync only: retain WAL logs for more than 24 hours.

  • Full + incremental sync: retain WAL logs for at least 7 days.

After full data synchronization completes, you can reduce the retention period to more than 24 hours.

Do not modify the endpoints or zone of the source instance during synchronization. Doing so causes the synchronization task to fail.

Long-running transactions can cause WAL log accumulation in the source database, potentially exhausting disk space.
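Long-running transactions can be spotted before they cause WAL accumulation. The following query is a sketch; the one-hour threshold is an arbitrary example:

    -- Show transactions that have been open for more than one hour.
    SELECT pid, usename, state, now() - xact_start AS duration
    FROM pg_stat_activity
    WHERE xact_start IS NOT NULL
      AND now() - xact_start > interval '1 hour'
    ORDER BY xact_start;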

SQL statement support

DML: INSERT, UPDATE, and DELETE are supported.

DDL: The following DDL statements are supported if you use a privileged account on the source instance and the instance runs minor engine version 20210228 or later. For information about updating the minor engine version, see Update the minor engine version.

  • CREATE TABLE and DROP TABLE

  • ALTER TABLE (including RENAME TABLE, ADD COLUMN, ADD COLUMN DEFAULT, ALTER COLUMN TYPE, DROP COLUMN, ADD CONSTRAINT, ADD CONSTRAINT CHECK, and ALTER COLUMN DROP DEFAULT)

  • TRUNCATE TABLE (source instance runs PostgreSQL 11 or later)

  • CREATE INDEX ON TABLE

DDL statements not synced:

  • DDL statements containing CASCADE or RESTRICT

  • DDL statements from sessions that have run SET session_replication_role = replica

  • DDL statements committed in the same transaction as DML statements

  • DDL statements for objects outside the sync scope
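For example, the following hypothetical transaction mixes DDL and DML, so the ALTER TABLE statement would not be synchronized (table t is illustrative only):

    BEGIN;
    INSERT INTO t VALUES (1);            -- DML
    ALTER TABLE t ADD COLUMN note text;  -- DDL in the same transaction: not synced
    COMMIT;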

Other limitations

  • Each data synchronization task syncs only one database. For multiple databases, create a separate task for each.

  • If you create a table or use RENAME during schema-level synchronization, run the following statement before writing data to that table. Run it during off-peak hours, and do not lock the table while it runs, to avoid deadlocks.

    ALTER TABLE schema.table REPLICA IDENTITY FULL;

    Replace schema and table with the actual schema name and table name.

  • DTS creates the following temporary tables in the source database. Do not delete them during synchronization — they are automatically removed after the DTS instance is released: public.dts_pg_class, public.dts_pg_attribute, public.dts_pg_type, public.dts_pg_enum, public.dts_postgres_heartbeat, public.dts_ddl_command, public.dts_args_session

  • DTS adds a heartbeat table named dts_postgres_heartbeat to the source database to maintain synchronization latency accuracy.

  • DTS creates a replication slot prefixed with dts_sync_ in the source database. Historical replication slots are automatically cleared every 120 minutes.

    DTS automatically deletes the replication slot when the instance is released. If you change the database password or delete the DTS IP address whitelist entries during synchronization, the replication slot cannot be deleted automatically — delete it manually from the source database to avoid storage accumulation and potential source instance unavailability. If the synchronization task is released or fails, DTS clears the replication slot automatically. After a primary/secondary switchover in the source PostgreSQL database, log on to the secondary database to clear the replication slot manually.
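    As a sketch, you can inspect leftover slots in the source database and drop one manually (the slot name below is hypothetical):

        -- List DTS replication slots in the source database.
        SELECT slot_name, active
        FROM pg_replication_slots
        WHERE slot_name LIKE 'dts_sync_%';

        -- Drop an inactive leftover slot by name.
        SELECT pg_drop_replication_slot('dts_sync_example');
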
  • During full data synchronization, concurrent INSERT operations cause fragmentation in the tables of the destination database. After full data synchronization is complete, the tablespace of the destination database is larger than that of the source database.

  • For table-level data synchronization, if data is written to the destination database only through DTS, you can use Data Management (DMS) to perform online DDL operations. For more information, see Perform lock-free DDL operations.

  • Write data to the destination database only through DTS during migration. Writing through other tools risks data inconsistency and potential data loss if you later run online DDL operations through Data Management (DMS).

  • DTS does not validate metadata such as sequences. Check sequence validity manually.

  • After switching workloads to the destination cluster, new sequence values do not automatically continue from the maximum value in the source. Before switching, query the maximum sequence values in the source and set them as the initial values in the destination. Use the following statement to query the maximum sequence values:

    do language plpgsql $$
    declare
      nsp name;
      rel name;
      val int8;
    begin
      -- Iterate over all sequences (relkind = 'S') in the database.
      for nsp, rel in
        select nspname, relname
        from pg_class t2, pg_namespace t3
        where t2.relnamespace = t3.oid and t2.relkind = 'S'
      loop
        -- Read the current value of each sequence.
        execute format($_$select last_value from %I.%I$_$, nsp, rel) into val;
        -- Print a setval statement to run in the destination cluster.
        raise notice '%',
          format($_$select setval('%I.%I'::regclass, %s);$_$, nsp, rel, val + 1);
      end loop;
    end;
    $$;

Billing

The upgrade feature is currently in the free-trial phase. No fees are charged for synchronization tasks during this period.

  • Schema synchronization and full data synchronization: free for 30 days after the task is created. After 30 days, the task is cancelled automatically with no charges.

  • Incremental data synchronization: free for 30 days after the task is created. After 30 days, the task is cancelled automatically with no charges.

You are charged for the destination PolarDB cluster regardless of the free-trial status of the synchronization tasks.

To check the remaining validity period of a synchronization task, go to the PolarDB console, open the Basic Information page of your cluster, and find the RDS Migration section.

Switchover with endpoints

When upgrading, select Switch with Endpoints (Connection Changes Not Required) to exchange endpoints between the source ApsaraDB RDS for PostgreSQL instance and the destination PolarDB cluster. Applications connect to the PolarDB cluster without any configuration changes.

Before using this option, note the following:

  • Only endpoints are exchanged — vSwitches and virtual IP addresses are not.

  • Endpoint exchange requires both the source instance and the destination cluster to have the corresponding endpoint types. By default, only primary endpoints in the internal network can be exchanged.

  • Primary endpoints are always exchanged. You can also choose to exchange:

    • Dedicated proxy endpoints (source) with default cluster endpoints (destination)

    • Read-only endpoints (source) with custom endpoints (destination)

  • A PolarDB cluster supports up to 7 cluster endpoints, so up to 7 dedicated proxy or read-only endpoints from the source instance can be exchanged.

  • If you need endpoints that do not yet exist, create them before the switchover. For PolarDB cluster endpoints, see View or apply for an endpoint. For ApsaraDB RDS for PostgreSQL endpoints, see Configure endpoints for an RDS instance.

  • Ports are not exchanged. Make sure source and destination ports match, except for custom endpoint ports. To modify a port, see View and manage instance endpoints and ports.

  • After the endpoint exchange, DNS cache expiration may temporarily cause connection failures or read-only connections to the PolarDB cluster. Flush the DNS cache on your server to resolve this.

  • After the endpoint exchange, use the latest version of DMS and the cluster ID (not the endpoint) to log on to the PolarDB database.