Use Data Transmission Service (DTS) to migrate data from a PolarDB for PostgreSQL cluster to an RDS for PostgreSQL instance with minimal downtime.
DTS supports three migration types that you can combine based on your goal:
| Migration type | What it does | When to use |
|---|---|---|
| Schema migration | Copies schema definitions (tables, keys, constraints) to the destination | Always include when the destination schema does not exist yet |
| Full data migration | Copies all existing data at a point in time | Required for any migration |
| Incremental data migration | Continuously replicates changes after the full migration | Add this for near-zero-downtime migration |
For a full migration with minimal downtime, select all three types. For a one-time migration where you can tolerate a maintenance window, select schema migration and full data migration only.
Prerequisites
Before you begin, make sure you have:
A source PolarDB for PostgreSQL cluster — see Create a Database Cluster
A destination RDS for PostgreSQL instance with storage space larger than the space used by the source database — see Create an RDS for PostgreSQL Instance
The `wal_level` parameter on the source cluster set to `logical` — see Configure Cluster Parameters
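You can verify the parameter from a psql session before configuring the task. The `ALTER SYSTEM` statement below is a sketch for self-managed PostgreSQL; on a managed PolarDB cluster you may not have permission to run it, so change the parameter through the console as described in Configure Cluster Parameters instead:

```sql
-- Check the current WAL level; it must report "logical"
-- before incremental migration can work.
SHOW wal_level;

-- On self-managed PostgreSQL, change it like this
-- (takes effect only after a restart).
ALTER SYSTEM SET wal_level = logical;
```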
Important notes
During schema migration, DTS migrates foreign keys from the source database to the destination database.
During full and incremental migration, DTS temporarily disables constraint checks and foreign key cascade operations at the session level. If cascading updates or deletes occur in the source database while the task is running, data inconsistency may occur.
Limitations
Source database requirements
Tables to be migrated must have a primary key or a non-null unique index.
Long-running transactions cause write-ahead log (WAL) accumulation in the source database. If disk space runs low during an incremental migration, commit or terminate long-running transactions.
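The two requirements above can be checked with catalog queries before you start. This is a sketch; the `public` schema and the one-hour threshold are assumptions you should adjust:

```sql
-- Find ordinary tables in the "public" schema that have neither
-- a primary key nor a unique index, and so cannot be migrated.
SELECT c.relname
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname = 'public'
  AND c.relkind = 'r'
  AND NOT EXISTS (
    SELECT 1 FROM pg_index i
    WHERE i.indrelid = c.oid AND (i.indisprimary OR i.indisunique)
  );

-- Spot long-running transactions that may hold back WAL cleanup.
SELECT pid, state, xact_start, query
FROM pg_stat_activity
WHERE xact_start < now() - interval '1 hour';
```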
The PolarDB for PostgreSQL cluster must support and enable Logical Replication Slot Failover to survive primary/secondary switchovers during migration. If the cluster's database engine is PostgreSQL 14 (which does not support this feature), a high availability (HA) switchover during migration causes the migration instance to fail unrecoverably.
Do not perform DDL operations that change the database or table structure during schema migration and full data migration. Otherwise, the migration task fails.
Do not write new data to the source instance during full data migration (without incremental migration). To keep data consistent in real time, run schema migration, full data migration, and incremental data migration together.
A single incremental change that produces more than 256 MB of data causes the migration instance to fail unrecoverably. Reconfigure the migration instance if this occurs.
Other limitations
Each migration task migrates one database. Configure a separate task for each additional database.
DTS does not support migrating TimescaleDB extension tables, tables with cross-schema inheritance, or tables with unique indexes based on expressions.
Schemas created by installing plugins cannot be migrated.
If a table contains a SERIAL field and you include Schema Migration, also select Sequence or migrate the entire schema. Otherwise the migration instance may fail.
Full data migration uses concurrent INSERT operations, which causes table fragmentation. The table space in the destination database will be larger than in the source after full migration completes.
DTS validates data content but does not validate metadata such as sequences. Validate sequences separately before switching over — see Update the sequence value of the destination database.
DTS creates a replication slot with the prefix `dts_sync_` in the source database during incremental migration. DTS uses this replication slot to obtain incremental logs generated within the last 15 minutes. The slot is cleaned up automatically when the migration fails or the instance is released, unless the source database password changed or the DTS IP address was removed from the whitelist; in those cases, clean up the slot manually to prevent WAL from accumulating and filling the disk. If a failover occurs on the source database, log on to the secondary database to manually clean up the replication slot.
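You can inspect and, when necessary, remove the slot with standard PostgreSQL catalog functions. A sketch; `dts_sync_example` is a placeholder slot name, and you should drop a slot only after confirming the migration task is released:

```sql
-- List DTS replication slots and how much WAL each one retains.
SELECT slot_name, active,
       pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn) AS retained_bytes
FROM pg_replication_slots
WHERE slot_name LIKE 'dts_sync_%';

-- Drop an orphaned slot (placeholder name shown).
SELECT pg_drop_replication_slot('dts_sync_example');
```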
DTS creates the following temporary tables in the source database to support incremental migration. Do not delete these tables during migration; they are cleaned up automatically when the DTS instance is released: `public.dts_pg_class`, `public.dts_pg_attribute`, `public.dts_pg_type`, `public.dts_pg_enum`, `public.dts_postgres_heartbeat`, `public.dts_ddl_command`, `public.dts_args_session`, and `public.aliyun_dts_instance`.
For tables with foreign keys, triggers, or event triggers: if the destination database account is a privileged account or has superuser permissions, DTS temporarily sets `session_replication_role` to `replica` at the session level. If the account does not have these permissions, set `session_replication_role` to `replica` in the destination database manually, and reset it to `origin` after the DTS migration task is released.
When migrating partitioned tables, include both the parent table and all its child tables as migration objects. In PostgreSQL, partitioned-table data is stored in the child tables; omitting any child table causes data inconsistency.
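One way to apply the setting manually so that the sessions DTS opens inherit it is at the database level. This is a sketch, not the documented DTS procedure: `mydb` is a placeholder database name, and `session_replication_role` can only be changed by an account with sufficient privileges:

```sql
-- Before migration starts, make DTS sessions skip triggers
-- and foreign-key checks on the destination.
ALTER DATABASE mydb SET session_replication_role = replica;

-- After the DTS migration task is released, restore the default.
ALTER DATABASE mydb SET session_replication_role = origin;
```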
DTS attempts to automatically recover failed tasks within seven days. Therefore, before you switch your business to the destination instance, end or release the task, or use the `REVOKE` command to revoke the write permissions of the account that DTS uses to access the destination instance. This prevents the source data from overwriting the data in the destination instance after the task is automatically recovered.
If a task fails, DTS support staff will attempt to restore it within eight hours and may restart the task or adjust its parameters. Only DTS task parameters are modified, not database parameters. See Modify instance parameters for the parameters that may be adjusted.
Incremental migration: REPLICA IDENTITY requirement
If the migration instance includes incremental data migration, run the following command on each table to be migrated in the source database before writing data to those tables:
```sql
ALTER TABLE schema.table REPLICA IDENTITY FULL;
```

Replace `schema` and `table` with the actual schema and table names. Run this during off-peak hours and avoid table lock operations while the command executes to prevent deadlocks.
Run this command in the following situations:
When the instance runs for the first time
When the migration object is a schema and a new table is created, or an existing table is rebuilt using the `RENAME` command
If you skip the related precheck item, DTS runs this command automatically during instance initialization.
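When a schema contains many tables, you can generate the statements from the catalog rather than writing them by hand. A sketch, assuming the tables live in the `public` schema; review the generated statements before executing them:

```sql
-- Generate a REPLICA IDENTITY FULL statement for every ordinary
-- table in the target schema (assumed here to be "public").
SELECT format('ALTER TABLE %I.%I REPLICA IDENTITY FULL;',
              schemaname, tablename)
FROM pg_tables
WHERE schemaname = 'public';
```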
DDL migration
DDL migration is supported only for tasks created after October 1, 2020.
| Supported DDL statements | Notes |
|---|---|
| CREATE TABLE, DROP TABLE | |
| ALTER TABLE (including RENAME TABLE, ADD COLUMN, ADD COLUMN DEFAULT, ALTER COLUMN TYPE, DROP COLUMN, ADD CONSTRAINT, ADD CONSTRAINT CHECK, ALTER COLUMN DROP DEFAULT) | |
| TRUNCATE TABLE | PostgreSQL 11 and later only |
| CREATE INDEX ON TABLE | |
DDL migration requires a privileged account in the source database. The following DDL statements are not supported:
Statements with `CASCADE` or `RESTRICT` clauses
Statements executed in sessions where `SET session_replication_role = replica` is applied
Statements executed by calling functions
Transactions that contain both DML and DDL statements (the DDL statement is not migrated)
DDL statements for objects not selected for migration, within the same transaction
For tasks created before May 12, 2023, create triggers and functions in the source database to capture DDL changes before configuring the migration task. For more information, see Use Triggers and Functions for PostgreSQL DDL Incremental Migration.
Incremental migration does not support the BIT data type.
Billing
| Migration type | Instance fee | Internet traffic fee |
|---|---|---|
| Schema migration and full data migration | Free | Charged when the destination database uses Public IP Address as the access method. See Billing overview. |
| Incremental data migration | Charged. See Billing overview. | Charged when the destination database uses Public IP Address as the access method. See Billing overview. |
Supported migration objects
DTS migrates schemas and tables, including primary keys, unique keys, foreign keys, built-in data types, and default constraints.
Supported objects may vary by destination database type. Check the console for the current list.
Database account permissions
| Database | Schema migration | Full migration | Incremental migration | How to create |
|---|---|---|---|---|
| PolarDB for PostgreSQL | Privileged account | Privileged account | Privileged account | Create a Database Account |
| RDS for PostgreSQL | CREATE and USAGE on migration objects | Owner on the schema | Owner on the schema | Create an Account |
Create a migration task
Step 1: Open the migration task page
Use one of the following methods to reach the migration task list for the destination region.
From the DTS console
Log on to the Data Transmission Service (DTS) console.
In the left navigation pane, click Data Migration.
In the upper-left corner, select the region where the migration instance will be located.
From the DMS console
The actual operations may vary based on the mode and layout of the DMS console. For more information, see Simple mode console and Customize the layout and style of the DMS console.
Log on to the Data Management (DMS) console.
In the top menu bar, choose Data + AI > Data Transmission (DTS) > Data Migration.
To the right of Data Migration Tasks, select the region.
Step 2: Configure source and destination databases
Click Create Task.
Configure the source database:
| Field | Value |
|---|---|
| Task Name | DTS generates a name automatically. Specify a descriptive name for easy identification; it does not need to be unique. |
| Database Type | PolarDB for PostgreSQL |
| Connection Type | Cloud Instance |
| Instance Region | Region where the source PolarDB for PostgreSQL cluster resides |
| Across Alibaba Cloud Accounts | Not Cross-Account (for same-account migrations) |
| Instance ID | ID of the source PolarDB for PostgreSQL cluster |
| Database Name | Name of the source PolarDB for PostgreSQL database |
| Database Account | Account for the source database |
| Database Password | Password for the account |

Configure the destination database:
| Field | Value |
|---|---|
| Database Type | PostgreSQL |
| Connection Type | Cloud Instance |
| Instance Region | Region where the destination RDS for PostgreSQL instance resides |
| Instance ID | ID of the destination RDS for PostgreSQL instance |
| Database Name | Name of the database in the destination instance that will receive the migrated objects |
| Database Account | Account for the destination database |
| Database Password | Password for the account |
| Encryption | Select Non-encrypted or SSL-encrypted. For SSL encryption, upload the CA Certificate and, if using client certificates, the Client Certificate, Private Key of Client Certificate, and Private Key Password of Client Certificate. |

Click Test Connectivity and Proceed at the bottom of the page.
Add the DTS server CIDR blocks to the security settings (IP whitelist) of both the source and destination databases before testing. See Add the IP address whitelist of DTS servers. If either database's Access Method is not Alibaba Cloud Instance, also click Test Connectivity in the CIDR Blocks of DTS Servers dialog box.
Step 3: Select migration objects
On the Configure Objects page:
Select the migration types:
If you skip Schema Migration, the destination database must already have the target schema and tables. If you skip Incremental Data Migration, do not write new data to the source instance during migration.
| Goal | Migration types to select |
|---|---|
| Full migration only (with maintenance window) | Schema Migration + Full Data Migration |
| Near-zero-downtime migration | Schema Migration + Full Data Migration + Incremental Data Migration |

Set Processing Mode of Conflicting Tables:
Precheck and Report Errors: DTS checks for tables with the same name in the destination before starting. If a conflict exists, the task does not start. To resolve naming conflicts without deleting data, use Object name mapping.
Ignore Errors and Proceed: Skips the conflict check. During full migration, conflicting records in the destination are kept. During incremental migration, source records overwrite destination records. Use with caution — inconsistent schemas may cause partial or failed migration.
(Optional) Configure Case Policy for Destination Object Names to control case sensitivity for migrated object names. The default is DTS default policy. See Case sensitivity of object names in the destination database.
In the Source Objects box, click the schemas or tables to migrate, then click the right-arrow icon to move them to the Selected Objects box.
- Selecting tables as migration objects excludes views, triggers, and stored procedures.
- If a table contains a SERIAL field and Schema Migration is selected, also include Sequence or migrate the entire schema.
(Optional) To rename objects in the destination:
Single object: right-click it in Selected Objects. See Map a single table or column name.
Multiple objects: click Batch Edit in the upper-right corner of the Selected Objects box. See Map multiple table or column names in a batch.
Object name mapping may cause dependent objects to fail migration.
Step 4: Configure advanced settings
Click Next: Advanced Settings and configure the following options:
| Setting | Description |
|---|---|
| Dedicated Cluster for Task Scheduling | DTS uses a shared cluster by default. Purchase a dedicated cluster for more stable task scheduling. |
| Retry Time for Failed Connections | Duration DTS retries after a connection failure. Default: 720 minutes. Range: 10–1440 minutes. Set to at least 30 minutes. The task resumes automatically if reconnected within the retry window; otherwise it fails. |
| Retry Time for Other Issues | Duration DTS retries after non-connectivity errors (DDL or DML exceptions). Default: 10 minutes. Range: 1–1440 minutes. Must be less than Retry Time for Failed Connections. |
| Enable Throttling for Full Data Migration | Limit QPS to the source, RPS of the migration, or migration speed (MB/s) to reduce database load during full migration. Adjustable after the instance starts. |
| Enable Throttling for Incremental Data Migration | Limit RPS or migration speed (MB/s) for incremental migration. Adjustable after the instance starts. |
| Configure ETL | Enable extract, transform, and load (ETL) to apply data transformations during migration. See Configure ETL in a data migration or data synchronization task. |
| Monitoring and Alerting | Configure alert thresholds and notifications. DTS sends alerts when a migration fails or latency exceeds the threshold. |
| Environment Tag | Tag the instance for environment identification (optional). |
Step 5: Configure data validation (optional)
Click Next: Data Validation to set up a data validation task. See Configure data validation.
Step 6: Save the task and run a precheck
Click Next: Save Task Settings and Precheck at the bottom of the page.
To view API parameters for this configuration, hover over the Next: Save Task Settings and Precheck button and click Preview OpenAPI parameters.
DTS runs a precheck before the migration starts. If the precheck fails:
For failed items: click View Details, fix the issue, then click Precheck Again.
For warning items that can be ignored: click Confirm Alert Details > Ignore > OK > Precheck Again. Ignored warnings may cause data inconsistency.
Step 7: Purchase and start the instance
When Success Rate reaches 100%, click Next: Purchase Instance.
On the Purchase page, select the instance class:
| Field | Description |
|---|---|
| Resource Group Settings | Resource group for the instance. Default: default resource group. See What is Resource Management? |
| Instance Class | Controls migration speed. See Data migration link specifications. |

Read and accept Data Transmission Service (Pay-as-you-go) Service Terms.
Click Buy and Start, then click OK in the confirmation dialog box.
The migration task appears on the Data Migration Tasks list page. If the task includes only schema and full migration, it stops automatically when complete and the Status changes to Completed. If the task includes incremental migration, it continues running with Status showing Running.
Step 8: Switch over to the destination database
Before switching your applications to the destination instance:
Wait until all pending changes from the source database have been applied to the destination. If incremental migration is running, monitor the task latency until it reaches zero.
Update sequence values in the destination database. Sequences do not automatically start from the maximum value in the source database after a switchover. See Update the sequence value of the destination database.
End or release the DTS migration task, or revoke the write permissions of the DTS account using the `REVOKE` command. This prevents DTS from overwriting destination data if the task is automatically recovered.
If you manually set `session_replication_role` to `replica` in the destination database before migration, reset it to `origin` now.
Reconnect your applications to the destination RDS for PostgreSQL instance.
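The sequence-alignment and permission-revocation steps above can be sketched in SQL. All names here are placeholders, not values from this guide: `public.app` and `public.app_id_seq` stand for one of your tables and its sequence, and `dts_user` for the account DTS uses on the destination:

```sql
-- 1. Align the destination sequence with the current maximum ID
--    so new inserts do not collide with migrated rows.
SELECT setval('public.app_id_seq', (SELECT max(id) FROM public.app));

-- 2. Revoke the DTS account's write permissions on the destination
--    so a recovered task cannot overwrite post-switchover data.
REVOKE INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public FROM dts_user;
```

Repeat the `setval` call for every sequence in the migrated schema, as described in Update the sequence value of the destination database.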