Migrate data between ApsaraDB RDS for PostgreSQL instances using Data Transmission Service (DTS). DTS supports schema migration, full data migration, and incremental data migration. Running all three types together lets your application stay online throughout the migration with minimal downtime.
Prerequisites
Before you begin, make sure that:
- Source and destination ApsaraDB RDS for PostgreSQL instances are created. See Create an instance.
- The destination database version is the same as or later than the source database version. A lower destination version may cause compatibility issues.
- The destination instance has more available storage space than the total data size in the source instance.
For supported source and destination database versions, see Overview of data migration scenarios.
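To check the storage prerequisite, you can measure the source data size directly with psql and compare it against the destination's free storage. A minimal read-only sketch; the database name mydb is a placeholder:

```sql
-- Total size of the source database, including indexes and TOAST data
SELECT pg_size_pretty(pg_database_size('mydb'));

-- Largest tables in the current database, for a per-object breakdown
SELECT relname, pg_size_pretty(pg_total_relation_size(oid)) AS total_size
FROM pg_class
WHERE relkind = 'r'
ORDER BY pg_total_relation_size(oid) DESC
LIMIT 10;
```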
Migration types
DTS supports three migration types that you can combine based on your requirements.
| Migration type | What it does |
|---|---|
| Schema migration | Copies the schemas of selected objects (tables, views, indexes, and so on) from source to destination |
| Full data migration | Copies all existing data from source to destination |
| Incremental data migration | Continuously replicates data changes from source to destination after full data migration completes, keeping the destination in sync while your application runs |
Choose a migration strategy:
- Full migration only (with downtime): Select Schema Migration and Full Data Migration. Stop writes to the source before migration starts to ensure data consistency.
- Online migration (minimal downtime): Select Schema Migration, Full Data Migration, and Incremental Data Migration. Your application stays online during migration. Stop writes to the source only at cutover.
Supported objects
The following object types can be migrated:
- SCHEMA and TABLE — including PRIMARY KEY, UNIQUE KEY, FOREIGN KEY, built-in data types, and DEFAULT constraints
- VIEW
- PROCEDURE — PostgreSQL V11 or later only
- FUNCTION, RULE, SEQUENCE, EXTENSION, TRIGGER, AGGREGATE, INDEX, OPERATOR, DOMAIN
SQL operations supported for incremental migration
| Operation type | Supported statements |
|---|---|
| DML | INSERT, UPDATE, DELETE |
| DDL | See details below |
DDL migration details:
DDL migration is only available for tasks created after October 1, 2020.
To migrate DDL operations in tasks created before May 12, 2023, first create a trigger and a function in the source database. See Use triggers and functions to implement incremental DDL migration for PostgreSQL databases.
If the source database uses a privileged account and the minor engine version is 20210228 or later, the following DDL statements are supported:
- CREATE TABLE, DROP TABLE
- ALTER TABLE (RENAME TABLE, ADD COLUMN, ADD COLUMN DEFAULT, ALTER COLUMN TYPE, DROP COLUMN, ADD CONSTRAINT, ADD CONSTRAINT CHECK, ALTER COLUMN DROP DEFAULT)
- TRUNCATE TABLE — source must be PostgreSQL V11 or later
- CREATE INDEX ON TABLE
DDL migration limitations:
- BIT type data cannot be migrated during incremental migration.
- CASCADE and RESTRICT modifiers in DDL statements cannot be migrated.
- DDL statements executed in a session where SET session_replication_role = replica was run cannot be migrated.
- DDL statements invoked through functions cannot be migrated.
- If a batch submission contains both DML and DDL statements, the DDL statements are skipped.
- If a batch submission contains DDL statements that are not supported for migration, those statements are skipped.
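As an illustration of the session-level limitation above, DDL run after switching the replication role is invisible to incremental migration. The table name is a placeholder:

```sql
SET session_replication_role = replica;  -- DDL in this session is NOT captured by DTS
CREATE TABLE demo_skipped (id int);      -- this CREATE TABLE is not migrated
SET session_replication_role = origin;   -- restore the default; later DDL is captured again
```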
Required permissions
Grant the following permissions to the database accounts used by DTS. Follow the principle of least privilege — avoid using a superuser account unless the required permissions cannot be granted separately.
| Database | Schema migration | Full data migration | Incremental data migration |
|---|---|---|---|
| Source ApsaraDB RDS for PostgreSQL | USAGE on the pg_catalog schema | SELECT on objects to be migrated | Privileged account that is the owner of the database. For PostgreSQL 9.4 with DML-only migration, only the REPLICATION permission is required. |
| Destination ApsaraDB RDS for PostgreSQL | CREATE and USAGE on objects to be migrated | Schema owner permissions | — |
To create accounts and grant permissions, see Create an account and Create a database.
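The permissions in the table can be granted with statements along the following lines. This is a sketch, assuming the migrated objects live in the public schema and the DTS account is named dts_user (both placeholders):

```sql
-- Source account: schema migration and full data migration
GRANT USAGE ON SCHEMA pg_catalog TO dts_user;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO dts_user;

-- Destination account: schema migration
GRANT CREATE, USAGE ON SCHEMA public TO dts_user;
```

For incremental migration, make the account the owner of the source database (or grant REPLICATION on PostgreSQL 9.4 with DML-only migration), as the table above describes.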
Limitations
Review these limitations before configuring a migration task. Ignoring them can cause task failures or data inconsistency.
Source database limitations
| Limitation | Details | Impact if violated |
|---|---|---|
| Primary key or unique constraint required | Tables must have PRIMARY KEY or UNIQUE constraints with all unique fields. If a table in the destination was created without DTS schema migration, make sure its PRIMARY KEY or NOT NULL UNIQUE constraint matches the source table. Tables to be migrated must also have PRIMARY KEY, FOREIGN KEY, UNIQUE, or CHECK constraints. | Destination may contain duplicate records |
| No hyphens in database name | The source database name cannot contain hyphens (-). For example, dts-testdata is invalid. Rename the database before migration. | Migration task fails to start |
| 1,000-table limit per task when editing table names | If you select tables as migration objects and need to rename tables or columns in the destination, a single task can migrate up to 1,000 tables. For more than 1,000 tables, create multiple tasks or migrate the entire database instead. | Request error occurs |
| No temporary tables or internal C-language procedures | DTS cannot migrate temporary tables, internal triggers, or internal procedures and functions written in C. | These objects are silently skipped |
| Custom type support | DTS can migrate custom types of the COMPOSITE, ENUM, and RANGE categories. | Other custom types are not migrated |
| wal_level must be logical for incremental migration | Set wal_level = logical in the source instance before configuring incremental migration. | Incremental migration cannot start |
| WAL log retention | For incremental-only migration: retain write-ahead logging (WAL) logs for more than 24 hours. For full + incremental migration: retain WAL logs for at least 7 days. After full migration completes, you can reduce the retention to more than 24 hours. | Insufficient retention causes task failure, data inconsistency, or data loss |
| No DDL during schema migration or full migration | Do not run DDL operations that change database or table schemas while schema migration or full data migration is running. | Migration task fails |
| No writes to source during full-only migration | If you run only full data migration (without incremental), do not write to the source database during migration. To avoid this restriction, select full + incremental migration. | Concurrent writes cause data inconsistency |
| 256 MB single-record limit for incremental migration | If a single incremental change exceeds 256 MB, the task fails and cannot be recovered. You must reconfigure the task. | Task fails with no recovery option |
| Long-running transactions accumulate WAL logs | Long-running transactions in the source database prevent WAL log cleanup. Commit or terminate long-running transactions before migration. | Source disk space may be exhausted during incremental migration |
| Major version upgrades break running tasks | A major version upgrade on the source database while a migration task is running causes the task to fail with no recovery. Reconfigure the task after the upgrade. | Task fails with no recovery option |
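Two of the limitations above can be verified directly from psql before you configure the task. On ApsaraDB RDS, wal_level is changed through the console's parameter settings rather than with ALTER SYSTEM; the queries below are read-only checks:

```sql
-- Must return 'logical' before incremental migration can start
SHOW wal_level;

-- Long-running transactions that prevent WAL cleanup; commit or
-- terminate these before migration
SELECT pid, usename, now() - xact_start AS xact_age, state
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
ORDER BY xact_age DESC;
```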
Other limitations
| Limitation | Details |
|---|---|
| Primary/secondary switchover requires Logical Replication Slot Failover | Before performing a primary/secondary switchover on the source instance, enable the Logical Replication Slot Failover feature. This prevents logical subscriptions from being interrupted. See Logical Replication Slot Failover. |
| One database per task | A single task migrates data from one database only. Create separate tasks for each database. |
| No schema-inherited tables | Tables that are inherited across schemas cannot be migrated. |
| SERIAL field requires sequence migration | If a table has a SERIAL field and you select Schema Migration, also select Sequence or migrate the entire schema. Otherwise, the task fails. |
| New tables created during incremental migration | If you add a new table to a schema or rename a table using RENAME during incremental migration, run ALTER TABLE schema.table REPLICA IDENTITY FULL; before writing data to that table. Do not lock the table when running this statement to avoid deadlocks. Run this during off-peak hours. |
| Metadata validity not checked | DTS does not validate metadata such as sequences. Verify metadata manually after migration. |
| Sequences in destination do not start from source maximum | After switching workloads to the destination, new sequences start from their initial values, not from the maximum value in the source. Update the starting values of sequences in the destination before cutover. |
| Temporary tables created by DTS | DTS creates the following temporary tables in the source database to support incremental migration. Do not delete them during migration — they are automatically removed after the DTS instance is released: public.dts_pg_class, public.dts_pg_attribute, public.dts_pg_type, public.dts_pg_enum, public.dts_postgres_heartbeat, public.dts_ddl_command, public.dts_args_session, public.aliyun_dts_instance |
| session_replication_role set to replica during migration | If the destination account is a privileged account or has the superuser role, DTS sets session_replication_role to replica at the session level during full and incremental migration. If the account does not have these permissions, set session_replication_role = replica manually. Cascade UPDATE or DELETE operations in the source during this time may cause data inconsistency. After the task is released, set the value back to origin. |
| Heartbeat table created in source | DTS creates a heartbeat table named dts_postgres_heartbeat in the source database to track incremental migration latency. |
| Replication slot created in source | DTS creates a replication slot prefixed with dts_sync_ in the source. This slot retains the last 15 minutes of incremental logs. The slot is automatically deleted when the DTS instance is released or when the task fails. If you change the source database password or remove DTS IP addresses from the IP whitelist, delete the replication slot manually to prevent slot accumulation. After a primary/secondary switchover, log in to the secondary instance to delete the slot. |
| Full migration causes table fragmentation | Concurrent INSERT operations during full migration cause table fragmentation in the destination. The destination tablespace will be larger than the source after full migration completes. |
| FLOAT and DOUBLE precision | DTS uses ROUND(COLUMN, PRECISION) to retrieve FLOAT and DOUBLE values. If no precision is specified, FLOAT defaults to 38 digits and DOUBLE defaults to 308 digits. Verify these precision settings meet your requirements. |
| Failed tasks may resume and overwrite destination data | DTS retries a failed task for up to 7 days. Before switching workloads to the destination, stop or release failed tasks, or run REVOKE to remove write permissions from the DTS account on the destination database. |
| DTS support may modify task parameters | If a task fails, DTS technical support attempts to restore it within 8 hours. The task may be restarted and task parameters (not database parameters) may be modified. |
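If you need to clean up the dts_sync_ replication slot manually (for example, after changing the source database password), the standard catalog functions are enough. The slot name below is a placeholder; drop a slot only when you are sure its migration task no longer needs it:

```sql
-- List DTS replication slots and whether they are in use
SELECT slot_name, active, restart_lsn
FROM pg_replication_slots
WHERE slot_name LIKE 'dts_sync_%';

-- Drop an inactive slot (placeholder name) so it stops retaining WAL
SELECT pg_drop_replication_slot('dts_sync_example');
```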
Special cases
Do not modify the endpoint or zone of the source ApsaraDB RDS for PostgreSQL instance while a migration task is running. This causes the task to fail.
Configure a migration task
Step 1: Go to the Data Migration page
Use either the DTS console or the DMS console.
DTS console:
- Log in to the DTS console.
- In the left-side navigation pane, click Data Migration.
- In the upper-left corner, select the region where the migration instance will reside.
DMS console:
The actual steps may vary based on your DMS console mode and layout. See Simple mode for layout reference, or see how to customize the DMS console layout.
- Log in to the DMS console.
- In the top navigation bar, move the pointer over Data + AI > DTS (DTS) > Data Migration.
- From the drop-down list to the right of Data Migration Tasks, select the region where the migration instance will reside.
Step 2: Create a task
Click Create Task to open the task configuration page.
Step 3: Configure source and destination databases
After configuring the source and destination databases, read the Limits displayed at the top of the page. Skipping this step can cause task failures or data inconsistency.
Configure the following parameters for both the source and destination databases.
| Section | Parameter | Description |
|---|---|---|
| N/A | Task Name | A name for the DTS task. DTS generates a name automatically. Specify a descriptive name to identify the task easily. The name does not need to be unique. |
| Source Database | Select Existing Connection | If the instance is already registered with DTS, select it from the drop-down list. DTS auto-fills the remaining parameters. Otherwise, configure the parameters below. In the DMS console, select the instance from the Select a DMS database instance drop-down list. |
| | Database Type | Select PostgreSQL. |
| | Access Method | Select Alibaba Cloud Instance. |
| | Instance Region | The region of the source instance. |
| | Instance ID | The ID of the source instance. |
| | Database Name | The name of the database containing the objects to migrate. |
| | Database Account | The account for the source instance. See Required permissions. |
| | Database Password | The password for the database account. |
| | Encryption | Select Non-encrypted (default) or SSL-encrypted. For SSL encryption, upload the CA Certificate, and optionally the Client Certificate, Private Key of Client Certificate, and Private Key Password of Client Certificate. See SSL encryption for setup instructions. |
| Destination Database | Select Existing Connection | Same as source. |
| | Database Type | Select PostgreSQL. |
| | Access Method | Select Alibaba Cloud Instance. |
| | Instance Region | The region of the destination instance. |
| | Instance ID | The ID of the destination instance. |
| | Database Name | The name of the destination database. |
| | Database Account | The account for the destination instance. See Required permissions. |
| | Database Password | The password for the database account. |
| | Encryption | Same options as the source. |
Step 4: Test connectivity
Click Test Connectivity and Proceed at the bottom of the page.
DTS server CIDR blocks must be added to the security settings of both the source and destination databases. For instructions, see Add the CIDR blocks of DTS servers. If the source or destination uses a non-Alibaba Cloud Instance access method, click Test Connectivity in the CIDR Blocks of DTS Servers dialog.
Step 5: Configure objects to migrate
On the Configure Objects page, configure the migration settings.
Migration types:
| Goal | Select |
|---|---|
| Full migration (with downtime) | Schema Migration and Full Data Migration |
| Online migration (minimal downtime) | Schema Migration, Full Data Migration, and Incremental Data Migration |
If you do not select Incremental Data Migration, do not write data to the source during migration to ensure data consistency. Schema Migration copies foreign keys to the destination.
Processing mode of conflicting tables:
| Mode | Behavior |
|---|---|
| Precheck and Report Errors | Checks for tables with identical names in source and destination before migration. If conflicts exist, the precheck fails and the task does not start. To resolve conflicts without deleting destination tables, use object name mapping to rename migrated tables. See Map object names. |
| Ignore Errors and Proceed | Skips the conflict check. During full migration, conflicting records are not migrated (existing destination records are kept). During incremental migration, conflicting records overwrite existing destination records. Use this mode with caution — it can expose your data to inconsistency risks. |
Source Objects:
Select one or more schemas or tables from Source Objects and click the rightwards arrow icon to add them to Selected Objects.
If you select tables (rather than schemas), DTS does not migrate other objects such as views, triggers, and stored procedures. If a table has a SERIAL data type and Schema Migration is selected, also select Sequence or migrate the full schema.
Selected Objects:
- To rename a single object in the destination, right-click it in Selected Objects. See Map the name of a single object.
- To rename multiple objects at once, click Batch Edit. See Map multiple object names at a time.
- To filter rows by condition, right-click a table and specify a WHERE condition. See Specify filter conditions.
- To select specific SQL operations to migrate for a table, right-click the table and choose the operations. See SQL operations supported for incremental migration.
Renaming an object with object name mapping may cause dependent objects to fail migration.
Click Next: Advanced Settings.
Advanced settings:
| Parameter | Description |
|---|---|
| Dedicated Cluster for Task Scheduling | By default, DTS uses the shared cluster. For higher stability, purchase a dedicated cluster. See What is a DTS dedicated cluster. |
| Retry Time for Failed Connections | How long DTS retries after a connection failure. Range: 10–1,440 minutes. Default: 720. Set this to at least 30. If DTS reconnects within the retry window, the task resumes; otherwise, it fails. Note: If multiple tasks share the same source or destination, the most recently set retry time applies to all. DTS charges continue during retries. |
| Retry Time for Other Issues | How long DTS retries after DDL or DML operation failures. Range: 1–1,440 minutes. Default: 10. Set this to a value greater than 10. This value must be less than Retry Time for Failed Connections. |
| Enable Throttling for Full Data Migration | Limits migration throughput to reduce load on the source and destination. Configure QPS to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s). Available only when Full Data Migration is selected. |
| Enable Throttling for Incremental Data Migration | Limits incremental migration throughput. Configure RPS of Incremental Data Migration and Data migration speed for incremental migration (MB/s). Available only when Incremental Data Migration is selected. |
| Environment Tag | An optional tag to identify the DTS instance by environment. |
| Configure ETL | Enable extract, transform, and load (ETL) to process data during migration. Select Yes to enter data processing statements. See Configure ETL in a data migration or data synchronization task. |
| Monitoring and Alerting | Configure alerts for task failures or high migration latency. Select Yes and configure the alert threshold and notification settings. See Configure monitoring and alerting when you create a DTS task. |
Click Next Step: Data Verification to configure data verification. See Configure a data verification task.
Step 6: Run the precheck
Click Next: Save Task Settings and Precheck.
To preview the API parameters for this task configuration, hover over Next: Save Task Settings and Precheck and click Preview OpenAPI parameters before proceeding.
DTS runs a precheck before starting the task. The task cannot start until the precheck passes.
- If any precheck item fails, click View Details to see the cause. Fix the issue and click Precheck Again.
- If an item triggers an alert:
  - If the alert cannot be ignored, fix the issue and rerun the precheck.
  - If the alert can be ignored, click Confirm Alert Details, then click Ignore > OK > Precheck Again. Ignoring an alert may cause data inconsistency.
Step 7: Purchase an instance and start the task
- Wait until Success Rate reaches 100%, then click Next: Purchase Instance.
- On the Purchase Instance page, configure the instance:

| Section | Parameter | Description |
|---|---|---|
| New Instance Class | Resource Group | The resource group for the migration instance. Default: default resource group. See What is Resource Management?. |
| | Instance Class | Controls migration speed. Select based on your data volume and time requirements. See Instance classes of data migration instances. |

- Read and accept the Data Transmission Service (Pay-as-you-go) Service Terms by selecting the check box.
- Click Buy and Start, then click OK in the confirmation dialog.
The task appears on the Data Migration page.
- If the task does not include incremental migration, it stops automatically when complete. The status shows Completed.
- If the task includes incremental migration, it runs continuously. The status shows Running.
Cut over to the destination database
For online migrations using incremental data migration, follow these steps to cut over with minimal data loss.
- Monitor migration latency. On the Data Migration page, check the latency of the incremental migration task. Wait until the latency drops to 0 or near 0, which means the destination is almost fully caught up with the source.
- Stop writes to the source. Once latency is near 0, stop your application from writing to the source database.
- Confirm latency reaches 0. After stopping writes, confirm the incremental migration latency reaches 0. This ensures all changes have been replicated to the destination.
- Update sequences in the destination. New sequences in the destination do not increment from the maximum value of sequences in the source. Before switching traffic, update the starting values of all sequences in the destination to avoid ID conflicts.
- Verify data. If you configured data verification, check the verification results to confirm data consistency between source and destination.
- Switch your application. Update your application's connection string to point to the destination instance.
- Release the DTS task. After confirming the application is running correctly on the destination, stop and release the DTS migration task. DTS automatically removes temporary tables and the replication slot from the source after the task is released.
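The sequence update step can be done per sequence with setval, or by generating the statements from the pg_sequences view (PostgreSQL 10 and later). A sketch; the table, column, and sequence names are placeholders:

```sql
-- Align one destination sequence with the current maximum of its table
SELECT setval('public.orders_id_seq',
              (SELECT COALESCE(MAX(id), 1) FROM public.orders));

-- Generate setval statements for every sequence: run this query on the
-- source, then execute its output on the destination
SELECT format('SELECT setval(%L, %s);',
              schemaname || '.' || sequencename,
              COALESCE(last_value, 1))
FROM pg_sequences;
```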
DTS retries a failed task for up to 7 days. Before switching workloads to the destination, stop or release any failed tasks, or run REVOKE to remove write permissions from the DTS account on the destination database. Otherwise, a resumed failed task may overwrite data in the destination.
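A minimal sketch of that revoke, assuming the DTS account is named dts_user and the migrated objects are in the public schema (both placeholders); run it on the destination database:

```sql
REVOKE INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public FROM dts_user;
```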
What's next
- Logical Replication Slot Failover — enable this before performing a primary/secondary switchover on the source instance
- Map object names — rename objects in the destination during migration
- Configure a data verification task — verify data consistency after migration
- Instance classes of data migration instances — choose the right instance class for your migration speed requirements