Use Data Transmission Service (DTS) to migrate data between ApsaraDB RDS for PostgreSQL instances with minimal or no downtime. DTS supports schema migration, full data migration, and incremental data migration. Combine all three types to keep your source database operational throughout the migration.
Choose a migration strategy
Select a migration type combination based on your requirements.
| Strategy | Migration types | Suitable for | Downtime required |
|---|---|---|---|
| Full migration | Schema migration + Full data migration | Small databases where you can stop writes during migration | Yes — stop writes to the source before starting |
| Zero-downtime migration | Schema migration + Full data migration + Incremental data migration | Production databases that must remain available | No |
For zero-downtime migration, DTS first migrates the schema and historical data, then continuously replicates incremental changes until you cut over to the destination instance.
What DTS migrates
Supported objects:
Tables: SCHEMA, TABLE (including PRIMARY KEY, UNIQUE KEY, FOREIGN KEY, built-in data types, and DEFAULT CONSTRAINT)
Other objects: VIEW, PROCEDURE (PostgreSQL 11 or later), FUNCTION, RULE, SEQUENCE, EXTENSION, TRIGGER, AGGREGATE, INDEX, OPERATOR, DOMAIN
Custom data types: COMPOSITE, ENUM, RANGE
Constraints: primary keys, foreign keys, UNIQUE constraints, CHECK constraints
SQL operations supported for incremental migration:
| Operation type | SQL operations |
|---|---|
| DML | INSERT, UPDATE, DELETE |
| DDL | CREATE TABLE, DROP TABLE, ALTER TABLE (including RENAME TABLE, ADD COLUMN, ADD COLUMN DEFAULT, ALTER COLUMN TYPE, DROP COLUMN, ADD CONSTRAINT, ADD CONSTRAINT CHECK, ALTER COLUMN DROP DEFAULT), TRUNCATE TABLE (PostgreSQL 11 or later), CREATE INDEX ON TABLE |
DDL migration in incremental tasks requires the source database account to be a privileged account and the ApsaraDB RDS for PostgreSQL minor engine version to be 20210228 or later. For tasks created before May 12, 2023 (Singapore region), you must create a trigger and function in the source database to capture DDL information. For details, see Use triggers and functions to implement incremental DDL migration for PostgreSQL databases.
DDL limitations for incremental migration:
Additional clauses such as `CASCADE` or `RESTRICT` are not migrated.
DDL from sessions in which `SET session_replication_role = replica` has been executed is not supported.
DDL executed by calling a FUNCTION is not supported.
If a single transaction contains both DML and DDL, the DDL is not migrated.
If a transaction contains DDL for objects not being migrated, the DDL is not migrated.
DDL executed through the Server Programming Interface (SPI) is not supported.
Data of the `BIT` type is not supported during incremental data migration.
Objects not migrated:
DTS does not migrate temporary tables, internal triggers, C-language functions, internal functions for PROCEDURE and FUNCTION, TimescaleDB extension tables, or tables with cross-schema inheritance.
Prerequisites
Before you begin, ensure that you have:
Created both the source and destination ApsaraDB RDS for PostgreSQL instances. For details, see Create an ApsaraDB RDS for PostgreSQL instance
Confirmed the destination instance version is the same as or later than the source instance version (migrating from a newer version to an older version causes compatibility issues)
Confirmed available disk space on the destination instance exceeds the storage space used by the source instance
Enabled Logical Replication Slot Failover on the source RDS for PostgreSQL instance. For details, see Logical Replication Slot Failover
(Incremental migration) Configured Write-Ahead Logging (WAL) on the source database:
Set `wal_level` to `logical`.
For an incremental-only migration task, retain WAL logs for more than 24 hours.
For tasks that include both full and incremental migration, retain WAL logs for at least 7 days. After full migration completes, you can reduce the retention period to more than 24 hours.
If DTS cannot obtain WAL logs because the retention period is shorter than required, or if data inconsistency or loss occurs as a result, this is not covered by the DTS Service-Level Agreement (SLA).
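As a quick sanity check, the WAL-related settings can be inspected from a psql session on the source. This is a sketch: on ApsaraDB RDS, `wal_level` and WAL retention are changed through the console's parameter settings rather than `ALTER SYSTEM`, and the retention parameter name depends on the PostgreSQL version.

```sql
-- Confirm logical decoding is enabled (required for incremental migration).
SHOW wal_level;        -- should return: logical

-- Check WAL retention. PostgreSQL 13 and later use wal_keep_size;
-- earlier versions use wal_keep_segments.
SHOW wal_keep_size;
```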
(Incremental migration) Run the following command on each table to be migrated before writing data to the source database. Run it during off-peak hours and do not lock the tables, as locking can cause a deadlock.

`ALTER TABLE schema.table REPLICA IDENTITY FULL;`

Replace `schema` and `table` with the actual schema name and table name. If you skip the related precheck, DTS runs this command automatically during instance initialization in the following scenarios:
When the instance runs for the first time.
When the migration object granularity is set to Schema, and a new table is created in the schema or an existing table is rebuilt using the RENAME command.

(Tables with a SERIAL field) Plan to also migrate the associated Sequence object by selecting Sequence in the source objects, or by migrating the entire schema. Otherwise, the migration instance may fail.
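The REPLICA IDENTITY change can be sketched with hypothetical names (`app.orders`); the catalog query afterwards confirms the change took effect:

```sql
-- Hypothetical schema and table names.
ALTER TABLE app.orders REPLICA IDENTITY FULL;

-- Verify: relreplident is 'f' (full), 'd' (default: primary key),
-- 'i' (index), or 'n' (nothing).
SELECT relname, relreplident
FROM pg_class
WHERE oid = 'app.orders'::regclass;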
Required database account permissions
| Database | Schema migration | Full data migration | Incremental data migration |
|---|---|---|---|
| Source ApsaraDB RDS for PostgreSQL instance | USAGE permission on the pg_catalog schema | SELECT permission on objects to be migrated | A privileged account that owns the selected database. For ApsaraDB RDS for PostgreSQL 9.4 instances migrating DML only: REPLICATION permission |
| Destination ApsaraDB RDS for PostgreSQL instance | CREATE and USAGE permissions on objects to be migrated | Owner permission on the schema | — |
For instructions on creating accounts and granting permissions, see Create an account and Create a database.
Create a migration task
Step 1: Go to the data migration page
DTS console
Log on to the DTS console.
In the left-side navigation pane, click Data Migration.
In the upper-left corner, select the region where the migration instance will reside.
DMS console
The actual steps vary based on your DMS console mode and layout. For details, see Simple mode and Customize the layout and style of the DMS console.
Log on to the DMS console.
In the top navigation bar, move the pointer over Data + AI > DTS (DTS) > Data Migration.
From the drop-down list next to Data Migration Tasks, select the region where the migration instance will reside.
Step 2: Configure source and destination databases
Click Create Task.
Warning: After configuring the source and destination databases, read the Limits displayed at the top of the page before proceeding. Skipping this step can cause task failure or data inconsistency.
Configure the source database:
| Parameter | Description |
|---|---|
| Task Name | A name for the DTS task. DTS generates a name automatically. Use a descriptive name to identify the task easily. The name does not need to be unique. |
| Select Existing Connection | If the source instance is registered with DTS, select it from the drop-down list, and DTS populates the remaining parameters automatically. Otherwise, configure the parameters below. In the DMS console, select the instance from Select a DMS database instance. |
| Database Type | Select PostgreSQL. |
| Connection Type | Select Cloud Instance. |
| Instance Region | Select the region where the source instance resides. |
| Instance ID | Select the ID of the source instance. |
| Database Name | Enter the name of the database containing the objects to migrate. The database name cannot contain a hyphen (-). |
| Database Account | Enter the database account. For required permissions, see Required database account permissions. |
| Database Password | Enter the password for the database account. |
| Encryption | Select Non-encrypted or SSL-encrypted. For SSL, upload the CA Certificate and, if using client certificates, upload the Client Certificate and Private Key of Client Certificate, and specify the Private Key Password of Client Certificate. For SSL configuration instructions, see SSL encryption. |

Configure the destination database using the same fields. For Instance ID, select the destination instance.
Click Test Connectivity and Proceed.
DTS server CIDR blocks must be added to the security settings of both the source and destination databases. For details, see Add DTS server IP addresses to a whitelist.
Step 3: Select objects to migrate
On the Configure Objects page, configure the following settings:
| Setting | Options and notes |
|---|---|
| Migration Types | For full migration: select Schema Migration and Full Data Migration. For zero-downtime migration: select Schema Migration, Full Data Migration, and Incremental Data Migration. If you do not select Incremental Data Migration, do not write new data to the source during migration. |
| Processing Mode of Conflicting Tables | Precheck and Report Errors: the precheck fails if the destination has tables with the same names as the source. Use object name mapping to resolve conflicts without renaming source objects. Ignore Errors and Proceed: skips the conflict check. During full migration, conflicting records in the destination are retained. During incremental migration, conflicting records are overwritten. If schemas differ, only specific columns are migrated or the task fails. |
| Source Objects | Select objects from the Source Objects section and click the arrow icon to add them to Selected Objects. You can select at the schema or table level. Selecting tables does not migrate views, triggers, or stored procedures. If a table has a SERIAL field and you select Schema Migration, also select Sequence or migrate the entire schema. |
| Selected Objects | Right-click an object to rename it for the destination instance. For bulk renaming, click Batch Edit. Right-click a table to set a WHERE clause filter or select specific SQL operations for incremental migration. Renaming an object may cause dependent objects to fail migration. |
Tables to be migrated must have a primary key or UNIQUE constraint with unique fields. If a table does not meet this requirement, duplicate data may appear in the destination database. If the destination table is not created by DTS (that is, Schema Migration is not selected), the destination table must have the same primary key or non-null UNIQUE constraint as the source table.
If you migrate objects at the table level and apply edits such as column name mapping, a single task supports a maximum of 1,000 tables. If you exceed this limit, split the tables across multiple tasks or configure the task to migrate the entire database.
Step 4: Configure advanced settings
Click Next: Advanced Settings and configure the following:
| Setting | Description |
|---|---|
| Dedicated Cluster for Task Scheduling | By default, DTS uses the shared cluster. For higher stability, purchase and specify a dedicated cluster. For details, see What is a DTS dedicated cluster. |
| Retry Time for Failed Connections | How long DTS retries after a connection failure. Valid values: 10–1,440 minutes. Default: 720 minutes. Set to more than 30 minutes. If multiple tasks share the same source or destination database, the value set last takes effect. You are charged for the DTS instance during retries. |
| Retry Time for Other Issues | How long DTS retries after DDL or DML operation failures. Valid values: 1–1,440 minutes. Default: 10 minutes. Set to more than 10 minutes. This value must be less than Retry Time for Failed Connections. |
| Enable Throttling for Full Data Migration | Limits read/write throughput during full migration to reduce database load. Configure Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s). Available only when Full Data Migration is selected. |
| Enable Throttling for Incremental Data Migration | Limits throughput during incremental migration. Configure RPS of Incremental Data Migration and Data migration speed for incremental migration (MB/s). Available only when Incremental Data Migration is selected. |
| Environment Tag | (Optional) Tag the instance to identify its environment. |
| Configure ETL | Enable the extract, transform, and load (ETL) feature to process data during migration. For details, see What is ETL? and Configure ETL in a data migration or data synchronization task. |
| Monitoring and Alerting | Configure alerts for task failures or when migration latency exceeds a threshold. For details, see Configure monitoring and alerting. |
Step 5: Configure data verification (optional)
Click Next Step: Data Verification to set up a data verification task. For details, see Configure a data verification task.
DTS validates data content but does not validate metadata such as Sequences. Validate Sequences manually before cutover.
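One way to compare Sequence state manually is to run the same catalog query on the source and destination and diff the results. This is a sketch; the `pg_sequences` view requires PostgreSQL 10 or later.

```sql
-- Run on both the source and the destination and compare the output.
SELECT schemaname, sequencename, last_value
FROM pg_sequences
ORDER BY schemaname, sequencename;
```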
Step 6: Run the precheck and purchase the instance
Click Next: Save Task Settings and Precheck.
To preview the API parameters for this task, hover over Next: Save Task Settings and Precheck and click Preview OpenAPI parameters.
Wait for the precheck to complete.
If an item fails, click View Details, resolve the issue, then click Precheck Again.
If an alert is triggered for an item you can safely ignore, click Confirm Alert Details, then Ignore > OK > Precheck Again.
Warning: Ignoring precheck alerts may result in data inconsistency.
After Success Rate reaches 100%, click Next: Purchase Instance.
On the Purchase Instance page, configure Instance Class for the migration instance:
| Parameter | Description |
|---|---|
| Resource Group | The resource group for the migration instance. Default: default resource group. |
| Instance Class | Controls migration speed. Select based on your workload. For details, see Instance classes of data migration instances. |

Read and accept the Data Transmission Service (Pay-as-you-go) Service Terms, then click Buy and Start > OK.
Monitor the migration
After starting the task, check its progress on the Data Migration page.
Full migration only: the task stops automatically when complete. Status changes to Completed.
Incremental migration: the task runs continuously and does not stop automatically. Status shows Running.
Limitations and operational notes
Source database requirements
Tables to be migrated must have a primary key or UNIQUE constraint with unique fields. Otherwise, duplicate data may appear in the destination database.
The database name cannot contain a hyphen (-), for example `dts-testdata`.
If you migrate at the table level and apply edits such as column name mapping, a single task supports a maximum of 1,000 tables. If you exceed this limit, split the tables across multiple tasks or configure the task to migrate the entire database.
Do not perform DDL operations on the source database during schema migration or full data migration — this causes the task to fail.
For full migration only: do not write new data to the source database during migration. To maintain real-time consistency, use schema migration, full data migration, and incremental data migration together.
If a single incremental change exceeds 256 MB, the migration instance fails and cannot be recovered. Reconfigure the migration instance.
If the source database has long-running transactions during incremental migration, WAL data before the transaction commits may not be cleared, causing WAL accumulation and potential disk space exhaustion on the source.
If you perform a major version upgrade on the source database while the migration instance is running, the instance fails and cannot be recovered. Reconfigure the migration instance.
Do not change the source instance endpoint or zone during migration.
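To spot long-running transactions that can hold back WAL cleanup during incremental migration, a query along these lines can be run on the source. This is a sketch; the one-hour threshold is an arbitrary example to adjust for your workload.

```sql
-- Find transactions that have been open for more than one hour.
SELECT pid, usename, state,
       now() - xact_start AS xact_age,
       left(query, 60) AS query_preview
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
  AND now() - xact_start > interval '1 hour'
ORDER BY xact_age DESC;
```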
Post-migration behavior
Full data migration uses concurrent INSERT operations, which can cause table fragmentation on the destination. The table storage space on the destination may be larger than on the source after migration.
DTS uses `ROUND(COLUMN, PRECISION)` to read FLOAT and DOUBLE values. If precision is not explicitly defined, DTS uses 38 for FLOAT and 308 for DOUBLE. Confirm this precision meets your requirements before migrating.
DTS attempts to resume failed tasks for seven days. Before switching workloads to the destination instance, end or release the task, or revoke the write permissions of the DTS database account on the destination instance. This prevents the source data from overwriting destination data if the task resumes automatically.
Temporary tables created by DTS
To ensure the accuracy of the displayed latency for incremental data migration, DTS adds a heartbeat table named dts_postgres_heartbeat to the source database. DTS also creates the following tables in the source database to support incremental migration. Do not delete them during migration — they are removed automatically when the DTS instance is released.
public.dts_pg_class, public.dts_pg_attribute, public.dts_pg_type, public.dts_pg_enum, public.dts_postgres_heartbeat, public.dts_ddl_command, public.dts_args_session, public.aliyun_dts_instance
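To see which of these DTS bookkeeping tables currently exist in the source database, a catalog query such as the following can be used (a sketch; it simply filters `pg_tables` by the naming pattern listed above):

```sql
SELECT schemaname, tablename
FROM pg_tables
WHERE schemaname = 'public'
  AND (tablename LIKE 'dts\_%' ESCAPE '\'
       OR tablename = 'aliyun_dts_instance');
```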
session_replication_role parameter
If a task includes full or incremental migration and the tables to be migrated contain foreign keys, triggers, or event triggers, DTS sets session_replication_role to replica at the session level. If the destination database account is a privileged account or has superuser permissions, DTS sets this automatically. Otherwise, you must manually set the parameter to replica in the destination database. During this period, cascade update or delete operations in the source database may cause data inconsistency. After the DTS task is released, change session_replication_role back to origin.
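If the parameter must be set manually, this can be done per session or, more durably, at the role level for the DTS account. This is a sketch; `dts_user` is a hypothetical account name.

```sql
-- Check the current value.
SHOW session_replication_role;

-- Session-level change (applies only to the current connection).
SET session_replication_role = replica;

-- Role-level change for a hypothetical DTS account; takes effect
-- for new sessions opened by that role.
ALTER ROLE dts_user SET session_replication_role = replica;

-- After the DTS task is released, restore the default.
ALTER ROLE dts_user RESET session_replication_role;
```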
Replication slot management
DTS creates a replication slot with the prefix dts_sync_ in the source database to obtain incremental logs from the last 15 minutes. When the migration fails or the instance is released, DTS attempts to clean up the replication slot automatically.
If you change the source database account password or remove DTS IP addresses from the whitelist during migration, the replication slot cannot be cleaned up automatically. Clean it up manually to prevent disk space accumulation that could make the source database unavailable. If a primary/secondary failover occurs in the source database, log on to the secondary database to perform the cleanup.
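Manual cleanup can be sketched as follows. The slot name passed to `pg_drop_replication_slot` is a hypothetical example; substitute the name returned by the first query, and drop a slot only after the DTS task is released.

```sql
-- List DTS replication slots and their WAL positions.
SELECT slot_name, active, restart_lsn
FROM pg_replication_slots
WHERE slot_name LIKE 'dts_sync_%';

-- Drop a leftover slot (hypothetical name).
SELECT pg_drop_replication_slot('dts_sync_example');
```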
Sequences after cutover
After you switch workloads to the destination instance, new Sequences do not automatically increment from the maximum value of the source Sequence. Update the Sequence value in the destination database before cutover. For details, see Update the Sequence value in the destination database.
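A minimal sketch of that update, assuming a hypothetical table `app.orders` with an `id` column fed by `app.orders_id_seq`:

```sql
-- Advance the destination Sequence past the largest migrated id.
SELECT setval('app.orders_id_seq',
              (SELECT COALESCE(MAX(id), 1) FROM app.orders));
```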
Instance failure recovery
If a migration instance fails, DTS attempts to recover it within 8 hours. During recovery, DTS instance parameters (not database parameters) may be adjusted. For details on adjustable parameters, see Modify instance parameters.
What's next
Overview of data migration scenarios — review supported source and destination database versions
Update the Sequence value in the destination database — update Sequences before cutover
Map object names — rename objects at the destination
Configure a data verification task — verify data accuracy after migration