
Data Transmission Service:Synchronize data from an ApsaraDB RDS for PostgreSQL instance to an AnalyticDB for PostgreSQL instance

Last Updated:Mar 28, 2026

Use Data Transmission Service (DTS) to synchronize data from an ApsaraDB RDS for PostgreSQL instance to an AnalyticDB for PostgreSQL instance. DTS supports schema synchronization, full data synchronization, and incremental data synchronization, so your analytical warehouse stays continuously updated with minimal manual effort.

Prerequisites

Before you begin, make sure you have:

  • A destination AnalyticDB for PostgreSQL instance with available storage space greater than the total size of data in the source ApsaraDB RDS for PostgreSQL instance. For more information, see Create an instance.

  • A database created in the destination instance to receive the synchronized data. For more information, see the CREATE DATABASE section in SQL syntax.

Billing

| Synchronization type | Fee |
| --- | --- |
| Schema synchronization and full data synchronization | Free |
| Incremental data synchronization | Charged. See Billing overview. |

Limitations

Source database requirements

Table structure

Tables to be synchronized must have PRIMARY KEY or UNIQUE constraints, and the constrained columns must contain unique values. Otherwise, the destination database may contain duplicate records.

If you create a synchronization task without selecting Schema Synchronization, make sure the destination table has the same PRIMARY KEY or NOT NULL UNIQUE constraints as the source table.

If you select individual tables as the synchronization scope and plan to rename tables or columns in the destination, a single task can handle at most 5,000 tables. For more than 5,000 tables, configure multiple tasks, or synchronize the entire database instead.
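Before you configure the task, you can check which tables would violate the constraint requirement. The following query is a sketch that lists ordinary tables in the public schema lacking both a PRIMARY KEY and a UNIQUE constraint; adjust the schema filter for your environment:

```sql
-- Find tables in the public schema without a PRIMARY KEY ('p')
-- or UNIQUE ('u') constraint.
SELECT c.relname AS table_name
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'          -- ordinary tables only
  AND n.nspname = 'public'
  AND NOT EXISTS (
    SELECT 1
    FROM pg_constraint con
    WHERE con.conrelid = c.oid
      AND con.contype IN ('p', 'u')
  );
```

Any table returned by this query should get a primary key or unique constraint before synchronization, or be excluded from the task.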

WAL log requirements

| Parameter or requirement | Value | Notes |
| --- | --- | --- |
| wal_level | Must be set to logical | Required for logical replication |
| WAL log retention (incremental sync only) | More than 24 hours | Insufficient retention may cause task failure or data loss |
| WAL log retention (full + incremental sync) | At least 7 days | After full sync completes, you can reduce the retention period to more than 24 hours |
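You can confirm the current WAL level on the source database before starting:

```sql
-- Must return 'logical' for DTS incremental synchronization.
SHOW wal_level;
```

For an ApsaraDB RDS for PostgreSQL instance, wal_level is changed in the console parameter settings rather than with ALTER SYSTEM; note that changing wal_level requires an instance restart.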

Operations that cause task failure

| Operation | Impact | Mitigation |
| --- | --- | --- |
| DDL statements during schema or full data synchronization | Task fails | Perform DDL changes before or after the sync window |
| Major version upgrade on the source database | Unrecoverable failure | Reconfigure the task from scratch after upgrading |
| Single incremental data change exceeds 256 MB | Unrecoverable failure | Reconfigure the task |
| Long-running transactions in the source database | WAL logs accumulate; the source disk may run out of space | Monitor and close long-running transactions before sync |
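To monitor for long-running transactions on the source, a query against pg_stat_activity such as the following can help; the 10-minute threshold is an arbitrary example:

```sql
-- List sessions whose open transaction is older than 10 minutes,
-- oldest first.
SELECT pid, usename, state, xact_start,
       now() - xact_start AS duration,
       query
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
  AND now() - xact_start > interval '10 minutes'
ORDER BY xact_start;
```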

Other limits

  • The destination table cannot be an append-optimized (AO) table.

  • For this synchronization path, you can select only tables as the objects to be synchronized.

  • DTS does not synchronize the following data types: BIT, VARBIT, GEOMETRY, UUID, TSQUERY, TSVECTOR, and TXID_SNAPSHOT.

  • A single task synchronizes data from only one database. To synchronize multiple databases, create a separate task for each.

  • DTS cannot synchronize tables with inheritance relationships across schemas.

  • DDL operations can only be synchronized by tasks created after October 1, 2020.

  • Column mapping or schema mismatches between source and destination tables cause data in source-only columns to be lost.
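To find columns that use the unsupported data types listed above, you can query information_schema.columns; this sketch matches on udt_name (the GEOMETRY entry assumes PostGIS):

```sql
-- Columns whose types DTS does not synchronize for this path.
SELECT table_schema, table_name, column_name, udt_name
FROM information_schema.columns
WHERE udt_name IN ('bit', 'varbit', 'geometry', 'uuid',
                   'tsquery', 'tsvector', 'txid_snapshot')
  AND table_schema NOT IN ('pg_catalog', 'information_schema');
```

Exclude the reported columns from the synchronization objects or convert them to supported types first.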

Schema-level synchronization

If you synchronize an entire schema, run the following statement on any table you create or rename in that schema before writing data to it:

ALTER TABLE schema.table REPLICA IDENTITY FULL;

Replace schema and table with the actual schema and table names. Run this statement during off-peak hours, and do not lock the table while it runs; locking can cause a deadlock.
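You can verify the setting afterwards by checking relreplident in pg_class; the table name below is a placeholder:

```sql
-- 'f' means REPLICA IDENTITY FULL; 'd' is the default
-- (primary key), 'n' is nothing, 'i' is a specific index.
SELECT relname, relreplident
FROM pg_class
WHERE relname = 'table';  -- replace with the actual table name
```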

Temporary objects created by DTS

DTS creates the following temporary tables in the source database to capture DDL statements, schema changes, and heartbeat information. Do not delete them while the task is running; otherwise, the task fails. DTS removes them automatically after the instance is released.

  • public.dts_pg_class

  • public.dts_pg_attribute

  • public.dts_pg_type

  • public.dts_pg_enum

  • public.dts_postgres_heartbeat

  • public.dts_ddl_command

  • public.dts_args_session

  • public.aliyun_dts_instance


Replication slot

DTS creates a replication slot with the prefix dts_sync_ in the source database. This slot provides access to incremental logs from the past 15 minutes.

The replication slot is automatically deleted when the DTS instance is released. If the slot is not deleted automatically (for example, because the source database password was changed, or DTS IP addresses were removed from the whitelist), delete it manually in the source database to prevent slot accumulation.
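A manual cleanup can be sketched as follows: first list any leftover DTS slots, then drop the ones that are no longer active. Dropping an active slot that a running task still uses would break that task, so the second statement filters on active = false:

```sql
-- List leftover DTS replication slots.
SELECT slot_name, active
FROM pg_replication_slots
WHERE slot_name LIKE 'dts_sync_%';

-- Drop only the inactive ones.
SELECT pg_drop_replication_slot(slot_name)
FROM pg_replication_slots
WHERE slot_name LIKE 'dts_sync_%'
  AND active = false;
```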


If the task is released or fails, DTS deletes the replication slot automatically. However, after a primary/secondary switchover on the source PostgreSQL database, you must log on to the secondary database and delete the replication slot manually.

Writing from multiple sources

If data from other sources is written to the destination database during synchronization, data inconsistency may occur. For example, running online DDL statements via Data Management (DMS) while other sources write to the destination may cause data loss. To avoid this, use DTS as the only writer to the destination.

DTS task failure and recovery

If a task fails, DTS technical support attempts to restore it within 8 hours. During restoration, the task may be restarted and task parameters may be adjusted. Database parameters are not modified.

Source-specific limits

ApsaraDB RDS for PostgreSQL

  • Do not modify the endpoint or zone of the source instance while the task is running.

  • For primary/secondary switchovers, enable the Logical Replication Slot Failover feature to prevent logical subscription interruptions. See Logical Replication Slot Failover.

Self-managed PostgreSQL

  • Set max_wal_senders and max_replication_slots to values greater than the sum of existing replication slots and the number of DTS instances you plan to create.

  • Primary/secondary switchover on a self-managed PostgreSQL database causes the synchronization task to fail.
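To size max_wal_senders and max_replication_slots correctly, compare the configured limits with the slots already in use:

```sql
-- Configured limits.
SHOW max_wal_senders;
SHOW max_replication_slots;

-- Slots currently consumed; the limits must exceed this count
-- plus the number of DTS instances you plan to create.
SELECT count(*) AS existing_slots FROM pg_replication_slots;
```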

Google Cloud SQL for PostgreSQL

  • Set Database Account to an account with the cloudsqlsuperuser permission.

  • Select only objects that the specified account owns or is authorized to manage. Otherwise, grant the OWNER permission on those objects to the account.

An account with cloudsqlsuperuser cannot manage data owned by another cloudsqlsuperuser account.

SQL operations supported for incremental synchronization

| Operation type | Supported statements |
| --- | --- |
| DML | INSERT, UPDATE, DELETE |
| DDL | See below |

DDL synchronization applies to tasks created after October 1, 2020. For tasks created before May 12, 2023, set up triggers and functions in the source database first. See Use triggers and functions to implement incremental DDL migration for PostgreSQL databases.

When the source database account is a privileged account and the minor engine version of the ApsaraDB RDS for PostgreSQL instance is 20210228 or later, DTS can synchronize the following DDL statements. To update the minor engine version, see Update the minor engine version.

  • CREATE TABLE, DROP TABLE

  • ALTER TABLE: RENAME TABLE, ADD COLUMN, ADD COLUMN DEFAULT, ALTER COLUMN TYPE, DROP COLUMN, ADD CONSTRAINT, ADD CONSTRAINT CHECK, ALTER COLUMN DROP DEFAULT

  • TRUNCATE TABLE (source self-managed PostgreSQL version 11 or later)

  • CREATE INDEX ON TABLE

DDL synchronization limitations:

  • CASCADE and RESTRICT clauses are not synchronized.

  • DDL statements from sessions that run SET session_replication_role = replica are not synchronized.

  • DDL statements executed inside functions are not synchronized.

  • If a single commit contains both DML and DDL statements, the DDL statements are not synchronized.

  • If a single commit contains DDL statements for objects not in the synchronization scope, those DDL statements are not synchronized.
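As an illustration of the session_replication_role limitation above, DDL run in a session like the following is not captured by DTS; the table and column names are hypothetical:

```sql
SET session_replication_role = replica;

-- This DDL statement is NOT synchronized to the destination.
ALTER TABLE public.orders ADD COLUMN note text;

RESET session_replication_role;
```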

Important

BIT type data cannot be synchronized during incremental data synchronization.

Required account permissions

| Instance | Required permissions | Reference |
| --- | --- | --- |
| Source ApsaraDB RDS for PostgreSQL | Privileged account that owns the database | Create an account and Create a database |
| Destination AnalyticDB for PostgreSQL | Read and write permissions on the destination database (or the initial account, or an account with the RDS_SUPERUSER permission) | Create a database account and Manage users and permissions |

Create a synchronization task

Important

Full data synchronization increases the load on both the source and destination database servers. Run the task during off-peak hours. After initial full data synchronization completes, the destination tablespace may be larger than the source due to fragmentation from concurrent INSERT operations.

Step 1: Go to the Data Synchronization Tasks page

  1. Log on to the Data Management (DMS) console.

  2. In the top navigation bar, click Data + AI.

  3. In the left-side navigation pane, choose Data Transmission Service (DTS) > Data Synchronization.

Navigation may vary based on the DMS console mode. See Simple mode and Customize the layout and style of the DMS console for details. Alternatively, go directly to the Data Synchronization Tasks page in the DTS console.

Step 2: Select the region

On the right side of Data Synchronization Tasks, select the region where the synchronization instance resides.

In the new DTS console, select the region in the top navigation bar.

Step 3: Create a task

Click Create Task to open the task configuration page.

If the New Configuration Page button appears in the upper-right corner, click it to switch to the new configuration UI. Skip this step if Back to Previous Version is shown instead.

Parameters may differ slightly between the new and previous configuration pages. Use the new version.

Step 4: Configure source and destination databases

| Section | Parameter | Description |
| --- | --- | --- |
| N/A | Task Name | Enter a descriptive name. DTS generates a default name, but a meaningful name makes the task easier to identify. Unique names are not required. |
| Source Database | Select a DMS database instance | Select an existing registered database, or configure the connection manually. To register a database, see Register an Alibaba Cloud database instance or Register a database hosted on a third-party cloud service or a self-managed database. |
| | Database Type | Select PostgreSQL. |
| | Access Method | Select Alibaba Cloud Instance. |
| | Instance Region | Select the region of the source ApsaraDB RDS for PostgreSQL instance. |
| | Replicate Data Across Alibaba Cloud Accounts | Select No if the source is in the same Alibaba Cloud account. |
| | Instance ID | Select the source ApsaraDB RDS for PostgreSQL instance. |
| | Database Name | Enter the name of the source database to synchronize. |
| | Database Account | Enter the database account. See Required account permissions. |
| | Database Password | Enter the password for the database account. |
| | Encryption | Select Non-encrypted or SSL-encrypted. For SSL, upload CA Certificate, and optionally Client Certificate and Private Key of Client Certificate. For ApsaraDB RDS for PostgreSQL SSL configuration, see SSL encryption. |
| Destination Database | Select a DMS database instance | Select an existing registered database, or configure the connection manually. |
| | Database Type | Select AnalyticDB for PostgreSQL. |
| | Access Method | Select Alibaba Cloud Instance. |
| | Instance Region | Select the region of the destination AnalyticDB for PostgreSQL instance. |
| | Instance ID | Select the destination AnalyticDB for PostgreSQL instance. |
| | Database Name | Enter the name of the destination database. |
| | Database Account | Enter the database account. See Required account permissions. |
| | Database Password | Enter the password for the database account. |

Step 5: Test connectivity

Click Test Connectivity and Proceed at the bottom of the page.

  • For Alibaba Cloud database instances, DTS automatically adds its CIDR blocks to the instance whitelist.

  • For self-managed databases hosted on ECS, DTS adds its CIDR blocks to the ECS security group rules. If the database spans multiple ECS instances, add the CIDR blocks to each instance manually.

  • For on-premises databases or databases hosted on third-party clouds, add the DTS CIDR blocks manually. See CIDR blocks of DTS servers.

Warning

Adding DTS CIDR blocks to whitelists or security groups introduces security exposure. Before proceeding, take preventive measures such as strengthening credentials, limiting exposed ports, auditing API calls, and reviewing whitelist rules regularly. Consider connecting via Express Connect, VPN Gateway, or Smart Access Gateway for additional security.

Step 6: Configure synchronization objects

Select objects

| Parameter | Description |
| --- | --- |
| Synchronization Types | Select Schema Synchronization, Full Data Synchronization, and Incremental Data Synchronization. Schema and full data synchronization must complete before incremental synchronization begins. |
| DDL and DML Operations to Be Synchronized | Select the operations to synchronize. For per-table operation selection, right-click an object in Selected Objects and choose the operations. See SQL operations supported for incremental synchronization. |
| Processing Mode of Conflicting Tables | Precheck and Report Errors: fails the precheck if tables with identical names exist in the source and destination. Use object name mapping if destination tables cannot be renamed or deleted. Ignore Errors and Proceed: skips the check. During full sync, existing destination records with matching primary or unique keys are retained; during incremental sync, they are overwritten. Use with caution. |
| Capitalization of Object Names in Destination Instance | Controls the capitalization of database, table, and column names. Default: DTS default policy. See Specify the capitalization of object names in the destination instance. |
| Source Objects | Select objects in the Source Objects section and click the right arrow icon to add them to Selected Objects. Columns, tables, or schemas can be selected. |
| Selected Objects | To rename a single object, right-click it. See Map the name of a single object. To rename multiple objects at once, click Batch Edit. See Map multiple object names at a time. To filter rows by condition, right-click a table and specify conditions. See Set filter conditions. |

Object name mapping may cause dependent objects to fail synchronization.

Configure advanced settings

Click Next: Advanced Settings, then configure the following:

| Parameter | Description |
| --- | --- |
| Dedicated Cluster for Task Scheduling | By default, DTS schedules tasks to the shared cluster. For higher stability, purchase a dedicated cluster. See What is a DTS dedicated cluster. |
| Retry Time for Failed Connections | The time DTS retries after a connection failure. Valid values: 10 to 1440 minutes. Default: 720 minutes. Set a value greater than 30 minutes. If different tasks share the same source or destination, the shortest retry time takes effect. Charges apply during the retry period. |
| Retry Time for Other Issues | The time DTS retries after DDL or DML failures. Valid values: 1 to 1440 minutes. Default: 10 minutes. Set a value greater than 10 minutes. Must be less than Retry Time for Failed Connections. |
| Enable Throttling for Full Data Synchronization | Limit the load on the source and destination servers during full synchronization by setting Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s). Available only when Full Data Synchronization is selected. |
| Enable Throttling for Incremental Data Synchronization | Set RPS of Incremental Data Synchronization and Data synchronization speed for incremental synchronization (MB/s) to reduce the load on the destination during incremental synchronization. |
| Environment Tag | Optional. Tag the DTS instance for organizational purposes. |
| Configure ETL | Select Yes to enable the extract, transform, and load (ETL) feature and enter data processing statements. Select No to skip. See What is ETL? and Configure ETL in a data migration or data synchronization task. |
| Monitoring and Alerting | Select Yes to receive alerts when the task fails or synchronization latency exceeds the threshold, and configure the alert threshold and notification settings. See Configure monitoring and alerting when you create a DTS task. |

Configure data verification

Click Next Step: Data Verification to set up verification. See Configure a data verification task.

(Optional) Configure destination table fields

Click Next: Configure Database and Table Fields to set Type, Primary Key Column, and Distribution Key for each destination table.

This step is available only when Schema Synchronization is selected. Set Definition Status to All to view and edit all tables. For composite primary keys, one or more primary key columns must be designated as the Distribution Key. See Manage tables and Define table distribution.
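The distribution-key requirement can be sketched as DDL on the destination. The table below is hypothetical; it shows a composite primary key whose first column also serves as the distribution key, as AnalyticDB for PostgreSQL requires:

```sql
-- Hypothetical destination table: the distribution key (order_id)
-- must be drawn from the composite primary key columns.
CREATE TABLE public.orders (
    order_id  bigint,
    region_id int,
    amount    numeric(12, 2),
    PRIMARY KEY (order_id, region_id)
)
DISTRIBUTED BY (order_id);
```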

Step 7: Run a precheck

Click Next: Save Task Settings and Precheck.

To preview the API parameters for this configuration, hover over the button and click Preview OpenAPI parameters before proceeding.

The task cannot start until the precheck passes.

If the precheck fails, click View Details next to each failed item, fix the issues, and click Precheck Again.

If an item shows an alert:

  • If the alert cannot be ignored, fix the issue and rerun the precheck.

  • If the alert can be ignored, click Confirm Alert Details, then in the View Details dialog click Ignore, click OK, and then click Precheck Again. Ignoring alerts may cause data inconsistency.

Step 8: Purchase the synchronization instance

Wait until Success Rate reaches 100%, then click Next: Purchase Instance.

On the buy page, configure the following:

| Parameter | Description |
| --- | --- |
| Billing Method | Subscription: pay upfront, suitable for long-term use. Pay-as-you-go: billed hourly, suitable for short-term use. Release pay-as-you-go instances when they are no longer needed to avoid ongoing charges. |
| Resource Group Settings | Select the resource group for the instance. Default: default resource group. See What is Resource Management? |
| Instance Class | Select a class based on your required synchronization throughput. See Instance classes of data synchronization instances. |
| Subscription Duration | Available only for the Subscription billing method. Options: 1 to 9 months, or 1, 2, 3, or 5 years. |

Read and accept the Data Transmission Service (Pay-as-you-go) Service Terms, then click Buy and Start. In the confirmation dialog, click OK.

The task appears in the task list. Monitor its progress there.

What's next