Use Data Transmission Service (DTS) to synchronize data from an ApsaraDB RDS for PostgreSQL instance to an AnalyticDB for PostgreSQL instance. DTS supports schema synchronization, full data synchronization, and incremental data synchronization, so your analytical warehouse stays continuously updated with minimal manual effort.
Prerequisites
Before you begin, make sure you have:
A destination AnalyticDB for PostgreSQL instance with available storage space greater than the total size of data in the source ApsaraDB RDS for PostgreSQL instance. For more information, see Create an instance.
A database created in the destination instance to receive the synchronized data. For more information, see the CREATE DATABASE section in SQL syntax.
Billing
| Synchronization type | Fee |
|---|---|
| Schema synchronization and full data synchronization | Free |
| Incremental data synchronization | Charged. See Billing overview. |
Limitations
Source database requirements
Table structure
Tables to be synchronized must have PRIMARY KEY or UNIQUE constraints, and all fields must be unique. Otherwise, the destination database may contain duplicate records.
If you create a synchronization task without selecting Schema Synchronization, make sure the destination table has the same PRIMARY KEY or NOT NULL UNIQUE constraints as the source table.
If you select individual tables as the synchronization scope and plan to rename tables or columns in the destination, a single task can handle at most 5,000 tables. For more than 5,000 tables, configure multiple tasks, or synchronize the entire database instead.
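You can audit the source database for tables that would violate the constraint requirement before configuring the task. The following query is one possible check (a sketch; adjust the schema filter to match your synchronization scope):

```sql
-- List tables in the public schema that lack a PRIMARY KEY or UNIQUE
-- constraint and would therefore risk duplicate records in the destination.
SELECT t.table_schema, t.table_name
FROM information_schema.tables t
WHERE t.table_type = 'BASE TABLE'
  AND t.table_schema = 'public'
  AND NOT EXISTS (
    SELECT 1
    FROM information_schema.table_constraints tc
    WHERE tc.table_schema = t.table_schema
      AND tc.table_name = t.table_name
      AND tc.constraint_type IN ('PRIMARY KEY', 'UNIQUE')
  );
```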
WAL log requirements
| Parameter or requirement | Value | Notes |
|---|---|---|
| wal_level | Must be set to logical | Required for logical replication |
| WAL log retention (incremental sync only) | More than 24 hours | Insufficient retention may cause task failure or data loss |
| WAL log retention (full + incremental sync) | At least 7 days | After full sync completes, you can set the retention period to more than 24 hours |
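You can confirm the logical replication setting directly on the source database. For a self-managed instance the parameter can also be changed with ALTER SYSTEM (a sketch; on ApsaraDB RDS for PostgreSQL, change parameters from the console instead, and note that wal_level requires a restart to take effect):

```sql
-- Verify that logical replication is enabled; must return 'logical'.
SHOW wal_level;

-- Self-managed instances only; requires a server restart.
-- ALTER SYSTEM SET wal_level = 'logical';
```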
Operations that cause task failure
| Operation | Impact | Mitigation |
|---|---|---|
| DDL statements during schema or full data synchronization | Task fails | Perform DDL changes before or after the sync window |
| Major version upgrade on the source database | Unrecoverable failure | Reconfigure the task from scratch after upgrading |
| Single incremental data change exceeds 256 MB | Unrecoverable failure | Reconfigure the task |
| Long-running transactions in the source database | WAL log accumulation, source disk may run out of space | Monitor and close long-running transactions before sync |
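To spot long-running transactions before they cause WAL accumulation, a query against pg_stat_activity along these lines works (the 30-minute threshold is illustrative; tune it for your workload):

```sql
-- Find transactions open longer than 30 minutes.
SELECT pid, usename, state, xact_start,
       now() - xact_start AS duration, query
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
  AND now() - xact_start > interval '30 minutes'
ORDER BY xact_start;

-- To terminate a runaway session (use with care):
-- SELECT pg_terminate_backend(<pid>);
```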
Other limits
The destination table cannot be an append-optimized (AO) table.
For this synchronization path, you can select only tables as the objects to be synchronized.
DTS does not synchronize the following data types: BIT, VARBIT, GEOMETRY, UUID, TSQUERY, TSVECTOR, and TXID_SNAPSHOT.
A single task synchronizes data from only one database. To synchronize multiple databases, create a separate task for each.
DTS cannot synchronize tables with inheritance relationships across schemas.
DDL operations can only be synchronized by tasks created after October 1, 2020.
Column mapping or schema mismatches between source and destination tables cause data in source-only columns to be lost.
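Before starting the task, you can scan the source database for columns of the unsupported data types listed above (a sketch using information_schema; type names are the PostgreSQL internal udt_name values):

```sql
-- List columns whose types DTS does not synchronize on this path.
SELECT table_schema, table_name, column_name, udt_name
FROM information_schema.columns
WHERE udt_name IN ('bit', 'varbit', 'geometry', 'uuid',
                   'tsquery', 'tsvector', 'txid_snapshot')
  AND table_schema NOT IN ('pg_catalog', 'information_schema');
```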
Schema-level synchronization
If you synchronize an entire schema, run the following statement on any table you create or rename in that schema before writing data to it:
ALTER TABLE schema.table REPLICA IDENTITY FULL;
Replace schema and table with the actual schema and table names. Run this statement during off-peak hours, and avoid locking the table — locking can cause a deadlock.
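To reduce the risk of the ALTER TABLE statement queueing behind long-running queries and blocking other sessions, you can wrap it with a lock timeout so it fails fast instead of waiting (a defensive sketch, not required by DTS; myschema.mytable is a placeholder):

```sql
-- Give up after 5 seconds rather than wait indefinitely for the table lock.
SET lock_timeout = '5s';
ALTER TABLE myschema.mytable REPLICA IDENTITY FULL;
RESET lock_timeout;
```

If the statement times out, simply retry during a quieter window.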
Temporary objects created by DTS
DTS creates the following temporary tables in the source database to capture DDL statements, schema changes, and heartbeat information. Do not delete them while the task is running — the task will fail. DTS removes them automatically after the instance is released.
public.dts_pg_class
public.dts_pg_attribute
public.dts_pg_type
public.dts_pg_enum
public.dts_postgres_heartbeat
public.dts_ddl_command
public.dts_args_session
public.aliyun_dts_instance
Among these, the heartbeat table dts_postgres_heartbeat is used to track synchronization latency.
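To audit which of these helper tables exist in the source database (for example, when checking for leftovers after an instance is released), a query along these lines works:

```sql
-- List DTS-created helper tables in the public schema.
SELECT schemaname, tablename
FROM pg_tables
WHERE schemaname = 'public'
  AND (tablename LIKE 'dts%' OR tablename LIKE 'aliyun_dts%');
```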
Replication slot
DTS creates a replication slot with the prefix dts_sync_ in the source database. This slot provides access to incremental logs from the past 15 minutes.
The replication slot is automatically deleted when the DTS task fails or the DTS instance is released. If the slot is not deleted automatically (for example, because the source database password was changed, or DTS IP addresses were removed from the whitelist), delete it manually in the source database to prevent slot accumulation.
After a primary/secondary switchover on the source PostgreSQL database, log on to the secondary database and delete the replication slot manually.
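You can inspect and clean up DTS replication slots with the standard system views and functions (the slot name below is a placeholder; drop a slot only after confirming its task has been released):

```sql
-- Inspect DTS replication slots and their WAL retention positions.
SELECT slot_name, active, restart_lsn
FROM pg_replication_slots
WHERE slot_name LIKE 'dts_sync%';

-- Drop a leftover slot manually (placeholder name shown):
-- SELECT pg_drop_replication_slot('dts_sync_xxxx');
```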
Writing from multiple sources
If data from other sources is written to the destination database during synchronization, data inconsistency may occur. For example, running online DDL statements via Data Management (DMS) while other sources write to the destination may cause data loss. To avoid this, use DTS as the only writer to the destination.
DTS task failure and recovery
If a task fails, DTS technical support attempts to restore it within 8 hours. During restoration, the task may be restarted and task parameters may be adjusted. Database parameters are not modified.
Source-specific limits
ApsaraDB RDS for PostgreSQL
Do not modify the endpoint or zone of the source instance while the task is running.
For primary/secondary switchovers, enable the Logical Replication Slot Failover feature to prevent logical subscription interruptions. See Logical Replication Slot Failover.
Self-managed PostgreSQL
Set max_wal_senders and max_replication_slots to values greater than the sum of existing replication slots and the number of DTS instances you plan to create.
A primary/secondary switchover on a self-managed PostgreSQL database causes the synchronization task to fail.
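For a self-managed source, these parameters can be set with ALTER SYSTEM (values below are illustrative; both parameters require a server restart to take effect):

```sql
-- Illustrative values: greater than existing slots + planned DTS instances.
ALTER SYSTEM SET max_wal_senders = 10;
ALTER SYSTEM SET max_replication_slots = 10;
-- Restart PostgreSQL for the changes to take effect.
```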
Google Cloud SQL for PostgreSQL
Set Database Account to an account with the cloudsqlsuperuser permission.
Select only objects that the specified account owns or is authorized to manage. Otherwise, grant the OWNER permission on those objects to the account.
An account with cloudsqlsuperuser cannot manage data owned by another cloudsqlsuperuser account.
SQL operations supported for incremental synchronization
| Operation type | Supported statements |
|---|---|
| DML | INSERT, UPDATE, DELETE |
| DDL | See below |
DDL synchronization applies to tasks created after October 1, 2020. For tasks created before May 12, 2023, set up triggers and functions in the source database first. See Use triggers and functions to implement incremental DDL migration for PostgreSQL databases.
When the source database account is a privileged account and the minor engine version of the ApsaraDB RDS for PostgreSQL instance is 20210228 or later, DTS can synchronize the following DDL statements. To update the minor engine version, see Update the minor engine version.
CREATE TABLE, DROP TABLE
ALTER TABLE: RENAME TABLE, ADD COLUMN, ADD COLUMN DEFAULT, ALTER COLUMN TYPE, DROP COLUMN, ADD CONSTRAINT, ADD CONSTRAINT CHECK, ALTER COLUMN DROP DEFAULT
TRUNCATE TABLE (source self-managed PostgreSQL version 11 or later)
CREATE INDEX ON TABLE
DDL synchronization limitations:
CASCADE and RESTRICT clauses are not synchronized.
DDL statements from sessions that run SET session_replication_role = replica are not synchronized.
DDL statements executed inside functions are not synchronized.
If a single commit contains both DML and DDL statements, the DDL statements are not synchronized.
If a single commit contains DDL statements for objects not in the synchronization scope, those DDL statements are not synchronized.
BIT type data cannot be synchronized during incremental data synchronization.
Required account permissions
| Instance | Required permissions | Reference |
|---|---|---|
| Source ApsaraDB RDS for PostgreSQL | Privileged account that owns the database | Create an account and Create a database |
| Destination AnalyticDB for PostgreSQL | Read and write permissions on the destination database (or initial account, or account with RDS_SUPERUSER) | Create a database account and Manage users and permissions |
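On the AnalyticDB for PostgreSQL side, the required account and grants can be issued with standard statements (a sketch; dts_user, your_password, and dest_db are placeholders, and on ApsaraDB RDS for PostgreSQL the privileged source account is typically created from the console instead):

```sql
-- Run as the initial account on the destination instance.
CREATE USER dts_user WITH PASSWORD 'your_password';
GRANT ALL ON DATABASE dest_db TO dts_user;
-- Grant schema-level privileges so DTS can create and write tables.
GRANT ALL ON SCHEMA public TO dts_user;
```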
Create a synchronization task
Full data synchronization increases the load on both the source and destination database servers. Run the task during off-peak hours. After initial full data synchronization completes, the destination tablespace may be larger than the source due to fragmentation from concurrent INSERT operations.
Step 1: Go to the Data Synchronization Tasks page
Log on to the Data Management (DMS) console.
In the top navigation bar, click Data + AI.
In the left-side navigation pane, choose DTS > Data Synchronization.
Navigation may vary based on the DMS console mode. See Simple mode and Customize the layout and style of the DMS console for details. Alternatively, go directly to the Data Synchronization Tasks page in the DTS console.
Step 2: Select the region
On the right side of Data Synchronization Tasks, select the region where the synchronization instance resides.
In the new DTS console, select the region in the top navigation bar.
Step 3: Create a task
Click Create Task to open the task configuration page.
If the New Configuration Page button appears in the upper-right corner, click it to switch to the new configuration UI. Skip this step if Back to Previous Version is shown instead.
Parameters may differ slightly between the new and previous configuration pages. Use the new version.
Step 4: Configure source and destination databases
| Section | Parameter | Description |
|---|---|---|
| — | Task Name | Enter a descriptive name. DTS generates a default name, but a meaningful name makes the task easier to identify. Unique names are not required. |
| Source Database | Select a DMS database instance. | Select an existing registered database, or configure the connection manually. To register a database, see Register an Alibaba Cloud database instance or Register a database hosted on a third-party cloud service or a self-managed database. |
| Database Type | Select PostgreSQL. | |
| Access Method | Select Alibaba Cloud Instance. | |
| Instance Region | Select the region of the source ApsaraDB RDS for PostgreSQL instance. | |
| Replicate Data Across Alibaba Cloud Accounts | Select No if the source is in the same Alibaba Cloud account. | |
| Instance ID | Select the source ApsaraDB RDS for PostgreSQL instance. | |
| Database Name | Enter the name of the source database to synchronize. | |
| Database Account | Enter the database account. See Required account permissions. | |
| Database Password | Enter the password for the database account. | |
| Encryption | Select Non-encrypted or SSL-encrypted. For SSL, upload CA Certificate, and optionally Client Certificate and Private Key of Client Certificate. For ApsaraDB RDS for PostgreSQL SSL configuration, see SSL encryption. | |
| Destination Database | Select a DMS database instance. | Select an existing registered database, or configure the connection manually. |
| Database Type | Select AnalyticDB for PostgreSQL. | |
| Access Method | Select Alibaba Cloud Instance. | |
| Instance Region | Select the region of the destination AnalyticDB for PostgreSQL instance. | |
| Instance ID | Select the destination AnalyticDB for PostgreSQL instance. | |
| Database Name | Enter the name of the destination database. | |
| Database Account | Enter the database account. See Required account permissions. | |
| Database Password | Enter the password for the database account. |
Step 5: Test connectivity
Click Test Connectivity and Proceed at the bottom of the page.
For Alibaba Cloud database instances, DTS automatically adds its CIDR blocks to the instance whitelist. For self-managed databases hosted on ECS, DTS adds CIDR blocks to the ECS security group rules — if the database spans multiple ECS instances, add the CIDR blocks manually to each. For on-premises or third-party cloud databases, add the DTS CIDR blocks manually. See CIDR blocks of DTS servers.
Adding DTS CIDR blocks to whitelists or security groups introduces security exposure. Before proceeding, take preventive measures such as strengthening credentials, limiting exposed ports, auditing API calls, and reviewing whitelist rules regularly. Consider connecting via Express Connect, VPN Gateway, or Smart Access Gateway for additional security.
Step 6: Configure synchronization objects
Select objects
| Parameter | Description |
|---|---|
| Synchronization Types | Select Schema Synchronization, Full Data Synchronization, and Incremental Data Synchronization. Full and schema synchronization must complete before incremental synchronization begins. |
| DDL and DML Operations to Be Synchronized | Select the operations to synchronize. For per-table operation selection, right-click an object in Selected Objects and choose the operations. See SQL operations supported for incremental synchronization. |
| Processing Mode of Conflicting Tables | Precheck and Report Errors: fails the precheck if tables with identical names exist in source and destination. Use object name mapping if destination tables cannot be renamed or deleted. Ignore Errors and Proceed: skips the check. During full sync, existing destination records with matching primary or unique keys are retained; during incremental sync, they are overwritten. Use with caution. |
| Capitalization of Object Names in Destination Instance | Controls capitalization of database, table, and column names. Default is DTS default policy. See Specify the capitalization of object names in the destination instance. |
| Source Objects | Select objects from the Source Objects section and click the rightwards arrow to move them to the Selected Objects section. |
| Selected Objects | To rename a single object, right-click it. See Map the name of a single object. To rename multiple objects at once, click Batch Edit. See Map multiple object names at a time. To filter rows by condition, right-click a table and specify conditions. See Set filter conditions. |
Object name mapping may cause dependent objects to fail synchronization.
Configure advanced settings
Click Next: Advanced Settings, then configure the following:
| Parameter | Description |
|---|---|
| Dedicated Cluster for Task Scheduling | By default, DTS schedules tasks to the shared cluster. For higher stability, purchase a dedicated cluster. See What is a DTS dedicated cluster. |
| Retry Time for Failed Connections | The time DTS retries after a connection failure. Valid values: 10–1440 minutes. Default: 720 minutes. Set to a value greater than 30. If different tasks share the same source or destination, the shortest retry time takes effect. During retry, charges apply. |
| Retry Time for Other Issues | The time DTS retries after DDL or DML failures. Valid values: 1–1440 minutes. Default: 10 minutes. Set to a value greater than 10. Must be less than Retry Time for Failed Connections. |
| Enable Throttling for Full Data Synchronization | Limit the load on source and destination servers during full synchronization by setting Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s). Available only when Full Data Synchronization is selected. |
| Enable Throttling for Incremental Data Synchronization | Set RPS of Incremental Data Synchronization and Data synchronization speed for incremental synchronization (MB/s) to reduce load on the destination during incremental synchronization. |
| Environment Tag | Optional. Tag the DTS instance for organizational purposes. |
| Configure ETL | Select Yes to enable the extract, transform, and load (ETL) feature and enter data processing statements. Select No to skip. See What is ETL? and Configure ETL in a data migration or data synchronization task. |
| Monitoring and Alerting | Select Yes to receive alerts when the task fails or synchronization latency exceeds the threshold. Configure the alert threshold and notification settings. See Configure monitoring and alerting when you create a DTS task. |
Configure data verification
Click Next Step: Data Verification to set up verification. See Configure a data verification task.
(Optional) Configure destination table fields
Click Next: Configure Database and Table Fields to set Type, Primary Key Column, and Distribution Key for each destination table.
This step is available only when Schema Synchronization is selected. Set Definition Status to All to view and edit all tables. For composite primary keys, one or more primary key columns must be designated as the Distribution Key. See Manage tables and Define table distribution.
Step 7: Run a precheck
Click Next: Save Task Settings and Precheck.
To preview the API parameters for this configuration, hover over the button and click Preview OpenAPI parameters before proceeding.
The task cannot start until the precheck passes.
If the precheck fails, click View Details next to each failed item, fix the issues, and click Precheck Again.
If an item shows an alert:
If the alert cannot be ignored, fix the issue and rerun the precheck.
If the alert can be ignored, click Confirm Alert Details, then in the View Details dialog click Ignore, click OK, and then click Precheck Again. Ignoring alerts may cause data inconsistency.
Step 8: Purchase the synchronization instance
Wait until Success Rate reaches 100%, then click Next: Purchase Instance.
On the buy page, configure the following:
| Parameter | Description |
|---|---|
| Billing Method | Subscription: pay upfront, suitable for long-term use. Pay-as-you-go: billed hourly, suitable for short-term use. Release pay-as-you-go instances when no longer needed to avoid ongoing charges. |
| Resource Group Settings | Select the resource group for the instance. Default: default resource group. See What is Resource Management? |
| Instance Class | Select based on your required synchronization throughput. See Instance classes of data synchronization instances. |
| Subscription Duration | Available only for the Subscription billing method. Options: 1–9 months, or 1, 2, 3, or 5 years. |
Read and accept the Data Transmission Service (Pay-as-you-go) Service Terms, then click Buy and Start. In the confirmation dialog, click OK.
The task appears in the task list. Monitor its progress there.
What's next
To perform online DDL operations on the source tables without interrupting synchronization, use DMS — but only when DTS is the sole writer to the destination database. See Change schemas without locking tables.
To map object names between source and destination, see Map object names.
To modify task parameters after the task starts, see Modify the parameters of a DTS instance.