Data Transmission Service (DTS) supports one-way data synchronization between ApsaraDB RDS for PostgreSQL instances. Use this to replicate data across instances for disaster recovery, read scaling, or real-time analytics.
In this topic, you will:
Review prerequisites and limitations before configuring the task
Configure the source and destination databases
Select synchronization objects and configure advanced settings
Run the precheck and purchase a synchronization instance
Prerequisites
Before you begin, make sure that:
The source and destination ApsaraDB RDS for PostgreSQL instances are created. See Create an ApsaraDB RDS for PostgreSQL instance.
For supported source and destination database versions, see Overview of data synchronization scenarios.
The destination database version is the same as or later than the source database version. Version mismatches may cause compatibility issues.
The available storage space of the destination instance is larger than the total data size of the source instance.
The `wal_level` parameter of the source database is set to `logical`.
Write-ahead logging (WAL) retention meets the following requirements. If the retention period is too short, DTS may fail to obtain WAL logs, which causes task failure and, in exceptional circumstances, data inconsistency or loss. Set the retention period based on the following table. Otherwise, service reliability or performance in the Service Level Agreement (SLA) of DTS may not be guaranteed. After full data synchronization completes, you can reduce the retention period, but keep it above 24 hours.

| Synchronization mode | Minimum WAL log retention |
|---|---|
| Incremental synchronization only | More than 24 hours |
| Full data synchronization + incremental synchronization | At least 7 days |
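The `wal_level` setting and WAL retention can be checked with standard PostgreSQL commands. This is a sketch; on ApsaraDB RDS for PostgreSQL, these parameters are changed through the console parameter settings rather than SQL, and changing `wal_level` typically requires an instance restart:

```sql
-- Verify that logical decoding is enabled on the source instance.
SHOW wal_level;        -- must return 'logical'

-- Check WAL retention. On PostgreSQL 13 and later the parameter is
-- wal_keep_size; earlier versions use wal_keep_segments instead.
SHOW wal_keep_size;
```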
Limitations
Review the following limitations before configuring the synchronization task.
Source database limitations
Tables must have PRIMARY KEY or UNIQUE constraints, and all fields must be unique. Otherwise, the destination database may contain duplicate records.
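To find source tables that would violate this requirement before you configure the task, a catalog query along these lines can help. This is a sketch; the `public` schema filter is an assumption, so adjust it to the schemas you plan to synchronize:

```sql
-- List regular tables in schema 'public' that have neither a PRIMARY KEY
-- nor a UNIQUE constraint.
SELECT c.relname AS table_name
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'
  AND n.nspname = 'public'
  AND NOT EXISTS (
    SELECT 1 FROM pg_constraint con
    WHERE con.conrelid = c.oid AND con.contype IN ('p', 'u')
  );
```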
If the destination table was created outside DTS (that is, Schema Synchronization was not selected for Synchronization Types), the destination table must have the same PRIMARY KEY or NOT NULL UNIQUE constraints as the source table.
If you select individual tables and need to edit them in the destination (such as renaming tables or columns), a single task supports up to 5,000 tables. For more than 5,000 tables, configure multiple tasks or synchronize the entire database.
DTS cannot synchronize temporary tables, internal triggers, or internal procedures and functions written in C.
DTS can synchronize custom data types of the COMPOSITE, ENUM, and RANGE categories.
Tables must have PRIMARY KEY, FOREIGN KEY, UNIQUE, or CHECK constraints.
During schema synchronization and full data synchronization, do not execute DDL statements that change database or table schemas. Otherwise, the task fails.
If the source database has long-running transactions during incremental synchronization, WAL logs generated before those transactions commit may accumulate and exhaust disk space.
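Long-running transactions on the source can be spotted with a query like the following. This is a sketch; the one-hour threshold is an arbitrary example:

```sql
-- Find sessions whose transactions have been open for more than one hour.
SELECT pid, usename, xact_start, state, query
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
  AND xact_start < now() - interval '1 hour'
ORDER BY xact_start;
```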
If the source database undergoes a major version upgrade while a synchronization task is running, the task fails and cannot be recovered. Reconfigure the task.
Primary/secondary switchover: Enable the Logical Replication Slot Failover feature on the source instance before performing a switchover. Without it, logical subscriptions are interrupted and the task fails.
Single data size limit: If a single incremental data change exceeds 256 MB, the synchronization instance fails and cannot be recovered. Reconfigure the task.
General limitations
A single synchronization task can only synchronize data from one database. Create a separate task for each database.
DTS cannot synchronize tables with inheritance relationships across schemas.
SERIAL data type: A SERIAL column is backed by a sequence that is automatically created in the source database. If you select Schema Synchronization as a synchronization type, also select Sequence or synchronize the entire schema. Otherwise, the task fails to run.
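For context, SERIAL is shorthand for an integer column backed by an automatically created sequence. The `orders` table below is a hypothetical illustration:

```sql
-- Declaring a SERIAL column implicitly creates a backing sequence.
CREATE TABLE orders (id SERIAL PRIMARY KEY, note text);

-- Look up the sequence that PostgreSQL created for the column
-- (typically public.orders_id_seq for this table).
SELECT pg_get_serial_sequence('orders', 'id');
```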
Schema-level synchronization: If you create a table or rename a table within the synchronized schema, run the following statement before writing data to the table to ensure data consistency:
`ALTER TABLE schema.table REPLICA IDENTITY FULL;`
Replace `schema` and `table` with the actual names. Do not lock the table when running this statement; otherwise, a deadlock occurs. Run this statement during off-peak hours.
DTS does not validate metadata such as sequences. Manually verify metadata validity.
Before switching workloads to the destination database: Update the starting value of sequences in the destination database. Sequences in the destination do not automatically continue from the maximum value of source sequences.
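For example, a destination sequence can be advanced past the source's current maximum with `setval`. The table and sequence names below are hypothetical; substitute your own:

```sql
-- Advance the destination sequence so new inserts do not collide with
-- rows synchronized from the source.
SELECT setval('public.orders_id_seq',
              (SELECT COALESCE(MAX(id), 1) FROM public.orders));
```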
DTS creates the following temporary tables in the source database. Do not delete these tables during synchronization — they are automatically deleted after the DTS instance is released:
`public.dts_pg_class`, `public.dts_pg_attribute`, `public.dts_pg_type`, `public.dts_pg_enum`, `public.dts_postgres_heartbeat`, `public.dts_ddl_command`, `public.dts_args_session`, `public.aliyun_dts_instance`
DTS adds a heartbeat table named `dts_postgres_heartbeat` to the source database to track synchronization latency.
DTS creates a replication slot prefixed with `dts_sync_` in the source database to obtain incremental logs from the last 15 minutes. The replication slot is automatically deleted when the DTS instance is released. If you change the source database password or remove DTS IP addresses from the whitelist, the replication slot cannot be automatically deleted; delete it manually to prevent replication slot accumulation. After a primary/secondary switchover, log in to the secondary database to delete the replication slot.
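Leftover slots can be listed and removed with standard PostgreSQL functions. The slot name below is hypothetical; never drop a slot that is still in use by a running task:

```sql
-- List DTS replication slots that remain on the source database.
SELECT slot_name, active
FROM pg_replication_slots
WHERE slot_name LIKE 'dts_sync_%';

-- Drop an inactive leftover slot (replace with a name from the query above).
SELECT pg_drop_replication_slot('dts_sync_example_slot');
```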

Performance impact: Initial full data synchronization consumes read and write resources on both the source and destination databases, which increases server load. Schedule full data synchronization during off-peak hours. After full synchronization completes, the destination tablespace is larger than the source because concurrent INSERT operations cause fragmentation in the destination tables.
Online DDL during synchronization: If only DTS writes to the destination tables, you can use Data Management Service (DMS) to perform online DDL on source tables. See Change schemas without locking tables. If other sources also write to the destination database during synchronization, data inconsistency or data loss may occur.
Foreign keys, triggers, or event triggers: If the destination database account is a privileged or superuser account and the synchronized tables contain foreign keys, triggers, or event triggers, DTS temporarily sets `session_replication_role` to `replica` at the session level. If the account does not have sufficient permissions, set `session_replication_role` to `replica` manually. Cascade update or delete operations on the source during this time may cause data inconsistency. After the task is released, change `session_replication_role` back to `origin`.
Task failure recovery: If a DTS task fails, DTS technical support attempts to restore it within 8 hours. During restoration, the task may be restarted and task parameters (not database parameters) may be modified. For the parameters that may be changed, see Modify the parameters of a DTS instance.
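The parameter change described above corresponds to the following standard PostgreSQL statements, run in the session that applies changes to the destination:

```sql
-- While the task runs: disable trigger firing and foreign-key
-- enforcement for this session only.
SET session_replication_role = replica;

-- After the task is released: restore the default behavior.
SET session_replication_role = origin;
```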
Source-specific limitations
| Source database type | Additional limitation |
|---|---|
| ApsaraDB RDS for PostgreSQL | Do not modify the endpoint or zone of the source instance during synchronization. The task will fail. |
| Self-managed PostgreSQL | The max_wal_senders and max_replication_slots values must each be greater than the sum of the number of existing replication slots and the number of DTS instances you plan to create for this database. |
| Cloud SQL for PostgreSQL (Google Cloud Platform) | The database account must have the cloudsqlsuperuser permission. The account must own the selected objects, or you must grant the OWNER permission on those objects to the account. An account with cloudsqlsuperuser permission cannot manage data owned by other accounts that also have cloudsqlsuperuser permission. |
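For a self-managed PostgreSQL source, the relevant limits can be compared against the current slot count with standard commands (a sketch; both parameters must exceed the number of existing slots plus the number of DTS instances you plan to create):

```sql
-- Current limits on the self-managed source.
SHOW max_wal_senders;
SHOW max_replication_slots;

-- Number of replication slots already in use.
SELECT count(*) FROM pg_replication_slots;
```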
Billing
| Synchronization type | Cost |
|---|---|
| Schema synchronization and full data synchronization | Free |
| Incremental data synchronization | Charged. See Billing overview. |
Supported synchronization topologies
One-way one-to-one synchronization
One-way one-to-many synchronization
One-way cascade synchronization
One-way many-to-one synchronization
For details, see Synchronization topologies.
Supported objects
| Object type | Details |
|---|---|
| SCHEMA and TABLE | Includes PRIMARY KEY, UNIQUE KEY, FOREIGN KEY, built-in data types, and DEFAULT CONSTRAINT |
| Other objects | VIEW, PROCEDURE (PostgreSQL V11 or later), FUNCTION, RULE, SEQUENCE, EXTENSION, TRIGGER, AGGREGATE, INDEX, OPERATOR, DOMAIN |
SQL operations that can be synchronized
| Operation type | SQL statements |
|---|---|
| DML | INSERT, UPDATE, DELETE |
| DDL | Only tasks created after October 1, 2020 support DDL synchronization |
DDL synchronization requirements and limitations:
To use a data synchronization task created before May 12, 2023 to synchronize DDL operations, you must create triggers and functions in the source database to capture DDL information before you configure the task. For more information, see Use triggers and functions to implement incremental DDL migration for PostgreSQL databases.
The BIT data type cannot be synchronized during incremental data synchronization.
If the source database account is a privileged account and the minor engine version of the ApsaraDB RDS for PostgreSQL instance is 20210228 or later, DTS supports the following DDL statements. To update the minor engine version, see Update the minor engine version:
CREATE TABLE, DROP TABLE
ALTER TABLE: RENAME TABLE, ADD COLUMN, ADD COLUMN DEFAULT, ALTER COLUMN TYPE, DROP COLUMN, ADD CONSTRAINT, ADD CONSTRAINT CHECK, ALTER COLUMN DROP DEFAULT
TRUNCATE TABLE (requires self-managed PostgreSQL version 11 or later)
CREATE INDEX ON TABLE
Important:
- CASCADE and RESTRICT cannot be synchronized as part of DDL statements.
- DDL statements from sessions that execute `SET session_replication_role = replica;` cannot be synchronized.
- DDL statements executed by invoking functions cannot be synchronized.
- If a commit contains both DML and DDL statements, the DDL statements are not synchronized.
- If a commit contains DDL statements for objects outside the synchronization scope, those DDL statements are not synchronized.
Configure the synchronization task
Step 1: Open the Data Synchronization page
Use one of the following methods.
DTS console
Log in to the DTS console.
In the left navigation pane, click Data Synchronization.
In the upper-left corner, select the region where the synchronization instance will reside.
DMS console
The exact navigation path may vary based on the DMS console mode and layout. See Simple mode and Customize the layout and style of the DMS console.
Log in to the DMS console.
In the top navigation bar, move the pointer over Data + AI and choose DTS (DTS) > Data Synchronization.
From the drop-down list to the right of Data Synchronization Tasks, select the region where the synchronization instance will reside.
Step 2: Create a task
Click Create Task.
Step 3: Configure source and destination databases
After configuring the source and destination databases, read the Limits displayed on the page. Skipping this may cause task failure or data inconsistency.
General settings
| Parameter | Description |
|---|---|
| Task Name | DTS generates a name automatically. Specify a descriptive name to identify the task. The name does not need to be unique. |
Source database
| Parameter | Description |
|---|---|
| Database Type | Select PostgreSQL. |
| Connection Type | Select Alibaba Cloud Instance. |
| Instance Region | Select the region where the source instance resides. |
| Replicate Data Across Alibaba Cloud Accounts | Select No if using the current Alibaba Cloud account. |
| Instance ID | Select the source ApsaraDB RDS for PostgreSQL instance. |
| Database Name | Enter the name of the source database. |
| Database Account | Enter a privileged account that owns the database. See Create an account and Create a database. Important For ApsaraDB RDS for PostgreSQL V9.4 with DML-only synchronization, only the REPLICATION permission is required. |
| Database Password | Enter the password for the database account. |
| Encryption | Select Non-encrypted for this example. To use SSL encryption, select SSL-encrypted and upload the CA Certificate. If using a client certificate, also upload the Client Certificate and Private Key of Client Certificate, and specify the Private Key Password of Client Certificate. For SSL configuration on an ApsaraDB RDS instance, see SSL encryption. |
Destination database
| Parameter | Description |
|---|---|
| Database Type | Select PostgreSQL. |
| Connection Type | Select Alibaba Cloud Instance. |
| Instance Region | Select the region where the destination instance resides. |
| Instance ID | Select the destination ApsaraDB RDS for PostgreSQL instance. |
| Database Name | Enter the name of the destination database. |
| Database Account | Enter an account with owner permissions on schemas. See Create an account. |
| Database Password | Enter the password for the database account. |
| Encryption | Same SSL options as the source database. |
Step 4: Test connectivity
Click Test Connectivity and Proceed.
Make sure the CIDR blocks of DTS servers are added to the security settings of both source and destination databases. See Add the CIDR blocks of DTS servers. For self-managed databases with access methods other than Alibaba Cloud Instance, click Test Connectivity in the CIDR Blocks of DTS Servers dialog box.
Step 5: Configure synchronization objects
Configure Objects step
| Parameter | Description |
|---|---|
| Synchronization Types | By default, Incremental Data Synchronization is selected. Also select Schema Synchronization and Full Data Synchronization. After the precheck, DTS synchronizes historical data from the source to the destination as the basis for incremental synchronization. Note Selecting Schema Synchronization includes foreign keys in the synchronized schemas. |
| Synchronization Topology | Select One-way Synchronization. |
| Processing Mode of Conflicting Tables | Precheck and Report Errors: Checks whether the destination database contains tables with the same names as the source. The precheck fails if conflicts exist. To resolve naming conflicts without deleting or renaming destination tables, use object name mapping. Ignore Errors and Proceed: Skips the conflict check. During full synchronization, existing records in the destination are retained. During incremental synchronization, existing records are overwritten. If schemas differ, the task may fail or synchronize only some columns. Use with caution. |
| Capitalization of object names in destination instance | Default: DTS default policy. See Specify the capitalization of object names. |
| Source Objects | Select schemas or tables and click the icon to move them to Selected Objects. If you select tables, DTS does not synchronize other objects such as views, triggers, or stored procedures. If a table contains the SERIAL data type and Schema Synchronization is selected, also select Sequence or entire schema synchronization. |
| Selected Objects | Right-click an object to rename it (single object), or click Batch Edit to rename multiple objects. Right-click an object to select specific SQL operations or set WHERE filter conditions for data filtering. Note If you use object name mapping, other objects that depend on the renamed object may fail to synchronize. |
Click Next: Advanced Settings.
Advanced settings
| Parameter | Description |
|---|---|
| Dedicated Cluster for Task Scheduling | By default, DTS schedules the task to the shared cluster. For improved task stability, purchase a dedicated cluster. See What is a DTS dedicated cluster. |
| Retry Time for Failed Connections | The time range during which DTS retries failed connections. Valid values: 10–1440 minutes. Default: 720 minutes. Set this to more than 30 minutes. If multiple tasks share a source or destination database, the shortest retry time among those tasks takes effect. DTS charges for the instance during retries. |
| Retry Time for Other Issues | The time range during which DTS retries failed DDL or DML operations. Valid values: 1–1440 minutes. Default: 10 minutes. Set this to more than 10 minutes and less than Retry Time for Failed Connections. |
| Enable Throttling for Full Data Synchronization | Limits read/write throughput during full synchronization to reduce database server load. Configure Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s). Available only when Full Data Synchronization is selected. |
| Enable Throttling for Incremental Data Synchronization | Limits throughput during incremental synchronization. Configure RPS of Incremental Data Synchronization and Data synchronization speed for incremental synchronization (MB/s). |
| Environment Tag | Optional. Tag the DTS instance for environment identification. |
| Configure ETL | Select Yes to enable the extract, transform, and load (ETL) feature and enter data processing statements. See What is ETL? and Configure ETL in a data migration or data synchronization task. Select No to skip. |
| Monitoring and Alerting | Select Yes to receive alert notifications when the task fails or synchronization latency exceeds the threshold. Configure the alert threshold and notification settings. See Configure monitoring and alerting. |
Click Next Step: Data Verification to configure data verification. See Configure a data verification task.
Step 6: Save task settings and run the precheck
To view the API parameters for this task configuration, hover over Next: Save Task Settings and Precheck and click Preview OpenAPI parameters.
Click Next: Save Task Settings and Precheck.
DTS runs a precheck before starting the synchronization task.
If the precheck fails, click View Details next to the failed item, resolve the issue, and click Precheck Again.
If an alert is triggered:
If the alert cannot be ignored, fix the issue and rerun the precheck.
If the alert can be ignored, click Confirm Alert Details, click Ignore, click OK, and then click Precheck Again. Ignoring alerts may cause data inconsistency.
Step 7: Purchase a synchronization instance
Wait for Success Rate to reach 100%, then click Next: Purchase Instance.
On the purchase page, configure the following parameters:
| Parameter | Description |
|---|---|
| Billing Method | Subscription: Pay upfront for a set duration. More cost-effective for long-term use. Pay-as-you-go: Billed hourly. Suitable for short-term use. Release the instance when no longer needed to avoid unnecessary charges. |
| Resource Group Settings | The resource group for the synchronization instance. Default: default resource group. See What is Resource Management? |
| Instance Class | The instance class determines synchronization speed. Select based on your throughput requirements. See Instance classes of data synchronization instances. |
| Subscription Duration | Available for the Subscription billing method only. Options: 1–9 months, 1 year, 2 years, 3 years, or 5 years. |

Read and select Data Transmission Service (Pay-as-you-go) Service Terms.
Click Buy and Start, then click OK.
The task appears in the task list. Monitor its progress from there.
What's next
Synchronization topologies — Learn about supported one-way and two-way synchronization patterns.
Map object names — Rename objects in the destination instance.
Set filter conditions — Use SQL conditions to filter data during synchronization.
Configure a data verification task — Verify that source and destination data are consistent.