Use Data Transmission Service (DTS) to continuously replicate data from one PolarDB for PostgreSQL cluster to another in one direction. Common use cases include disaster recovery, read scaling, and data distribution across environments.
Prerequisites
Before you begin, make sure that:
- Source and destination PolarDB for PostgreSQL clusters are created. For details, see Create a database cluster.
- The `wal_level` parameter is set to `logical` on both clusters. For details, see Set cluster parameters.
- The destination cluster has more available storage than the source cluster's used storage.
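You can confirm the `wal_level` setting from any client session connected to either cluster; if it is not `logical`, change it through the console parameter settings (a restart may be required):

```sql
-- Run on both the source and destination clusters; the result must be 'logical'.
SHOW wal_level;
```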
Billing
| Synchronization type | Pricing |
|---|---|
| Schema synchronization and full data synchronization | Free |
| Incremental data synchronization | Charged. See Billing overview. |
Supported synchronization topologies
- One-way one-to-one synchronization
- One-way one-to-many synchronization
- One-way many-to-one synchronization
For descriptions and usage notes for each topology, see Synchronization topologies.
Supported synchronization objects
SCHEMA, TABLE
Includes: PRIMARY KEY, UNIQUE KEY, FOREIGN KEY, built-in data types, and DEFAULT CONSTRAINT.
Supported features vary depending on the destination database type. Check the console for the current list.
Supported SQL operations
| Operation type | SQL statements |
|---|---|
| DML | INSERT, UPDATE, DELETE |
| DDL | CREATE TABLE, DROP TABLE, ALTER TABLE (RENAME TABLE, ADD COLUMN, ADD COLUMN DEFAULT, ALTER COLUMN TYPE, DROP COLUMN, ADD CONSTRAINT, ADD CONSTRAINT CHECK, ALTER COLUMN DROP DEFAULT), TRUNCATE TABLE (source cluster must run PostgreSQL 11 or later), CREATE INDEX ON TABLE |
- DDL synchronization is only available for tasks created after October 1, 2020.
- For tasks created before May 12, 2023, create triggers and functions in the source database to capture DDL information before configuring the task. For details, see Use triggers and functions to implement incremental DDL migration for PostgreSQL.
- If the source database account has privileged permissions, the synchronization task supports the DDL operations listed above.
- Bit-type data is not supported during incremental data synchronization.
The following DDL statements are not synchronized:
- Statements containing `CASCADE` or `RESTRICT`
- Statements executed in sessions where `SET session_replication_role = replica` is set
- Statements executed by calling functions
- DDL statements in a transaction that also contains DML statements
- DDL statements for objects not selected for synchronization
Limitations
During schema synchronization, DTS synchronizes foreign keys from the source database to the destination database.
During full data synchronization and incremental data synchronization, DTS temporarily disables constraint checks and foreign key cascade operations at the session level. Data inconsistency may occur if cascade update or delete operations are performed on the source database while the task is running.
Objects and operations not supported
- A single synchronization task covers only one database. Configure a separate task for each additional database.
- The following objects cannot be synchronized:
  - TimescaleDB extension tables
  - Tables with cross-schema inheritance
  - Tables with unique indexes based on expressions
  - Schemas created by installing plugins (these schemas also do not appear in the console during task configuration)
- Tables to be synchronized must have a primary key or a non-null unique index.
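As a sketch, the following catalog query lists tables (in an assumed `public` schema) that have neither a primary key nor any unique index and would therefore block synchronization. It does not verify that unique-index columns are declared NOT NULL, so review the results manually:

```sql
-- Tables in schema 'public' with no primary key and no unique index.
-- Adjust the schema name to match your synchronization objects.
SELECT c.relname AS table_name
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind IN ('r', 'p')          -- ordinary and partitioned tables
  AND n.nspname = 'public'
  AND NOT EXISTS (
    SELECT 1
    FROM pg_index i
    WHERE i.indrelid = c.oid
      AND (i.indisprimary OR i.indisunique)
  );
```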
Restrictions during synchronization
Do not perform the following operations while the synchronization task is running:
- Do not write to the destination database from sources other than DTS. Doing so causes data inconsistency between the source and destination.
- Do not run DDL operations that change database or table schemas during schema synchronization or full data synchronization. If you do, the task fails. During full synchronization, DTS queries the source database and creates metadata locks, which can also block DDL operations on the source.
- Do not delete DTS temporary tables from the source database. DTS creates the following tables to capture DDL statements, table structure, and heartbeat information. Deleting them during synchronization causes the task to fail. They are automatically removed after the DTS instance is released: `public.dts_pg_class`, `public.dts_pg_attribute`, `public.dts_pg_type`, `public.dts_pg_enum`, `public.dts_postgres_heartbeat`, `public.dts_ddl_command`, `public.dts_args_session`, `public.aliyun_dts_instance`.
Additional configuration notes
- Logical Replication Slot Failover (required): The source PolarDB for PostgreSQL cluster must support and enable Logical Replication Slot Failover to prevent logical replication from being interrupted by a primary/secondary switchover.
  Important: If the source cluster does not support Logical Replication Slot Failover (for example, if the Database Engine is PostgreSQL 14), a high-availability (HA) switchover in the source database may cause the synchronization instance to fail and become unrecoverable.
- REPLICA IDENTITY FULL: Run `ALTER TABLE schema.table REPLICA IDENTITY FULL;` on all tables to be synchronized in the source database before writing data to them in the following scenarios:
  - When the instance runs for the first time
  - When you select Schema as the granularity and a new table is created, or an existing table is rebuilt using RENAME
  - When you modify synchronization objects
  Do not lock tables while running this command, to prevent deadlocks, and run it during off-peak hours. If you skip the related precheck items, DTS runs this command automatically during instance initialization.
- SERIAL type fields: If a table contains a SERIAL type field and you select Schema Synchronization as one of the synchronization types, also select Sequence or synchronize the entire schema. Otherwise, the synchronization instance may fail.
- Partitioned tables: Include both the parent table and all child partitions as synchronization objects. The parent table of a PostgreSQL partitioned table does not store data directly; all data is in the child partitions. Omitting child partitions causes data inconsistency.
- Replication slot: DTS creates a replication slot with the `dts_sync_` prefix in the source database to retain incremental logs for the last 15 minutes. DTS automatically clears the slot when the synchronization task fails or the instance is released. If you change the password of the source database account or remove the DTS IP address from the source database whitelist during synchronization, the replication slot cannot be cleared automatically. In that case, manually clear the replication slot in the source database to prevent it from accumulating and consuming disk space. If a failover occurs in the source database, log on to the secondary database to manually clear the slot.
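To clear a leftover slot manually, you can list slots by the `dts_sync_` prefix and drop them; the slot name in the second statement is a placeholder:

```sql
-- Inspect replication slots created by DTS.
SELECT slot_name, active, restart_lsn
FROM pg_replication_slots
WHERE slot_name LIKE 'dts_sync_%';

-- Drop a leftover slot only after the DTS task has failed or been released.
-- Replace 'dts_sync_example' with the actual slot name from the query above.
SELECT pg_drop_replication_slot('dts_sync_example');
```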

- WAL accumulation: Long-running transactions can cause write-ahead log (WAL) accumulation in the source database, which may exhaust disk space.
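A query such as the following (the 10-minute threshold is an example) helps spot long-running transactions in the source database before they cause WAL to pile up:

```sql
-- Sessions whose current transaction has been open for more than 10 minutes.
SELECT pid, usename, state, xact_start,
       now() - xact_start AS duration
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
  AND now() - xact_start > interval '10 minutes'
ORDER BY xact_start;
```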
- Large incremental changes: If a single incremental change exceeds 256 MB, the synchronization instance fails and cannot be recovered. In this case, reconfigure the synchronization instance.
- Sequences: DTS validates data content but not metadata such as sequences, so validate sequences manually. After you switch your workload to the destination instance, new sequences do not increment from the maximum value of the source sequence. Update the sequence value in the destination database before the switchover. For details, see Update the sequence value in the destination database.
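For example, for a hypothetical table `orders` whose `id` column is served by the sequence `orders_id_seq`, you could align the destination sequence with the table's current maximum before the switchover:

```sql
-- Run in the destination database; table and sequence names are illustrative.
SELECT setval('orders_id_seq',
              (SELECT COALESCE(MAX(id), 1) FROM orders));
```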
- Foreign keys and session_replication_role: For a full or incremental synchronization task, if the tables to be synchronized contain foreign keys, triggers, or event triggers, DTS temporarily sets `session_replication_role` to `replica` at the session level, provided the destination account has superuser or high-privilege permissions. If it does not, set `session_replication_role` to `replica` manually in the destination database before starting the task. During this period, data inconsistency may occur if cascade update or delete operations are performed on the source database. After the task is released, change `session_replication_role` back to `origin`.
- Initial full data synchronization: Full synchronization runs concurrent INSERT operations, which causes table fragmentation in the destination. As a result, the destination table space may be larger than the source after full synchronization completes. To reduce the load on both databases, run full synchronization during off-peak hours.
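The session-level change described in the foreign-key note above can be applied manually in the destination database as follows; note that `SET` affects only the current session:

```sql
-- In the destination database, before starting the task:
SET session_replication_role = replica;

-- After the task is released, restore the default:
SET session_replication_role = origin;
```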
- Task failure recovery: If a task fails, DTS support staff attempt to restore it within eight hours. Restoration may involve restarting the task or adjusting task parameters (not database parameters). For the list of parameters that may be adjusted, see Modify instance parameters.
Set up a synchronization task
Step 1: Open the task configuration page
Go to the data synchronization task list in your target region using one of the following methods, then click Create Task.
DTS console
- Log on to the DTS console.
- In the left navigation pane, click Data Synchronization.
- In the upper-left corner, select the region of the synchronization instance.
DMS console
The exact steps may vary depending on the DMS console mode and layout. For details, see Simple mode console and Customize the layout and style of the DMS console.
- Log on to the DMS console.
- In the top menu bar, choose .
- To the right of Data Synchronization Tasks, select the region of the synchronization instance.
Step 2: Configure source and destination databases
On the task configuration page, fill in the following settings.
General
| Parameter | Description |
|---|---|
| Task Name | DTS generates a name automatically. Specify a descriptive name for easy identification. The name does not need to be unique. |
Source Database
| Parameter | Description |
|---|---|
| Select Existing Connection | Select a registered database instance from the list to auto-fill the fields below. If you have not registered the instance, configure the fields manually. In the DMS console, this field is labeled Select a DMS database instance. |
| Database Type | Select PolarDB for PostgreSQL. |
| Connection Type | Select Cloud instance. |
| Instance Region | Select the region of the source cluster. |
| Cross Alibaba Cloud account | Select Not across accounts if using the same Alibaba Cloud account. |
| Instance ID | Select the source PolarDB for PostgreSQL cluster ID. |
| Database name | Enter the name of the source database. |
| Database Account | Enter the database account for the source cluster. |
| Database Password | Enter the password for the database account. |
Destination Database
| Parameter | Description |
|---|---|
| Select Existing Connection | Select a registered database instance from the list to auto-fill the fields below. If you have not registered the instance, configure the fields manually. In the DMS console, this field is labeled Select a DMS database instance. |
| Database Type | Select PolarDB for PostgreSQL. |
| Connection Type | Select Cloud instance. |
| Instance Region | Select the region of the destination cluster. |
| Instance ID | Select the destination PolarDB for PostgreSQL cluster ID. |
| Database name | Enter the name of the destination database. |
| Database Account | Enter a high-privilege database account for the destination cluster. For details on creating and authorizing an account, see Create a database account. |
| Database Password | Enter the password for the database account. |
After completing the configuration, click Test Connectivity and Proceed at the bottom of the page.
Make sure that the DTS IP address blocks are added to the security settings of both databases. DTS can do this automatically, or you can add them manually. For details, see Add the IP address whitelist of DTS servers.
Step 3: Configure synchronization objects
On the Configure Objects page, specify what to synchronize.
| Parameter | Description |
|---|---|
| Synchronization Types | Incremental Data Synchronization is selected by default. Also select Schema Synchronization and Full Data Synchronization. DTS synchronizes historical data first as the baseline for incremental synchronization. |
| Synchronization Topology | Select One-way Synchronization. |
| Processing Mode of Conflicting Tables | Precheck and Report Errors (recommended): Checks for tables in the destination with the same names as source tables. If any exist, the precheck reports an error and the task does not start. If you cannot rename the conflicting tables, map object names instead. See Database Table Column Name Mapping. <br><br> Ignore Errors and Proceed: Skips the check for same-name tables. Warning: This option may cause data inconsistency. |
| Capitalization of Object Names in Destination Instance | Configures the case-sensitivity policy for object names. The DTS default policy is selected by default. For details, see Case policy for destination object names. |
| Source Objects | In the Source Objects box, select the objects to synchronize and move them to the Selected Objects box. |
| Selected Objects | To rename a single object in the destination, right-click it in the Selected Objects box. See Map a single object name. To rename multiple objects in bulk, click Batch Edit. See Map multiple object names in bulk. To select SQL operations to synchronize at the database or table level, right-click the object and select operations in the dialog box. To filter rows using a WHERE condition, right-click the table and set the condition. See Set filter conditions. |
Click Next: Advanced Settings and configure the following optional parameters.
| Parameter | Description |
|---|---|
| Dedicated Cluster for Task Scheduling | DTS uses a shared cluster by default. For greater task stability, purchase a dedicated cluster. See What is a DTS dedicated cluster? |
| Retry Time for Failed Connections | If a connection to the source or destination fails, DTS retries immediately. Default: 720 minutes. Range: 10–1,440 minutes. Set the value to 30 minutes or more. If the connection is restored within the retry period, the task resumes automatically; otherwise, the task fails. Note: If multiple DTS instances share a source or destination, DTS uses the shortest retry duration configured across all instances. DTS charges for task runtime during retries. |
| Retry Time for Other Issues | If a non-connection issue occurs (such as a DDL or DML execution error), DTS retries immediately. Default: 10 minutes. Range: 1–1,440 minutes. Set the value to 10 minutes or more. This value must be less than Retry Time for Failed Connections. |
| Enable Throttling for Full Data Synchronization | Limit the full synchronization rate to reduce load on the destination. Set Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s). Available only when Full Data Synchronization is selected. You can also adjust the rate after the task starts. |
| Enable Throttling for Incremental Data Synchronization | Limit the incremental synchronization rate by setting RPS of Incremental Data Synchronization and Data synchronization speed for incremental synchronization (MB/s). |
| Environment Tag | (Optional) Select a tag to identify the instance. |
| Configure ETL | Choose whether to enable extract, transform, and load (ETL). See What is ETL? Select Yes to enable ETL and enter data processing statements. See Configure ETL in a data migration or data synchronization task. Select No to disable ETL. |
| Monitoring and Alerting | Choose whether to configure alerts for task failures or latency exceeding a threshold. Select Yes to set alert thresholds and contacts. See Configure monitoring and alerting during task configuration. |
Click Data Verification to optionally configure a data verification task. See Configure data verification.
Step 4: Run the precheck
Click Next: Save Task Settings and Precheck.
To preview the API parameters for this configuration, hover over the button and click Preview OpenAPI parameters before proceeding.
DTS runs a precheck before starting the task.
- If the precheck passes, proceed to purchase the instance.
- If the precheck fails, click View Details next to the failed item, fix the issue, and rerun the precheck.
- If the precheck generates warnings:
  - For non-ignorable warnings, click View Details, fix the issue, and rerun the precheck.
  - For ignorable warnings, click Confirm Alert Details > Ignore > OK, and then click Precheck Again. Ignoring warnings may lead to data inconsistency; proceed with caution.
Step 5: Purchase the instance
- When the Success Rate reaches 100%, click Next: Purchase Instance.
- On the Purchase page, select billing and instance settings.

  | Parameter | Description |
  |---|---|
  | Billing Method | Subscription: Pay upfront for a fixed duration. Cost-effective for long-term, continuous tasks. <br> Pay-as-you-go: Billed hourly for actual usage. Suitable for short-term or test tasks. |
  | Resource Group Settings | The resource group for the instance. Default: the default resource group. See What is Resource Management? |
  | Instance Class | Select a specification based on your performance requirements. Different classes affect synchronization throughput. See Data synchronization link specifications. |
  | Subscription Duration | (Subscription only) Select the duration: 1–9 months, or 1, 2, 3, or 5 years. |

- Read and select the Data Transmission Service (Pay-as-you-go) Service Terms checkbox.
- Click Buy and Start, then click OK in the confirmation dialog box.
Monitor the task on the data synchronization page.
What's next
- To update the sequence value in the destination database before switching over your workload, see Update the sequence value in the destination database.
- To verify data consistency after synchronization, see Configure data verification.