Use Data Transmission Service (DTS) to continuously synchronize data from an ApsaraDB RDS for PostgreSQL instance to a PolarDB for PostgreSQL cluster. This supports use cases such as migration preparation, read/write separation, and disaster recovery.
Prerequisites
Before you begin, ensure that you have:
- A destination PolarDB for PostgreSQL cluster with available storage larger than the occupied storage of the source ApsaraDB RDS for PostgreSQL instance. For more information, see Create a database cluster.
- A database created in the destination PolarDB for PostgreSQL cluster to receive data. For more information, see Database management.
For supported source and destination database versions, see Overview of data synchronization solutions.
Billing
| Synchronization type | Cost |
|---|---|
| Schema synchronization and full data synchronization | Free of charge |
| Incremental synchronization | Charged. For details, see Billing overview. |
Supported synchronization topologies
- One-way one-to-one synchronization
- One-way one-to-many synchronization
- One-way cascade synchronization
- One-way many-to-one synchronization
For topology details and usage notes, see Synchronization topologies.
Supported synchronization objects
| Object type | Supported objects |
|---|---|
| Schema and table | SCHEMA, TABLE (including PRIMARY KEY, UNIQUE KEY, FOREIGN KEY, built-in data types, and DEFAULT CONSTRAINT) |
| Other database objects | VIEW, PROCEDURE (PostgreSQL 11 or later), FUNCTION, RULE, SEQUENCE, EXTENSION, TRIGGER, AGGREGATE, INDEX, OPERATOR, DOMAIN |
SQL operations that can be synchronized
| Operation type | SQL statements |
|---|---|
| DML (Data Manipulation Language) | INSERT, UPDATE, DELETE |
| DDL (Data Definition Language) | See details below |
DDL synchronization details
DDL operations can be synchronized automatically only by data synchronization tasks created on or after May 12, 2023 (Singapore time). For tasks created before that date, you must manually create triggers and functions in the source database before configuring the task. For more information, see Use triggers and functions to implement incremental DDL migration for PostgreSQL databases.
DDL synchronization requires a privileged account on the source database and minor engine version 20210228 or later. For information about updating the minor engine version, see Update the minor engine version.
Supported DDL statements:
-
CREATE TABLE,DROP TABLE -
ALTER TABLE(RENAME TABLE,ADD COLUMN,ADD COLUMN DEFAULT,ALTER COLUMN TYPE,DROP COLUMN,ADD CONSTRAINT,ADD CONSTRAINT CHECK,ALTER COLUMN DROP DEFAULT) -
TRUNCATE TABLE(source PostgreSQL 11 or later) -
CREATE INDEX ON TABLE
DDL synchronization is not supported in the following cases:
- Additional DDL modifiers such as CASCADE or RESTRICT
- DDL from sessions running SET session_replication_role = replica
- DDL executed by calling a FUNCTION or through other indirect means
- DDL executed via the Server Programming Interface (SPI)
- Commits that contain both DML and DDL statements (the DDL is not synchronized)
- DDL for objects not included in the synchronization scope
- DDL for data synchronization tasks created before May 12, 2023 (Singapore time), unless you create triggers and functions in the source database to capture DDL information before you configure the task. For more information, see Use triggers and functions to implement incremental DDL migration for PostgreSQL databases.
- BIT type data during incremental data synchronization
Permissions required for database accounts
| Database | Required permissions | How to create and authorize |
|---|---|---|
| ApsaraDB RDS for PostgreSQL | A privileged account that owns the selected database. Exception: For RDS for PostgreSQL 9.4 with DML-only synchronization, the REPLICATION permission is sufficient. | Create an account and Create a database |
| PolarDB for PostgreSQL | Owner permission on the destination schema. You can use the database owner specified when creating the database. | Create a database account and Database management |
Limitations
Source database requirements
| Requirement | Details |
|---|---|
| Primary key or UNIQUE constraint | Tables must have a primary key or UNIQUE constraint with unique fields. Without one, duplicate data may appear in the destination. If the target table is not created by DTS (schema synchronization not selected), the table must have the same primary key or non-null UNIQUE constraint as the source table. |
| Database name | Cannot contain hyphens (-). For example, dts-testdata is invalid. |
| Table count per task | For table-level sync with column mapping, split tasks if a single task exceeds 5,000 tables. Alternatively, configure the task to synchronize the entire database. |
| WAL configuration | See WAL configuration requirements below. |
| DDL during sync | Do not run DDL operations that change the database or table schema during initial schema synchronization or initial full data synchronization. DTS queries the source database during initial full data synchronization, creating metadata locks that may block DDL operations. |
| Major version upgrade | Do not perform a major version upgrade on the source database while the sync instance is running. The instance fails and cannot be recovered — you must reconfigure it. |
| Long-running transactions | Long-running transactions prevent Write-Ahead Logging (WAL) logs before the transaction commits from being cleared. This can cause WAL log accumulation and insufficient disk space. |
| Single incremental data size | If a single piece of data exceeds 256 MB after an incremental change, the sync instance fails and cannot be recovered — you must reconfigure it. |
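To find tables that would violate the primary key or UNIQUE requirement before you configure the task, you can query the system catalogs on the source instance. This is a sketch that assumes the tables of interest live in the public schema; adjust the schema filter to your setup.

```sql
-- List ordinary tables in the public schema that have neither a primary key
-- nor a unique constraint, and therefore risk duplicate rows in the destination.
SELECT c.relname AS table_name
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'
  AND n.nspname = 'public'
  AND NOT EXISTS (
    SELECT 1 FROM pg_constraint con
    WHERE con.conrelid = c.oid
      AND con.contype IN ('p', 'u')
  );
```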
WAL configuration requirements
| Parameter | Requirement |
|---|---|
| wal_level | Set to logical to enable logical replication. |
| WAL log retention (incremental sync only) | Retain WAL logs for more than 24 hours. |
| WAL log retention (full + incremental sync) | Retain WAL logs for at least 7 days. You can reduce the retention period to more than 24 hours after full synchronization completes. |
If the task fails because DTS cannot obtain WAL logs, or if data inconsistency or loss occurs due to insufficient log retention, the issue is not covered by the DTS Service-Level Agreement (SLA).
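You can verify the WAL setting directly on the source instance. The ALTER SYSTEM statement below is a sketch for self-managed PostgreSQL only; on ApsaraDB RDS for PostgreSQL, change the parameter in the console instead. Changing wal_level takes effect only after a restart.

```sql
-- Confirm that logical replication is enabled; DTS requires 'logical'.
SHOW wal_level;

-- Self-managed PostgreSQL only (takes effect after a restart).
-- On ApsaraDB RDS, modify the parameter in the console instead.
ALTER SYSTEM SET wal_level = logical;
```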
Source-type-specific requirements
| Source type | Requirement |
|---|---|
| ApsaraDB RDS for PostgreSQL | Do not modify the connection address or zone of the source instance during synchronization. |
| Self-managed PostgreSQL | The values of max_wal_senders and max_replication_slots must both be greater than the sum of currently used replication slots and the number of DTS instances to be created using this database. |
| Google Cloud SQL for PostgreSQL | The Database Account must have cloudsqlsuperuser permissions. Select only objects that this account can manage, or grant the Owner permission for those objects to the account (for example, using GRANT <Owner of the object to be synced> TO <source database account used by the task>). An account with cloudsqlsuperuser permissions cannot manage data owned by another cloudsqlsuperuser account. |
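For a self-managed source, you can compare the configured limits against the replication slots already in use with a few catalog queries (a sketch; run it on the source instance):

```sql
-- Configured upper bounds.
SHOW max_wal_senders;
SHOW max_replication_slots;

-- Replication slots currently allocated. Both limits above must be greater
-- than this count plus the number of DTS instances you plan to create.
SELECT count(*) AS used_slots FROM pg_replication_slots;
```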
Other limitations
- One database per task: A single data synchronization task can synchronize only one database. Configure a separate task for each database.
- Unsupported table types: TimescaleDB extension tables and tables with cross-schema inheritance are not supported.
- SERIAL type fields: If a table contains SERIAL type fields, the source database automatically creates a Sequence. When Schema Synchronization is selected, also select Sequence or synchronize the entire schema to avoid task failure.
- Foreign keys, triggers, and event triggers: For full or incremental sync tasks involving tables with foreign keys, triggers, or event triggers, DTS sets session_replication_role to replica at the session level when the destination account is a privileged account. If the destination account lacks this permission, manually set session_replication_role to replica in the destination database. Cascade update or delete operations on the source database during this period may cause data inconsistency. After the DTS task is released, reset session_replication_role to origin.
- REPLICA IDENTITY FULL requirement: Run ALTER TABLE schema.table REPLICA IDENTITY FULL; on the tables to be synchronized before writing data to them in the following scenarios. Do not perform table lock operations while running this command. If you skip the related precheck item, DTS runs this command automatically during instance initialization.
  - When the instance runs for the first time.
  - When synchronizing at the schema level and a new table is created, or an existing synchronized table is rebuilt using the RENAME command.
  - When using the feature to modify synchronization objects.
  In the command, replace schema and table with the schema name and table name of the data to be synchronized. Perform this operation during off-peak hours.
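As an illustration, for a hypothetical table public.orders the command and a quick verification look like this ('f' in relreplident indicates FULL):

```sql
-- Record the full old row image in the WAL so DTS can match
-- UPDATE and DELETE rows even without a primary key.
ALTER TABLE public.orders REPLICA IDENTITY FULL;

-- Verify the setting: 'f' means FULL, 'd' means default (primary key).
SELECT relreplident FROM pg_class WHERE oid = 'public.orders'::regclass;
```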
- Temporary tables in source database: DTS creates the following temporary tables in the source database to capture DDL statements, schema information, and heartbeat data. Do not delete these tables during synchronization; they are automatically removed after the DTS instance is released: public.dts_pg_class, public.dts_pg_attribute, public.dts_pg_type, public.dts_pg_enum, public.dts_postgres_heartbeat, public.dts_ddl_command, public.dts_args_session, and public.aliyun_dts_instance.
- Heartbeat table: DTS adds a heartbeat table named dts_postgres_heartbeat to the source database to ensure synchronization latency accuracy.
- Replication slot: DTS creates a replication slot with the prefix dts_sync_ in the source database to replicate data. Using this replication slot, DTS can obtain incremental logs from the source database within the last 15 minutes. When synchronization fails or the instance is released, DTS attempts to clean up the slot automatically.
  Important: If you change the source database account password or remove the DTS IP address whitelist during synchronization, the replication slot cannot be cleaned up automatically. Manually clean up the replication slot to prevent disk space accumulation, which can make the source database unavailable. If a primary/secondary failover occurs, log on to the secondary database to perform the cleanup.
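If you need to clean up manually, you can locate and drop the DTS slot with standard replication-slot functions. The slot name in the second statement is a placeholder; use the name returned by the first query, and note that an active slot cannot be dropped.

```sql
-- Find replication slots created by DTS.
SELECT slot_name, active, restart_lsn
FROM pg_replication_slots
WHERE slot_name LIKE 'dts_sync_%';

-- Drop a leftover, inactive slot (replace the placeholder name).
SELECT pg_drop_replication_slot('dts_sync_placeholder');
```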

- Performance impact: Initial full data synchronization consumes read and write resources on both the source and destination databases. Run synchronization during off-peak hours to avoid increased database load.
- Tablespace growth: Initial full data synchronization runs concurrent INSERT operations, which causes fragmentation in the destination database tables. As a result, the tablespace in the destination instance is larger than in the source after initialization.
- Online DDL during table-level sync: If no data other than DTS writes to the destination database, you can use Data Management (DMS) to perform online DDL operations. For more information, see Change schemas without locking tables. Do not allow data from sources other than DTS to be written to the destination database at the same time; this causes data inconsistency.
- Sequence validation: DTS validates data content but not metadata such as Sequences, so validate Sequences manually. After you switch your business to the destination, new Sequences do not increment from the maximum value of the source Sequences. Update Sequence values in the destination before switching. For more information, see Update Sequence values in the destination database.
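A sequence can be aligned with setval before the switchover. The table, column, and sequence names below are hypothetical; substitute your own.

```sql
-- Advance the destination sequence past the current maximum id so that
-- newly inserted rows do not collide with synchronized rows.
SELECT setval('public.orders_id_seq',
              (SELECT coalesce(max(id), 1) FROM public.orders));
```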
- Instance failure recovery: If a data synchronization instance fails, the DTS helpdesk attempts recovery within 8 hours. During recovery, operations such as restarting the instance or adjusting DTS instance parameters may be performed. When parameters are adjusted, only the parameters of the DTS instance are modified; the parameters in the database are not. For details, see Modify instance parameters.
Foreign key behavior during synchronization
During schema synchronization, DTS synchronizes foreign keys from the source to the destination database. During full and incremental data synchronization, DTS temporarily disables constraint checks and cascade operations on foreign keys at the session level. Cascade update and delete operations on the source database during synchronization may cause data inconsistency.
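The session-level switch DTS uses can be reproduced manually, for example when the destination account is not privileged. A minimal sketch:

```sql
-- Disable foreign-key checks and ordinary triggers for this session only,
-- so replicated rows can be applied in any order.
SET session_replication_role = replica;

-- ... DTS applies full and incremental data here ...

-- Restore normal constraint and trigger behavior after the task is released.
SET session_replication_role = origin;
```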
Configure the synchronization task
Step 1: Go to the Data Synchronization page
Use one of the following methods to open the Data Synchronization page and select the region of the data synchronization instance.
DTS console
- Log on to the DTS console.
- In the left-side navigation pane, click Data Synchronization.
- In the upper-left corner, select the region where the synchronization task resides.
DMS console
The actual operations may vary based on the mode and layout of the DMS console. For more information, see Simple mode and Customize the layout and style of the DMS console.
- Log on to the DMS console.
- In the top navigation bar, move the pointer over Data + AI and choose .
- From the drop-down list to the right of Data Synchronization Tasks, select the region where the synchronization instance resides.
Step 2: Create the task
Click Create Task to open the task configuration page.
Step 3: Configure source and destination databases
Configure the following parameters for the source and destination databases.
| Category | Parameter | Description |
|---|---|---|
| N/A | Task Name | A descriptive name for the DTS task. DTS generates a name automatically. A unique name is not required. |
| Source Database | Select Existing Connection | Select a registered database instance from the drop-down list — DTS auto-fills the connection parameters. If the instance is not registered, configure the parameters below manually. In the DMS console, select from the Select a DMS database instance drop-down list. |
| Database Type | Select PostgreSQL. | |
| Connection Type | Select Alibaba Cloud Instance. | |
| Instance Region | Select the region of the source ApsaraDB RDS for PostgreSQL instance. | |
| Across Alibaba Cloud Accounts | Select Not Cross-account for synchronization within a single Alibaba Cloud account. | |
| Instance ID | Select the ID of the source ApsaraDB RDS for PostgreSQL instance. | |
| Database Name | Enter the name of the database containing the objects to be synchronized. | |
| Database Account | Enter the database account for the source instance. For required permissions, see Permissions required for database accounts. | |
| Database Password | Enter the password for the database account. | |
| Encryption | Select Non-encrypted or SSL-encrypted based on your requirements. For SSL encryption, upload CA Certificate, Client Certificate, and Private Key of Client Certificate as needed, and specify Private Key Password of Client Certificate. Important: If you set Encryption to SSL-encrypted for a self-managed PostgreSQL database, you must upload CA Certificate. If you want to use the client certificate, you must also upload Client Certificate and Private Key of Client Certificate. For information about configuring SSL encryption on an ApsaraDB RDS for PostgreSQL instance, see SSL encryption. | |
| Destination Database | Select Existing Connection | Select a registered database instance from the drop-down list — DTS auto-fills the connection parameters. If the instance is not registered, configure the parameters below manually. In the DMS console, select from the Select a DMS database instance drop-down list. |
| Database Type | Select PolarDB for PostgreSQL. | |
| Connection Type | Select Cloud Instance. | |
| Instance Region | Select the region of the destination PolarDB for PostgreSQL cluster. | |
| Instance ID | Select the ID of the destination PolarDB for PostgreSQL cluster. | |
| Database Name | Enter the name of the database in the destination cluster that will receive data. | |
| Database Account | Enter the database account for the destination cluster. For required permissions, see Permissions required for database accounts. | |
| Database Password | Enter the password for the database account. |
Step 4: Test connectivity and proceed
Click Test Connectivity and Proceed at the bottom of the page.
DTS server CIDR blocks must be added to the security settings of the source and destination databases. DTS can add these automatically, or you can add them manually. For more information, see Add DTS server IP addresses to a whitelist. If the source or destination database is a self-managed database with Connection Type not set to Alibaba Cloud Instance, click Test Connectivity in the CIDR Blocks of DTS Servers dialog box.
Step 5: Configure synchronization objects
- In the Configure Objects step, configure the following parameters.

  | Parameter | Description |
  |---|---|
  | Synchronization Types | Select Schema Synchronization, Full Data Synchronization, and Incremental Data Synchronization. By default, Incremental Data Synchronization is selected. After the precheck completes, DTS synchronizes historical data from the source to the destination as the basis for incremental synchronization. |
  | Synchronization Topology | Select One-way Synchronization. |
  | Processing Mode of Conflicting Tables | Precheck and Report Errors: checks for duplicate table names between source and destination. The precheck fails if duplicates exist. To resolve, use the object name mapping feature to rename tables. For more information, see Map object names. Ignore Errors and Proceed: skips the duplicate table name check. During full sync, existing records with matching primary or unique keys are retained. During incremental sync, matching records are overwritten. If schemas differ, some columns may fail to sync or the task may fail. Use with caution. |
  | Source Objects | Select objects from the Source Objects section and click the right arrow icon to add them to the Selected Objects section. You can select schemas or tables. If you select tables, DTS does not migrate other object types (views, triggers, stored procedures) to the destination. If a table contains SERIAL fields and Schema Synchronization is selected, also select Sequence or synchronize the entire schema. |
  | Selected Objects | To rename a single object in the destination, right-click it in Selected Objects. For more information, see Map the name of a single object. To rename multiple objects at once, click Batch Edit. For more information, see Map multiple object names at a time. To select specific SQL operations for an object, right-click it and select the desired operations. To filter data with a WHERE condition, right-click the table and set the condition. For more information, see Set filter conditions. |
- Click Next: Advanced Settings and configure the following parameters.

  | Parameter | Description |
  |---|---|
  | Dedicated Cluster for Task Scheduling | By default, DTS schedules the task to the shared cluster. Purchase a dedicated cluster to improve stability. For more information, see What is a DTS dedicated cluster. |
  | Retry Time for Failed Connections | How long DTS retries when the source or destination database is unreachable. Valid values: 10 to 1440 minutes. Default: 720 minutes. Set to more than 30 minutes. If DTS reconnects within this period, the task resumes; otherwise, the task fails. If multiple tasks share the same source or destination database, the shortest retry time takes precedence. DTS charges for the instance during retries. |
  | Retry Time for Other Issues | How long DTS retries after DDL or DML operation failures. Valid values: 1 to 1440 minutes. Default: 10 minutes. Set to more than 10 minutes. Must be smaller than Retry Time for Failed Connections. |
  | Enable Throttling for Full Data Synchronization | Limits resource consumption during full data synchronization. Configure Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s). Available only when Full Data Synchronization is selected. |
  | Enable Throttling for Incremental Data Synchronization | Limits resource consumption during incremental data synchronization. Configure RPS of Incremental Data Synchronization and Data synchronization speed for incremental synchronization (MB/s). |
  | Environment Tag | An optional tag to identify the instance for your business needs. |
  | Configure ETL | Whether to enable the extract, transform, and load (ETL) feature. Select Yes to enter data processing statements. For more information, see What is ETL? and Configure ETL in a data migration or data synchronization task. |
  | Monitoring and Alerting | Whether to configure alerts for the sync instance. Select Yes to set alert thresholds and notification contacts. For more information, see Configure monitoring and alerting when you create a DTS task. |
- Click Next Step: Data Verification to configure data verification. For details, see Configure a data verification task.
Step 6: Save settings and run the precheck
Click Next: Save Task Settings and Precheck at the bottom of the page.
To preview the API parameters used to configure this task, hover over Next: Save Task Settings and Precheck and click Preview OpenAPI parameters before saving.
DTS runs a precheck before starting the task. The task can only start after passing the precheck.
If the precheck fails, click View Details next to each failed item to analyze the cause, resolve the issue, and rerun the precheck.
If an alert is triggered for an item:
- If the alert cannot be ignored, click View Details, resolve the issue, and run the precheck again.
- If the alert can be ignored, click Confirm Alert Details, then click Ignore in the dialog box, click OK, and then click Precheck Again. Ignoring alerts may cause data inconsistency.
Step 7: Purchase the instance
- Wait until Success Rate reaches 100%, then click Next: Purchase Instance.
- On the purchase page, configure the following parameters.

  | Section | Parameter | Description |
  |---|---|---|
  | New Instance Class | Billing Method | Subscription: pay upfront; more cost-effective for long-term use. Pay-as-you-go: billed hourly; suitable for short-term use. Release the instance when no longer needed to reduce costs. |
  | New Instance Class | Resource Group Settings | The resource group for the data synchronization instance. Default: default resource group. For more information, see What is Resource Management? |
  | New Instance Class | Instance Class | DTS provides instance classes with different synchronization speeds. Select based on your requirements. For more information, see Instance classes of data synchronization instances. |
  | New Instance Class | Subscription Duration | Available when Subscription is selected. Options: 1 to 9 months, 1 year, 2 years, 3 years, or 5 years. |
- Read and select Data Transmission Service (Pay-as-you-go) Service Terms.
- Click Buy and Start, then click OK in the confirmation dialog box.
The task appears in the task list. You can monitor its progress there.