
Data Transmission Service: One-way synchronization between PolarDB for PostgreSQL (Compatible with Oracle) clusters

Last Updated: Feb 04, 2026

This topic describes how to use Data Transmission Service (DTS) to configure one-way synchronization between PolarDB for PostgreSQL (Compatible with Oracle) clusters.

Prerequisites

  • You have created a destination PolarDB for PostgreSQL (Compatible with Oracle) cluster with a storage capacity larger than the used storage space of the source PolarDB for PostgreSQL (Compatible with Oracle) cluster. For more information, see Create a PolarDB for PostgreSQL (Compatible with Oracle) cluster.

  • You have created a database in the destination PolarDB for PostgreSQL (Compatible with Oracle) cluster to receive data. For more information, see Database management.

  • In the source PolarDB for PostgreSQL (Compatible with Oracle) cluster, you have set the wal_level parameter to logical. This setting adds the information required for logical decoding to the write-ahead log (WAL). For more information, see Configure cluster parameters.
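As a quick sanity check (a minimal sketch, not part of the official procedure), you can confirm the setting on the source cluster after the parameter change takes effect:

```sql
-- Run on the source cluster; the result should be "logical".
SHOW wal_level;
```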

Notes

Note
  • During schema synchronization, DTS synchronizes foreign keys from the source database to the destination database.

  • During full data synchronization and incremental data synchronization, DTS temporarily disables constraint checks and foreign key cascade operations at the session level. Data inconsistency may occur if cascade update or delete operations are performed on the source database while the task is running.


Source database limits

  • Bandwidth requirements: The server where the source database resides must have sufficient outbound bandwidth. Otherwise, the data synchronization speed is affected.

  • If a table to be synchronized has no primary key or UNIQUE constraint, you must enable the Exactly-Once write feature when you configure the task. Otherwise, duplicate data may appear in the destination database. For more information, see Synchronize tables without a primary key or UNIQUE constraint.

  • If the synchronization objects are tables and you need to edit them (for example, map table or column names), and the number of tables to be synchronized in a single task exceeds 1,000, we recommend that you split the tables into multiple tasks or configure a task to synchronize the entire database. Otherwise, a request error may be reported after you submit the task.

  • You must enable WAL. For incremental synchronization tasks, DTS requires that WAL logs of the source database be retained for more than 24 hours. For tasks that include both full and incremental synchronization, DTS requires that WAL logs be retained for at least 7 days. You can set the log retention period to more than 24 hours after full synchronization is complete. Otherwise, the DTS task may fail because it cannot obtain the WAL logs. In extreme cases, data inconsistency or loss may occur. Issues caused by a WAL log retention period that is shorter than the DTS requirement are not covered by the Service-Level Agreement (SLA).

  • If the source database has long-running transactions, the write-ahead log (WAL) that is generated before the long-running transactions are committed may accumulate during an incremental synchronization task. This can cause the disk space of the source database to become insufficient.

  • Limits on operations in the source database:

    • During schema synchronization and full data synchronization, do not perform DDL operations that change the database or table structure. Otherwise, the data synchronization task fails.

    • If you perform only full data synchronization, do not write new data to the source instance. Otherwise, data inconsistency between the source and destination databases occurs. To maintain real-time data consistency, we recommend that you select schema synchronization, full data synchronization, and incremental data synchronization.

    • To ensure that the synchronization task runs as expected and to prevent logical replication from being interrupted by a primary/secondary switchover, the PolarDB for PostgreSQL (Compatible with Oracle) cluster must support and enable Logical Replication Slot Failover.

      Note

      If the PolarDB for PostgreSQL (Compatible with Oracle) cluster does not support Logical Replication Slot Failover (for example, if the Database Engine of the cluster is Oracle syntax compatible 2.0), a high-availability (HA) switchover in the cluster may cause the synchronization instance to fail and become unrecoverable.

    • Due to the limits of logical replication in the source database, if a single piece of data to be synchronized exceeds 256 MB after an incremental change, the synchronization instance may fail and cannot be recovered. You must reconfigure the synchronization instance.
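As a sketch of how to spot the long-running transactions mentioned above, a query such as the following can be run on the source database (the one-hour threshold is an arbitrary example):

```sql
-- List sessions whose open transaction started more than one hour ago.
SELECT pid, usename, xact_start, now() - xact_start AS xact_age, state
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
  AND now() - xact_start > interval '1 hour'
ORDER BY xact_start;
```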

Other limits

  • A single data synchronization task can synchronize only one database. To synchronize multiple databases, configure a data synchronization task for each database.

  • DTS does not support synchronizing TimescaleDB extension tables, tables with cross-schema inheritance, or tables with unique indexes based on expressions.

  • Schemas created by installing plugins cannot be synchronized. You cannot obtain information about these schemas in the console when you configure the task.

  • If a table to be synced contains a field of the SERIAL type, the source database automatically creates a Sequence for that field. Therefore, when you configure Source Objects, if you select Schema Synchronization for the Synchronization Types, we recommend that you also select Sequence or synchronize the entire schema. Otherwise, the synchronization instance may fail to run.

  • In the following three scenarios, you must run the ALTER TABLE schema.table REPLICA IDENTITY FULL; command on the tables to be synchronized in the source database before you write data to them. This ensures data consistency. To prevent deadlocks, do not lock the tables while this command is running. If you skip the related precheck items, DTS automatically runs this command during the initialization of the instance.

    • When the instance runs for the first time.

    • When you select Schema as the granularity for object selection, and a new table is created in the schema or a table to be synchronized is rebuilt using the RENAME command.

    • When you use the feature to modify synchronization objects.

    Note
    • In the command, replace schema and table with the actual schema name and table name.

    • We recommend that you perform this operation during off-peak hours.
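For example, for a hypothetical table orders in a schema named app, the command and a follow-up check look like this:

```sql
-- Placeholder names; substitute your own schema and table.
ALTER TABLE app.orders REPLICA IDENTITY FULL;

-- Verify the setting: 'f' means FULL.
SELECT relreplident FROM pg_class WHERE oid = 'app.orders'::regclass;
```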

  • DTS creates the following temporary tables in the source database to obtain DDL statements for incremental data, the structure of incremental tables, and heartbeat information. Do not delete these temporary tables during synchronization. Otherwise, the DTS task becomes abnormal. The temporary tables are automatically deleted after the DTS instance is released.

    public.dts_pg_class, public.dts_pg_attribute, public.dts_pg_type, public.dts_pg_enum, public.dts_postgres_heartbeat, public.dts_ddl_command, public.dts_args_session, and public.aliyun_dts_instance.

  • To ensure the accuracy of the incremental data synchronization latency, DTS adds a heartbeat table named dts_postgres_heartbeat to the source database.

  • During data synchronization, DTS creates a replication slot with the dts_sync_ prefix in the source database to replicate data. This replication slot allows DTS to obtain incremental logs from the source database within the last 15 minutes. When the data synchronization fails or the synchronization instance is released, DTS attempts to automatically clear the replication slot.

    Note
    • If you change the password of the source database account used by the task or delete the DTS IP address from the whitelist of the source database during data synchronization, the replication slot cannot be automatically cleared. In this case, you must manually clear the replication slot in the source database. This prevents the slot from continuously accumulating and consuming disk space, which can make the source database unavailable.

    • If a failover occurs in the source database, you must log on to the secondary database to manually clear the slot.

    For more information, see Query slot information.
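A manual cleanup can be sketched as follows. Run the statements on the source database (or, after a failover, on the former secondary), and drop only slots that are no longer active; the slot name shown is a placeholder:

```sql
-- Inspect DTS replication slots and whether they are still in use.
SELECT slot_name, active, restart_lsn
FROM pg_replication_slots
WHERE slot_name LIKE 'dts_sync_%';

-- Drop an inactive slot by name.
SELECT pg_drop_replication_slot('dts_sync_example_slot');
```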

  • After you switch your business to the destination instance, new sequences do not increment from the maximum value of the source sequence. You must update the sequence value in the destination database before the business switchover. For more information, see Update the sequence value in the destination database.
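A minimal sketch of the sequence update, assuming a placeholder sequence named app.order_id_seq:

```sql
-- On the source: read the current value of the sequence.
SELECT last_value FROM app.order_id_seq;

-- On the destination: advance the sequence past the value reported
-- by the source, for example to 100000.
SELECT setval('app.order_id_seq', 100000, true);
```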

  • Evaluate the performance of the source and destination databases before you synchronize data. We also recommend that you synchronize data during off-peak hours (for example, when the CPU load of both databases is below 30%). Otherwise, full data synchronization consumes read and write resources on both the source and destination databases, which may increase the database load.

  • Because full data synchronization runs concurrent INSERT operations, it causes table fragmentation in the destination database. As a result, the table space in the destination database is larger than that in the source instance after full synchronization is complete.

  • For columns of the FLOAT or DOUBLE data type, DTS reads the values using ROUND(COLUMN, PRECISION). If you do not explicitly define the precision, DTS uses a default precision of 38 for FLOAT and 308 for DOUBLE. Confirm that the synchronization precision meets your business requirements.

  • DTS attempts to automatically recover failed tasks within seven days. Therefore, before you switch your business to the destination instance, you must end or release the task, or use the REVOKE command to revoke the write permissions of the account that DTS uses to access the destination instance. This prevents the source data from overwriting the data in the destination instance after the task is recovered.
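Revoking the write permissions can be sketched as follows, assuming a placeholder account named dts_account and tables in the public schema:

```sql
-- Run on the destination database before the business switchover.
REVOKE INSERT, UPDATE, DELETE
ON ALL TABLES IN SCHEMA public
FROM dts_account;
```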

  • For a full or incremental synchronization task, if the tables to be synchronized in the source database contain foreign keys, triggers, or event triggers, DTS temporarily sets the `session_replication_role` parameter to `replica` at the session level if the destination database account is a privileged account or has superuser permissions. If the destination database account does not have these permissions, you must manually set the `session_replication_role` parameter to `replica` in the destination database. During this period (while `session_replication_role` is `replica`), if cascade update or delete operations occur in the source database, data inconsistency may occur. After the DTS task is released, you can change the `session_replication_role` parameter back to `origin`.
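If you need to set the parameter manually on the destination, the statements look like this. Note that a plain SET applies only to the current session; depending on your setup, you may need to apply it at the database level instead (shown with a placeholder database name):

```sql
-- Session level: disable trigger and foreign key checks.
SET session_replication_role = replica;

-- Database level (requires sufficient privileges); "mydb" is a placeholder.
ALTER DATABASE mydb SET session_replication_role = replica;

-- After the DTS task is released, restore the default behavior.
ALTER DATABASE mydb RESET session_replication_role;
```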

  • If the task fails, DTS technical support will attempt to recover it within 8 hours. During the recovery process, operations such as restarting the task or adjusting its parameters may be performed.

    Note

    When parameters are adjusted, only DTS task parameters are modified. Database parameters remain unchanged. The parameters that may be modified include but are not limited to those described in Modify instance parameters.

  • When you synchronize partitioned tables, you must include both the parent table and its child partitions as synchronization objects. Otherwise, data inconsistency may occur for the partitioned table.

    Note

    The parent table of a PostgreSQL partitioned table does not directly store data. All data is stored in the child partitions. The synchronization task must include the parent table and all its child partitions. Otherwise, data from the child partitions may not be synchronized, leading to data inconsistency between the source and destination.
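To confirm that every child partition is selected, the partitions of a parent table (placeholder name app.measurements) can be listed like this:

```sql
-- List all child partitions of the parent table.
SELECT inhrelid::regclass AS partition_name
FROM pg_inherits
WHERE inhparent = 'app.measurements'::regclass;
```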

Billing

  • Schema synchronization and full data synchronization: free of charge.

  • Incremental data synchronization: charged. For more information, see Billing overview.

Supported objects to be synchronized

  • SCHEMA, TABLE

    Note

    This includes PRIMARY KEY, UNIQUE KEY, FOREIGN KEY, DATATYPE (built-in data types), and DEFAULT CONSTRAINT.

  • VIEW, PROCEDURE (PostgreSQL 11 or later), FUNCTION, RULE, SEQUENCE, EXTENSION, TRIGGER, AGGREGATE, INDEX, OPERATOR, DOMAIN

Supported SQL operations

  • DML: INSERT, UPDATE, and DELETE

  • DDL:

    • CREATE TABLE and DROP TABLE

    • ALTER TABLE (This includes RENAME TABLE, ADD COLUMN, ADD COLUMN DEFAULT, ALTER COLUMN TYPE, DROP COLUMN, ADD CONSTRAINT, ADD CONSTRAINT CHECK, and ALTER COLUMN DROP DEFAULT.)

    • CREATE INDEX ON TABLE

Note

DDL statements are not synchronized in the following scenarios:

  • Additional information such as CASCADE and RESTRICT in DDL statements is not synchronized.

  • If a transaction contains both DML and DDL statements, the DDL statements are not synchronized.

  • If only partial DDL statements of a transaction are included in the data synchronization task, the DDL statements are not synchronized.

  • If a DDL statement is executed from a session that is created by executing the SET session_replication_role = replica statement, the DDL statement is not synchronized.

  • DDL statements that are executed by calling methods such as FUNCTION are not synchronized.

  • If no schema is defined in a DDL statement, the DDL statement is not synchronized. In this case, the public schema is specified in the SHOW search_path statement.

  • If a DDL statement contains IF NOT EXISTS, the DDL statement is not synchronized.

Database account permissions

  • Source PolarDB for PostgreSQL (Compatible with Oracle) cluster: a privileged account is required. For more information, see Create a database account.

  • Destination PolarDB for PostgreSQL (Compatible with Oracle) cluster: the database owner account is required. For more information, see Create a database account and Database management.

    Note

    The database owner is specified when you create the database.

Procedure

  1. Go to the data synchronization task list page in the destination region. You can do this in one of two ways.

    DTS console

    1. Log on to the DTS console.

    2. In the navigation pane on the left, click Data Synchronization.

    3. In the upper-left corner of the page, select the region where the synchronization instance is located.

    DMS console

    Note

    The actual steps may vary depending on the mode and layout of the DMS console. For more information, see Simple mode console and Customize the layout and style of the DMS console.

    1. Log on to the DMS console.

    2. In the top menu bar, choose Data + AI > DTS (DTS) > Data Synchronization.

    3. To the right of Data Synchronization Tasks, select the region of the synchronization instance.

  2. Click Create Task to open the task configuration page.

  3. Configure the source and destination databases.


    Task Name

    DTS automatically generates a task name. We recommend that you specify a descriptive name for easy identification. The name does not need to be unique.

    Source Database

    Select Existing Connection

    • Select the database instance that is registered with DTS from the drop-down list. The database information below is then automatically populated.

      Note

      In the DMS console, this configuration item is Select a DMS database instance.

    • If you have not registered the database instance or do not need to use a registered instance, manually configure the database information below.

    Database Type

    Select PolarDB (Compatible with Oracle).

    Access Method

    Select Alibaba Cloud Instance.

    Instance Region

    Select the region where the source PolarDB for PostgreSQL (Compatible with Oracle) cluster resides.

    Replicate Data Across Alibaba Cloud Accounts

    For this example, select No, as the database instance belongs to the current Alibaba Cloud account.

    Instance ID

    Select the ID of the source PolarDB for PostgreSQL (Compatible with Oracle) cluster.

    Database Name

    Enter the name of the database in the source PolarDB for PostgreSQL (Compatible with Oracle) cluster that contains the objects to be synchronized.

    Database Account

    Enter the database account for the source PolarDB for PostgreSQL (Compatible with Oracle) cluster. For permission requirements, see Database account permissions.

    Database Password

    Enter the password for the specified database account.

    Destination Database

    Select Existing Connection

    • Select the database instance that is registered with DTS from the drop-down list. The database information below is then automatically populated.

      Note

      In the DMS console, this configuration item is Select a DMS database instance.

    • If you have not registered the database instance or do not need to use a registered instance, manually configure the database information below.

    Database Type

    Select PolarDB (Compatible with Oracle).

    Access Method

    Select Alibaba Cloud Instance.

    Instance Region

    Select the region where the destination PolarDB for PostgreSQL (Compatible with Oracle) cluster resides.

    Instance ID

    Select the ID of the destination PolarDB for PostgreSQL (Compatible with Oracle) cluster.

    Database Name

    Enter the name of the database in the destination PolarDB for PostgreSQL (Compatible with Oracle) cluster that will receive data.

    Database Account

    Enter the database account for the destination PolarDB for PostgreSQL (Compatible with Oracle) cluster. For permission requirements, see Database account permissions.

    Database Password

    Enter the password for the specified database account.

  4. After completing the configuration, click Test Connectivity and Proceed at the bottom of the page.

    Note
    • Ensure that you add the CIDR blocks of the DTS servers (either automatically or manually) to the security settings of both the source and destination databases to allow access. For more information, see Add the IP address whitelist of DTS servers.

    • If the source or destination is a self-managed database (i.e., the Access Method is not Alibaba Cloud Instance), you must also click Test Connectivity in the CIDR Blocks of DTS Servers dialog box.

  5. Configure the task objects.

    1. On the Configure Objects page, specify the objects to synchronize.


      Synchronization Types

      Incremental Data Synchronization is always selected. You must also select Schema Synchronization and Full Data Synchronization. After the precheck, DTS initializes the destination cluster with the full data of the selected source objects, which serves as the baseline for subsequent incremental synchronization.

      Synchronization Topology

      Select One-way Synchronization.

      Processing Mode of Conflicting Tables

      • Precheck and Report Errors: Checks for tables with the same names in the destination database. If any tables with the same names are found, an error is reported during the precheck and the data synchronization task does not start. Otherwise, the precheck is successful.

        Note

        If you cannot delete or rename the table with the same name in the destination database, you can map it to a different name in the destination. For more information, see Database Table Column Name Mapping.

      • Ignore Errors and Proceed: Skips the check for tables with the same name in the destination database.

        Warning

        Selecting Ignore Errors and Proceed may cause data inconsistency and put your business at risk. For example:

        • If the table schemas are consistent and a record in the destination database has the same primary key or unique key value as a record in the source database:

          • During full data synchronization, DTS retains the destination record and skips the source record.

          • During incremental synchronization, DTS overwrites the destination record with the source record.

        • If the table schemas are inconsistent, data initialization may fail. This can result in only partial data synchronization or a complete synchronization failure. Use with caution.

      Capitalization of Object Names in Destination Instance

      Configure the case-sensitivity policy for database, table, and column names in the destination instance. By default, the DTS default policy is selected. You can also choose to use the default policy of the source or destination database. For more information, see Case policy for destination object names.

      Source Objects

      In the Source Objects box, select the objects, and then click the rightwards arrow icon to move them to the Selected Objects box.

      Note
      • You can select synchronization objects at the schema or table level. If you select tables, other objects such as views, triggers, and stored procedures will not be synchronized to the destination database.

      • If a table to be synchronized contains a column of the SERIAL data type and you select Schema Synchronization as one of the Synchronization Types, we recommend that you also select the related Sequence or synchronize the entire schema.

      Selected Objects

      • To rename a single object in the destination instance, right-click the object in the Selected Objects box. For more information, see Map a single object name.

      • To rename multiple objects in bulk, click Batch Edit in the upper-right corner of the Selected Objects box. For more information, see Map multiple object names in bulk.

      Note
      • To select the SQL operations to synchronize at the schema or table level, right-click the object in the Selected Objects box, and select the required SQL operations in the dialog box that appears. For information about supported operations, see Supported SQL operations.

      • To set WHERE clause filters, right-click the table in the Selected Objects box and set the filter conditions in the dialog box that appears. For more information, see Set filter conditions.

      • If you use the object name mapping feature, dependent objects may fail to synchronize.

    2. Click Next: Advanced Settings.


      Dedicated Cluster for Task Scheduling

      By default, DTS uses a shared cluster for tasks, so you do not need to make a selection. For greater task stability, you can purchase a dedicated cluster to run the DTS synchronization task. For more information, see What is a DTS dedicated cluster?.

      Retry Time for Failed Connections

      If the connection to the source or destination database fails after the synchronization task starts, DTS reports an error and immediately begins to retry the connection. The default retry duration is 720 minutes. You can customize the retry time to a value from 10 to 1,440 minutes. We recommend a duration of 30 minutes or more. If the connection is restored within this period, the task resumes automatically. Otherwise, the task fails.

      Note
      • If multiple DTS instances (e.g., Instance A and B) share a source or destination, DTS uses the shortest configured retry duration (e.g., 30 minutes for A, 60 for B, so 30 minutes is used) for all instances.

      • DTS charges for task runtime during connection retries. Set a custom duration based on your business needs, or release the DTS instance promptly after you release the source/destination instances.

      Retry Time for Other Issues

      If a non-connection issue (e.g., a DDL or DML execution error) occurs, DTS reports an error and immediately retries the operation. The default retry duration is 10 minutes. You can also customize the retry time to a value from 1 to 1,440 minutes. We recommend a duration of 10 minutes or more. If the related operations succeed within the set retry time, the synchronization task automatically resumes. Otherwise, the task fails.

      Important

      The value of Retry Time for Other Issues must be less than that of Retry Time for Failed Connections.

      Enable Throttling for Full Data Synchronization

      During full data synchronization, DTS consumes read and write resources from the source and destination databases, which can increase their load. To mitigate pressure on the destination database, you can limit the migration rate by setting Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s).

      Enable Throttling for Incremental Data Synchronization

      You can also limit the incremental synchronization rate to reduce pressure on the destination database by setting RPS of Incremental Data Synchronization and Data synchronization speed for incremental synchronization (MB/s).

      Environment Tag

      If needed, you can select an environment tag to identify the instance. This setting is optional.

      Configure ETL

      Choose whether to enable the extract, transform, and load (ETL) feature. For more information, see What is ETL?

      Monitoring and Alerting

      Choose whether to set up alerts. If the synchronization fails or the latency exceeds the specified threshold, DTS sends a notification to the alert contacts.

    3. Click Next: Data Validation to configure a data validation task.

      For more information about the data validation feature, see Configure data validation.

  6. Save the task and perform a precheck.

    • To view the parameters for configuring this instance via an API operation, hover over the Next: Save Task Settings and Precheck button and click Preview OpenAPI parameters in the tooltip.

    • If you have finished viewing the API parameters, click Next: Save Task Settings and Precheck at the bottom of the page.

    Note
    • Before a synchronization task starts, DTS performs a precheck. You can start the task only if the precheck passes.

    • If the precheck fails, click View Details next to the failed item, fix the issue as prompted, and then rerun the precheck.

    • If the precheck generates warnings:

      • For non-ignorable warnings, click View Details next to the item, fix the issue as prompted, and run the precheck again.

      • For ignorable warnings, you can bypass them by clicking Confirm Alert Details, then Ignore, and then OK. Finally, click Precheck Again to skip the warning and run the precheck again. Ignoring precheck warnings may lead to data inconsistencies and other business risks. Proceed with caution.

  7. Purchase the instance.

    1. When the Success Rate reaches 100%, click Next: Purchase Instance.

    2. On the Purchase page, select the billing method and link specifications for the data synchronization instance. For more information, see the following table.


      Billing Method

      • Subscription: You pay upfront for a specific duration. This is cost-effective for long-term, continuous tasks.

      • Pay-as-you-go: You are billed hourly for actual usage. This is ideal for short-term or test tasks, as you can release the instance at any time to save costs.

      Resource Group Settings

      The resource group to which the instance belongs. The default value is the default resource group. For more information, see What is Resource Management?

      Instance Class

      DTS offers synchronization specifications at different performance levels that affect the synchronization rate. Select a specification based on your business requirements. For more information, see Data synchronization link specifications.

      Subscription Duration

      In subscription mode, select the duration and quantity of the instance. Monthly options range from 1 to 9 months. Yearly options include 1, 2, 3, or 5 years.

      Note

      This option appears only when the billing method is Subscription.

    3. Read and select the checkbox for Data Transmission Service (Pay-as-you-go) Service Terms.

    4. Click Buy and Start, and then click OK in the dialog box that appears.

      You can monitor the task progress on the data synchronization page.