Data Transmission Service:Synchronize a PolarDB for PostgreSQL (Compatible with Oracle) cluster to Alibaba Cloud Message Queue for Apache Kafka

Last Updated: Feb 04, 2026

This topic describes how to use Data Transmission Service (DTS) to synchronize data from a PolarDB for PostgreSQL (Compatible with Oracle) cluster to Alibaba Cloud Message Queue for Apache Kafka.

Prerequisites

  • Set the wal_level parameter of the source PolarDB for PostgreSQL (Compatible with Oracle) cluster to logical. This setting adds the information required for logical decoding to the write-ahead log (WAL). For more information, see Configure cluster parameters. A quick verification query appears after this list.

  • Create a destination Alibaba Cloud Message Queue for Apache Kafka instance whose storage space is larger than the storage space used by the source PolarDB for PostgreSQL (Compatible with Oracle) cluster.

    Note

    For supported versions of the source and destination databases, see Overview of synchronization scenarios.

  • Create a topic in the destination Alibaba Cloud Message Queue for Apache Kafka instance to receive synchronized data. For more information, see Step 1: Create a topic.
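
To verify that the wal_level prerequisite is met, you can run the following statement on the source database. This is a minimal sketch; it assumes that the cluster accepts standard PostgreSQL commands and that you connect with a privileged account.

  -- Returns 'logical' when the wal_level parameter is set correctly.
  SHOW wal_level;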

Notes

Source database limits

  • Bandwidth requirements: The server where the source database resides must have sufficient outbound bandwidth. Otherwise, the data synchronization speed is affected.

  • If a table to be synchronized has no primary key or UNIQUE constraint, you must enable the Exactly-Once write feature when you configure the task. Otherwise, duplicate data may appear in the destination database. For more information, see Synchronize tables without a primary key or UNIQUE constraint.

  • If the objects to be synchronized are tables and you need to edit them (for example, to map table or column names), do not include more than 1,000 tables in a single task. Otherwise, a request error may be reported after you submit the task. We recommend that you split the tables across multiple tasks or configure a task that synchronizes the entire database.

  • You must enable WAL. For incremental synchronization tasks, DTS requires that WAL logs of the source database be retained for more than 24 hours. For tasks that include both full and incremental synchronization, DTS requires that WAL logs be retained for at least 7 days. You can set the log retention period to more than 24 hours after full synchronization is complete. Otherwise, the DTS task may fail because it cannot obtain the WAL logs. In extreme cases, data inconsistency or loss may occur. Issues caused by a WAL log retention period that is shorter than the DTS requirement are not covered by the Service-Level Agreement (SLA).

  • If the source database has long-running transactions, the write-ahead log (WAL) that is generated before the long-running transactions are committed may accumulate during an incremental synchronization task. This can cause the disk space of the source database to become insufficient. A query that helps you spot such transactions appears after this list.

  • Limits on operations in the source database:

    • During schema synchronization and full data synchronization, do not perform DDL operations that change the database or table structure. Otherwise, the data synchronization task fails.

    • If you perform only full data synchronization, do not write new data to the source instance. Otherwise, data inconsistency between the source and destination databases occurs. To maintain real-time data consistency, we recommend that you select schema synchronization, full data synchronization, and incremental data synchronization.

    • To ensure that the synchronization task runs as expected and to prevent logical replication from being interrupted by a primary/secondary switchover, the PolarDB for PostgreSQL (Compatible with Oracle) cluster must support and enable Logical Replication Slot Failover.

      Note

      If the PolarDB for PostgreSQL (Compatible with Oracle) cluster does not support Logical Replication Slot Failover (for example, if the Database Engine of the cluster is Oracle syntax compatible 2.0), a high-availability (HA) switchover in the cluster may cause the synchronization instance to fail and become unrecoverable.

    • Due to the limits of logical replication in the source database, if a single piece of data to be synchronized exceeds 256 MB after an incremental change, the synchronization instance may fail and cannot be recovered. In this case, you must reconfigure the synchronization instance.
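
One way to spot long-running transactions on the source database is the following query. It is a minimal sketch based on the standard pg_stat_activity view; adjust the 10-minute threshold to your own tolerance.

  -- List sessions whose current transaction has been open for more
  -- than 10 minutes, oldest first.
  SELECT pid, usename, xact_start, state, query
  FROM pg_stat_activity
  WHERE xact_start IS NOT NULL
    AND now() - xact_start > interval '10 minutes'
  ORDER BY xact_start;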

Other limits

  • A single data synchronization task can synchronize only one database. To synchronize multiple databases, configure a data synchronization task for each database.

  • DTS does not support synchronizing TimescaleDB extension tables, tables with cross-schema inheritance, or tables with unique indexes based on expressions.

  • Schemas created by installing plugins cannot be synchronized. You cannot obtain information about these schemas in the console when you configure the task.

  • DTS does not support synchronizing INDEX, PARTITION, VIEW, PROCEDURE, FUNCTION, TRIGGER, or FOREIGN KEY (FK) objects.

  • During data synchronization, DTS creates a replication slot with the dts_sync_ prefix in the source database to replicate data. This replication slot allows DTS to obtain incremental logs from the source database within the last 15 minutes. When the data synchronization fails or the synchronization instance is released, DTS attempts to automatically clear the replication slot.

    Note
    • If you change the password of the source database account used by the task or delete the DTS IP address from the whitelist of the source database during data synchronization, the replication slot cannot be automatically cleared. In this case, you must manually clear the replication slot in the source database. This prevents the slot from continuously accumulating and consuming disk space, which can make the source database unavailable. Sample queries for locating and dropping leftover slots appear after this list.

    • If a failover occurs in the source database, you must log on to the secondary database to manually clear the slot.


  • If the destination Kafka instance is scaled out or scaled in during data synchronization, you must restart the synchronization instance.

  • In the following three scenarios, you must run the ALTER TABLE schema.table REPLICA IDENTITY FULL; command on the tables to be synchronized in the source database before you write data to them. This ensures data consistency. Do not lock the tables when you run this command; otherwise, a deadlock may occur. If you skip the related precheck items, DTS automatically runs this command during the initialization of the instance. A query to verify the replica identity setting of a table appears after this list.

    • When the instance runs for the first time.

    • When you select Schema as the granularity for object selection, and a new table is created in the schema or a table to be synchronized is rebuilt using the RENAME command.

    • When you use the feature to modify synchronization objects.

    Note
    • In the command, replace schema and table with the actual schema name and table name.

    • We recommend that you perform this operation during off-peak hours.

  • DTS creates the following temporary tables in the source database to obtain DDL statements for incremental data, the structure of incremental tables, and heartbeat information. Do not delete these temporary tables during synchronization. Otherwise, the DTS task becomes abnormal. The temporary tables are automatically deleted after the DTS instance is released.

    public.dts_pg_class, public.dts_pg_attribute, public.dts_pg_type, public.dts_pg_enum, public.dts_postgres_heartbeat, public.dts_ddl_command, public.dts_args_session, and public.aliyun_dts_instance.

  • To ensure the accuracy of the incremental data synchronization latency, DTS adds a heartbeat table named dts_postgres_heartbeat to the source database.

  • Evaluate the performance of the source and destination databases before you synchronize data. We also recommend that you synchronize data during off-peak hours (for example, when the CPU load of both databases is below 30%). Otherwise, full data synchronization consumes read and write resources on both the source and destination databases, which may increase the database load.

  • DTS attempts to automatically recover failed tasks within seven days. Therefore, before you switch your business to the destination instance, you must end or release the task, or revoke the write permissions of the account that DTS uses to access the destination instance. This prevents the source data from overwriting the data in the destination instance after the task is automatically recovered.

  • If the task fails, DTS technical support will attempt to recover it within 8 hours. During the recovery process, operations such as restarting the task or adjusting its parameters may be performed.

    Note

    When parameters are adjusted, only DTS task parameters are modified. Database parameters remain unchanged. The parameters that may be modified include, but are not limited to, those described in Modify instance parameters.

  • When you synchronize partitioned tables, you must include both the parent table and its child partitions as synchronization objects. Otherwise, data inconsistency may occur for the partitioned table.

    Note

    The parent table of a PostgreSQL partitioned table does not directly store data. All data is stored in the child partitions. The synchronization task must include the parent table and all its child partitions. Otherwise, data from the child partitions may not be synchronized, leading to data inconsistency between the source and destination.
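
Several of the notes above can be checked or acted on with standard PostgreSQL catalog queries. The following is a hedged sketch: dts_sync_xxx and public.orders are placeholders, and you should drop a replication slot only after the synchronization task has failed permanently or the instance has been released.

  -- List replication slots created by DTS (the prefix is dts_sync_).
  SELECT slot_name, plugin, active, restart_lsn
  FROM pg_replication_slots
  WHERE slot_name LIKE 'dts_sync_%';

  -- Manually drop a leftover slot that DTS could not clear.
  -- Replace dts_sync_xxx with a slot_name returned by the query above.
  SELECT pg_drop_replication_slot('dts_sync_xxx');

  -- Check the replica identity of a table to be synchronized:
  -- 'f' means REPLICA IDENTITY FULL, 'd' means the default (primary key).
  SELECT relname, relreplident
  FROM pg_class
  WHERE oid = 'public.orders'::regclass;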

Billing

  • Schema synchronization and full data synchronization: free of charge.

  • Incremental data synchronization: charged. For more information, see Billing overview.

SQL operations supported for incremental synchronization

DML

INSERT, UPDATE, DELETE

DDL

  • CREATE TABLE, ALTER TABLE, DROP TABLE, RENAME TABLE, TRUNCATE TABLE

  • CREATE VIEW, ALTER VIEW, DROP VIEW

  • CREATE PROCEDURE, ALTER PROCEDURE, DROP PROCEDURE

  • CREATE FUNCTION, DROP FUNCTION

  • CREATE INDEX, DROP INDEX

Note

DDL statements are not synchronized in the following scenarios (illustrative examples appear after this list):

  • Additional information such as CASCADE and RESTRICT in DDL statements is not synchronized.

  • If a transaction contains both DML and DDL statements, the DDL statements are not synchronized.

  • If only partial DDL statements of a transaction are included in the data synchronization task, the DDL statements are not synchronized.

  • If a DDL statement is executed from a session that is created by executing the SET session_replication_role = replica statement, the DDL statement is not synchronized.

  • DDL statements that are executed by calling methods such as FUNCTION are not synchronized.

  • If a DDL statement does not specify a schema, the DDL statement is not synchronized, even if the SHOW search_path statement returns the public schema.

  • If a DDL statement contains IF NOT EXISTS, the DDL statement is not synchronized.
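
As an illustration of the rules above, consider the following hypothetical statements. The table names are placeholders, not objects from this topic.

  -- Synchronized: the schema is specified and IF NOT EXISTS is absent.
  CREATE TABLE public.orders_archive (id integer PRIMARY KEY);

  -- Not synchronized: the statement contains IF NOT EXISTS.
  CREATE TABLE IF NOT EXISTS public.orders_archive (id integer PRIMARY KEY);

  -- Not synchronized: no schema is specified, even though search_path
  -- resolves the table to the public schema.
  CREATE TABLE orders_archive2 (id integer PRIMARY KEY);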

Database account permissions

The source PolarDB for PostgreSQL (Compatible with Oracle) cluster requires a privileged account. For information about how to create and authorize a database account, see Create and manage database accounts.

Procedure

  1. Go to the data synchronization task list page in the destination region. You can do this in one of two ways.

    DTS console

    1. Log on to the DTS console.

    2. In the navigation pane on the left, click Data Synchronization.

    3. In the upper-left corner of the page, select the region where the synchronization instance is located.

    DMS console

    Note

    The actual steps may vary depending on the mode and layout of the DMS console. For more information, see Simple mode console and Customize the layout and style of the DMS console.

    1. Log on to the DMS console.

    2. In the top menu bar, choose Data + AI > DTS (DTS) > Data Synchronization.

    3. To the right of Data Synchronization Tasks, select the region of the synchronization instance.

  2. Click Create Task to open the task configuration page.

  3. Configure the source and destination databases.

    Note

    For information about how to obtain parameters for the destination Alibaba Cloud Message Queue for Apache Kafka instance, see Configure parameters for a Message Queue for Apache Kafka instance.


    Task Name

    DTS automatically generates a task name. We recommend that you specify a descriptive name for easy identification. The name does not need to be unique.

    Source Database

    Select Existing Connection

    • Select a database instance that is registered with DTS from the drop-down list. The database information below is automatically populated.

      Note

      In the DMS console, this configuration item is Select a DMS database instance.

    • If you have not registered the database instance or do not need to use a registered instance, manually configure the database information below.

    Database Type

    Select PolarDB (Compatible with Oracle).

    Access Method

    Select Alibaba Cloud Instance.

    Instance Region

    Select the region where the source PolarDB for PostgreSQL (Compatible with Oracle) cluster resides.

    Replicate Data Across Alibaba Cloud Accounts

    For this example, select No, as the database instance belongs to the current Alibaba Cloud account.

    Instance ID

    Select the ID of the source PolarDB for PostgreSQL (Compatible with Oracle) cluster.

    Database Name

    Enter the name of the database in the source PolarDB for PostgreSQL (Compatible with Oracle) cluster that contains the objects to be synchronized.

    Database Account

    Enter the database account of the source PolarDB for PostgreSQL (Compatible with Oracle) cluster. For permission requirements, see Database account permissions.

    Database Password

    Enter the password for the specified database account.

    Destination Database

    Select Existing Connection

    • Select a database instance that is registered with DTS from the drop-down list. The database information below is automatically populated.

      Note

      In the DMS console, this configuration item is Select a DMS database instance.

    • If you have not registered the database instance or do not need to use a registered instance, manually configure the database information below.

    Database Type

    Select Kafka.

    Access Method

    Select Express Connect, VPN Gateway, or Smart Access Gateway.

    Note

    Here, the Alibaba Cloud Message Queue for Apache Kafka instance is configured as a self-managed Kafka database for data synchronization.

    Instance Region

    Select the region where the destination Alibaba Cloud Message Queue for Apache Kafka instance resides.

    Connected VPC

    Select the VPC ID of the destination Alibaba Cloud Message Queue for Apache Kafka instance.

    Domain Name or IP

    Enter any IP address from the Default Endpoint of the destination Alibaba Cloud Message Queue for Apache Kafka instance.

    Port Number

    Enter the service port of the destination Alibaba Cloud Message Queue for Apache Kafka instance. The default port is 9092.

    Database Account

    Database Password

    This example does not require these fields.

    Kafka Version

    Select the version that matches your Kafka instance.

    Encryption

    Based on your business and security requirements, select Non-encrypted or SCRAM-SHA-256.

    Topic

    Select the topic that receives data from the drop-down list.

    Use Kafka Schema Registry

    Kafka Schema Registry is a metadata service layer that provides a RESTful interface for storing and retrieving Avro schemas.

    • No: Do not use Kafka Schema Registry.

    • Yes: Use Kafka Schema Registry. Enter the URL or IP address where the Avro schema is registered in Kafka Schema Registry in the URL or IP Address of Schema Registry text box.

  4. After completing the configuration, click Test Connectivity and Proceed at the bottom of the page.

    Note
    • Ensure that you add the CIDR blocks of the DTS servers (either automatically or manually) to the security settings of both the source and destination databases to allow access. For more information, see Add the IP address whitelist of DTS servers.

    • If the source or destination is a self-managed database (i.e., the Access Method is not Alibaba Cloud Instance), you must also click Test Connectivity in the CIDR Blocks of DTS Servers dialog box.

  5. Configure the task objects.

    1. On the Configure Objects page, specify the objects to synchronize.

      Synchronization Types

      Incremental Data Synchronization is selected by default and cannot be cleared. You must also select Schema Synchronization and Full Data Synchronization. After the precheck, DTS initializes the destination cluster with the full data of the selected source objects, which serves as the baseline for subsequent incremental synchronization.

      Note

      When the Access Method of the destination Kafka instance is Alibaba Cloud Instance, Schema Synchronization is not supported.

      Processing Mode of Conflicting Tables

      • Precheck and Report Errors: Checks for tables with the same names in the destination database. If any tables with the same names are found, an error is reported during the precheck and the data synchronization task does not start. Otherwise, the precheck is successful.

        Note

        If you cannot delete or rename the table with the same name in the destination database, you can map it to a different name in the destination. For more information, see Database Table Column Name Mapping.

      • Ignore Errors and Proceed: Skips the check for tables with the same name in the destination database.

        Warning

        Selecting Ignore Errors and Proceed may cause data inconsistency and put your business at risk. For example:

        • If the table schemas are consistent and a record in the destination database has the same primary key or unique key value as a record in the source database:

          • During full data synchronization, DTS retains the destination record and skips the source record.

          • During incremental synchronization, DTS overwrites the destination record with the source record.

        • If the table schemas are inconsistent, data initialization may fail. This can result in only partial data synchronization or a complete synchronization failure. Use with caution.

      Data Format in Kafka

      Select a data storage format for synchronization to the Kafka instance as needed.

      Kafka Data Compression Format

      Select the compression format for Kafka messages as needed.

      • LZ4 (default): low compression ratio and high compression speed.

      • GZIP: high compression ratio and low compression speed.

        Note

        This format consumes more CPU resources.

      • Snappy: medium compression ratio and medium compression speed.

      Policy for Shipping Data to Kafka Partitions

      Select a strategy as needed.

      Message acknowledgement mechanism

      Select a message acknowledgment mechanism as needed.

      Topic That Stores DDL Information

      Select a topic from the drop-down list to store DDL information.

      Note

      If you do not select a topic, the DDL information is stored in the topic that receives data by default.

      Capitalization of Object Names in Destination Instance

      Configure the case-sensitivity policy for database, table, and column names in the destination instance. By default, the DTS default policy is selected. You can also choose to use the default policy of the source or destination database. For more information, see Case policy for destination object names.

      Source Objects

      In the Source Objects box, click the objects to synchronize, and then click the rightwards arrow icon to move them to the Selected Objects box.

      Note

      The selection granularity for synchronization objects is table.

      Selected Objects

      This example does not require additional configuration. You can use the mapping feature to set the topic name, number of partitions, and partition key for the source table in the destination Kafka instance. For more information, see Mapping information.

      Note
      • If you use the object name mapping feature, synchronization of other objects that depend on this object may fail.

      • To select SQL operations for incremental synchronization, right-click the object to be synchronized in Selected Objects and select the required SQL operations in the dialog box that appears.

    2. Click Next: Advanced Settings.

      Dedicated Cluster for Task Scheduling

      By default, DTS uses a shared cluster for tasks, so you do not need to make a selection. For greater task stability, you can purchase a dedicated cluster to run the DTS synchronization task. For more information, see What is a DTS dedicated cluster?.

      Retry Time for Failed Connections

      If the connection to the source or destination database fails after the synchronization task starts, DTS reports an error and immediately begins to retry the connection. The default retry duration is 720 minutes. You can customize the retry time to a value from 10 to 1,440 minutes. We recommend a duration of 30 minutes or more. If the connection is restored within this period, the task resumes automatically. Otherwise, the task fails.

      Note
      • If multiple DTS instances (e.g., Instance A and B) share a source or destination, DTS uses the shortest configured retry duration (e.g., 30 minutes for A, 60 for B, so 30 minutes is used) for all instances.

      • DTS charges for task runtime during connection retries. Set a custom duration based on your business needs, or release the DTS instance promptly after you release the source/destination instances.

      Retry Time for Other Issues

      If a non-connection issue (e.g., a DDL or DML execution error) occurs, DTS reports an error and immediately retries the operation. The default retry duration is 10 minutes. You can also customize the retry time to a value from 1 to 1,440 minutes. We recommend a duration of 10 minutes or more. If the related operations succeed within the set retry time, the synchronization task automatically resumes. Otherwise, the task fails.

      Important

      The value of Retry Time for Other Issues must be less than that of Retry Time for Failed Connections.

      Enable Throttling for Full Data Synchronization

      During full data synchronization, DTS consumes read and write resources from the source and destination databases, which can increase their load. To mitigate pressure on the destination database, you can limit the migration rate by setting Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s).

      Enable Throttling for Incremental Data Synchronization

      You can also limit the incremental synchronization rate to reduce pressure on the destination database by setting RPS of Incremental Data Synchronization and Data synchronization speed for incremental synchronization (MB/s).

      Environment Tag

      Select an environment label to identify the instance as needed. This example does not require selection.

      Configure ETL

      Choose whether to enable the extract, transform, and load (ETL) feature. For more information, see What is ETL?

      Monitoring and Alerting

      Choose whether to set up alerts. If the synchronization fails or the latency exceeds the specified threshold, DTS sends a notification to the alert contacts.

  6. Save the task and perform a precheck.

    • To view the parameters for configuring this instance via an API operation, hover over the Next: Save Task Settings and Precheck button and click Preview OpenAPI parameters in the tooltip.

    • If you have finished viewing the API parameters, click Next: Save Task Settings and Precheck at the bottom of the page.

    Note
    • Before a synchronization task starts, DTS performs a precheck. You can start the task only if the precheck passes.

    • If the precheck fails, click View Details next to the failed item, fix the issue as prompted, and then rerun the precheck.

    • If the precheck generates warnings:

      • For non-ignorable warnings, click View Details next to the item, fix the issue as prompted, and run the precheck again.

      • For ignorable warnings, you can bypass them by clicking Confirm Alert Details, then Ignore, and then OK. Finally, click Precheck Again to skip the warning and run the precheck again. Ignoring precheck warnings may lead to data inconsistencies and other business risks. Proceed with caution.

  7. Purchase the instance.

    1. When the Success Rate reaches 100%, click Next: Purchase Instance.

    2. On the Purchase page, select the billing method and link specifications for the data synchronization instance. For more information, see the following table.

      New Instance Class

      Billing Method

      • Subscription: You pay upfront for a specific duration. This is cost-effective for long-term, continuous tasks.

      • Pay-as-you-go: You are billed hourly for actual usage. This is ideal for short-term or test tasks, as you can release the instance at any time to save costs.

      Resource Group Settings

      The resource group to which the instance belongs. The default is default resource group. For more information, see What is Resource Management?.

      Instance Class

      DTS offers synchronization specifications at different performance levels that affect the synchronization rate. Select a specification based on your business requirements. For more information, see Data synchronization link specifications.

      Subscription Duration

      In subscription mode, select the duration and quantity of the instance. Monthly options range from 1 to 9 months. Yearly options include 1, 2, 3, or 5 years.

      Note

      This option appears only when the billing method is Subscription.

    3. Read and select the checkbox for Data Transmission Service (Pay-as-you-go) Service Terms.

    4. Click Buy and Start, and then click OK in the dialog box that appears.

      You can monitor the task progress on the data synchronization page.

Mapping information

  1. In the Selected Objects area, place the mouse pointer over the destination topic name (at the table level).

  2. Click Edit next to the destination topic name.

  3. In the Edit Table dialog box that appears, configure mapping information.

    Note
    • At the schema level, the dialog box is Edit Schema, which supports fewer configurable parameters. At the table level, the dialog box is Edit Table.

    • If the granularity of synchronization objects is not the entire schema, you cannot modify the Name of target Topic and Number of Partitions in the Edit Schema dialog box.

    Name of target Topic

    The destination topic name for synchronizing the source table. By default, it is the Topic selected in Destination Database during the Configurations for Source and Destination Databases step.

    Important
    • When the destination database is an Alibaba Cloud Message Queue for Apache Kafka instance, the topic name you enter must exist in the destination Kafka instance. Otherwise, data synchronization fails. When the destination database is a self-managed Kafka database and the synchronization instance includes schema and table tasks, DTS attempts to create the topic you enter in the destination database.

    • If you modify the Name of target Topic, data is written to the topic you enter.

    Filter Conditions

    For more information, see Set filter conditions. A preview query appears after this procedure.

    Number of Partitions

    The number of partitions for writing data to the destination topic.

    Partition Key

    When Policy for Shipping Data to Kafka Partitions is set to Ship Data to Separate Partitions Based on Hash Values of Primary Keys, configure this parameter to specify one or more columns as the partition key for hash calculation. DTS delivers rows to partitions in the destination topic based on the calculated hash values. If you do not configure this parameter, the delivery policy does not take effect during incremental writes.

    Note

    You can select Partition Key only in the Edit Table dialog box.

  4. Click OK.
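
If you configure Filter Conditions, only the rows that match the condition are synchronized. The following is a hedged sketch of how to preview the affected rows on the source database; the table name public.orders and the column gmt_create are hypothetical, and the exact filter syntax is described in Set filter conditions.

  -- Preview the rows that a filter condition such as
  -- gmt_create > '2024-01-01' would synchronize.
  SELECT * FROM public.orders WHERE gmt_create > '2024-01-01';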

FAQ

  • Can I modify the Kafka Data Compression Format?

    Yes. You can change it by using the Modify Sync Objects feature.

  • Can I modify the Message acknowledgement mechanism?

    Yes. You can change it by using the Modify Sync Objects feature.