Data Transmission Service: Notes and limits for a PostgreSQL source database

Last Updated: Jan 31, 2026

If your source database for data synchronization is a PostgreSQL database, such as an ApsaraDB RDS for PostgreSQL instance or a self-managed PostgreSQL database, review the notes and limits in this topic before you configure a data synchronization task. This helps ensure that your data synchronization task runs as expected.

Overview of synchronization solutions for a PostgreSQL source database

Refer to the notes and limits that correspond to the synchronization solution that you use:

Note

By default, DTS disables foreign key constraints when synchronizing data to a destination database. Therefore, cascade update and cascade delete operations in the source database are not synchronized to the following destination databases:

  • ApsaraDB RDS for PostgreSQL

  • AnalyticDB for PostgreSQL

  • PolarDB for PostgreSQL (Compatible with Oracle)

  • ApsaraDB RDS for MySQL

  • PolarDB for PostgreSQL

Synchronization between PostgreSQL databases

  • One-way synchronization between ApsaraDB RDS for PostgreSQL instances

    Source database limits

    • The tables to be synchronized must have a primary key or a UNIQUE constraint, and the columns in the constraint must contain unique values. Otherwise, duplicate data may occur in the destination database.

      Note

      If the destination table is not created by DTS (that is, Schema Synchronization is not selected for Synchronization Types), make sure that the destination table has the same primary key or non-null UNIQUE constraint as the source table to be synchronized. Otherwise, duplicate data may occur in the destination database.

    • The name of the database to be synchronized cannot contain hyphens (-), for example, dts-testdata.

    • If you synchronize data at the table level and need to edit objects, such as by mapping table or column names, a single task supports a maximum of 5,000 tables. To synchronize more tables, split them into multiple tasks or configure a task to synchronize the entire database. Otherwise, a request error may occur after you submit the task.

    • DTS does not support synchronizing temporary tables, internal system triggers, or some functions (C-language functions and internal functions for PROCEDURE and FUNCTION) from the source database. DTS supports synchronizing some custom data types (COMPOSITE, ENUM, or RANGE) and the following constraints: primary key, foreign key, unique, and CHECK.

    • Write-ahead log (WAL):

      • The WAL feature must be enabled. Set the wal_level parameter to logical.

      • For an incremental synchronization task, DTS requires that the WAL logs of the source database are retained for more than 24 hours. For a task that includes both full and incremental synchronization, DTS requires that the WAL logs are retained for at least 7 days. After the full synchronization is complete, you can change the log retention period to more than 24 hours. If the retention period is shorter than required, the DTS task may fail because it cannot obtain the WAL logs. In extreme cases, data inconsistency or data loss may occur. Issues caused by a log retention period shorter than the required period are not covered by the DTS Service-Level Agreement (SLA).
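      For example, the following statements are a sketch of how to check the setting and, on a self-managed database, change it. On an ApsaraDB RDS for PostgreSQL instance, modify the parameter in the console instead.

      -- Check the current WAL level. The value must be logical.
      SHOW wal_level;

      -- On a self-managed database, change the parameter. A restart is required for it to take effect.
      ALTER SYSTEM SET wal_level = logical;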

    • Source database operation limits

      • To ensure that the synchronization task runs as expected and to prevent logical subscription interruptions caused by failover, you must enable Logical Replication Slot Failover for the ApsaraDB RDS for PostgreSQL instance. For more information, see Logical Replication Slot Failover.

      • Due to the limits of logical replication in the source database, if a single piece of data to be synchronized exceeds 256 MB after an incremental change, the synchronization instance may fail and cannot be recovered. You must reconfigure the synchronization instance.

      • During schema synchronization and full data synchronization, do not perform Data Definition Language (DDL) operations that change the schema of databases or tables. Otherwise, the data synchronization task fails.

        Note

        During the full synchronization phase, DTS queries the source database, which acquires metadata locks. This may block DDL operations on the source database.

    • If the source instance has long-running transactions and the task includes incremental synchronization, the write-ahead log (WAL) that occurs before the long-running transactions are committed cannot be cleared. This may cause the WAL logs to accumulate and exhaust the disk space of the source instance.
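      To check whether long-running transactions are holding back WAL cleanup, you can run a query similar to the following sketch on the source database:

      -- List open transactions, longest first.
      SELECT pid, usename, now() - xact_start AS xact_duration, state, query
      FROM pg_stat_activity
      WHERE xact_start IS NOT NULL
      ORDER BY xact_duration DESC;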

    • If you perform a major engine version upgrade on the source database while the synchronization instance is running, the instance fails and cannot be recovered. You must reconfigure the synchronization instance.

    Other limits

    • A single data synchronization task can synchronize only one database. To synchronize multiple databases, you must configure a separate task for each database.

    • DTS does not support synchronizing TimescaleDB extension tables, tables with cross-schema inheritance, or tables with unique indexes based on expressions.

    • Schemas created by installing extensions are not supported. You cannot find information about these schemas in the console when you configure the task.

    • If a table to be synced contains a field of the SERIAL type, the source database automatically creates a Sequence for that field. Therefore, when you configure Source Objects, if you select Schema Synchronization for the Synchronization Types, we recommend that you also select Sequence or synchronize the entire schema. Otherwise, the synchronization instance may fail to run.
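      For example, a table that contains a SERIAL column depends on an implicitly created sequence, which must also be synchronized. The table name in the following sketch is a hypothetical placeholder:

      -- A table with a SERIAL column.
      CREATE TABLE public.orders (order_id SERIAL PRIMARY KEY, amount numeric);

      -- PostgreSQL implicitly creates a sequence for the SERIAL column.
      SELECT pg_get_serial_sequence('public.orders', 'order_id');
      -- Returns public.orders_order_id_seq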

    • In the following three scenarios, you must run the ALTER TABLE schema.table REPLICA IDENTITY FULL; command on the tables to be synchronized in the source database before you write data to them. This ensures data consistency. Do not lock the tables while running this command to prevent deadlocks. If you skip the related precheck items, DTS automatically runs this command during the initialization of the instance.

      • When the instance runs for the first time.

      • When you select Schema as the granularity for object selection, and a new table is created in the schema or a table to be synchronized is rebuilt using the RENAME command.

      • When you use the feature to modify synchronization objects.

      Note
      • In the command, replace schema and table with the actual schema name and table name.

      • We recommend that you perform this operation during off-peak hours.
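      For example, for a hypothetical table named public.orders, the command and a query that verifies the setting look like the following sketch:

      -- Set the replica identity of the table to FULL.
      ALTER TABLE public.orders REPLICA IDENTITY FULL;

      -- Verify the setting: f indicates FULL, d indicates DEFAULT (primary key).
      SELECT relname, relreplident FROM pg_class WHERE relname = 'orders';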

    • DTS validates data content but does not support validation for metadata such as sequences. You must validate the metadata yourself.

    • After you switch your business to the destination instance, new sequences do not increment from the maximum value of the source sequence. You must update the sequence value in the destination database before the business switchover. For more information, see Update the sequence value in the destination database.
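      For example, the following sketch uses a hypothetical sequence named public.orders_id_seq and an example value of 10000:

      -- In the source database, query the current value of the sequence.
      SELECT last_value FROM public.orders_id_seq;

      -- In the destination database, set the sequence so that new values start after the source maximum.
      SELECT setval('public.orders_id_seq', 10001);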

    • DTS creates the following temporary tables in the source database to obtain DDL statements for incremental data, the structure of incremental tables, and heartbeat information. Do not delete these temporary tables during synchronization. Otherwise, the DTS task becomes abnormal. The temporary tables are automatically deleted after the DTS instance is released.

      public.dts_pg_class, public.dts_pg_attribute, public.dts_pg_type, public.dts_pg_enum, public.dts_postgres_heartbeat, public.dts_ddl_command, public.dts_args_session, and public.aliyun_dts_instance.

    • To ensure the accuracy of the displayed synchronization latency, DTS adds a heartbeat table named dts_postgres_heartbeat to the source database.

    • During data synchronization, DTS creates a replication slot with the dts_sync_ prefix in the source database to replicate data. This replication slot allows DTS to obtain incremental logs from the source database within the last 15 minutes. When the data synchronization fails or the synchronization instance is released, DTS attempts to automatically clear the replication slot.

      Note
      • If you change the password of the source database account used by the task or delete the DTS IP address from the whitelist of the source database during data synchronization, the replication slot cannot be automatically cleared. In this case, you must manually clear the replication slot in the source database. This prevents the slot from continuously accumulating and consuming disk space, which can make the source database unavailable.

      • If a failover occurs in the source database, you must log on to the secondary database to manually clear the slot.
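      To clear a residual replication slot manually, you can use statements similar to the following sketch. The slot name is a placeholder; use the actual name returned by the query.

      -- Find DTS replication slots and check whether they are still active.
      SELECT slot_name, plugin, active FROM pg_replication_slots WHERE slot_name LIKE 'dts_sync_%';

      -- Drop an inactive slot.
      SELECT pg_drop_replication_slot('dts_sync_xxx');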


    • Before you synchronize data, evaluate the performance of the source and destination databases. We recommend that you synchronize data during off-peak hours. Otherwise, initial full data synchronization consumes read and write resources on both the source and destination databases, which may increase the database load.

    • Initial full data synchronization runs concurrent INSERT operations. This causes table fragmentation in the destination database. As a result, the tablespace of the destination instance is larger than that of the source instance after the initial full data synchronization is complete.

    • For table-level data synchronization, if no data other than the data from DTS is written to the destination database, you can use Data Management (DMS) to perform online DDL operations. For more information, see Perform schema changes without locking tables.

    • During DTS synchronization, do not write data from other sources to the destination database. This can cause data inconsistency between the source and destination databases. For example, if you use DMS to perform online DDL operations while data from other sources is being written to the destination database, data loss may occur in the destination database.

    • For a full or incremental synchronization task, if the tables to be synchronized in the source database contain foreign keys, triggers, or event triggers, DTS temporarily sets the `session_replication_role` parameter to `replica` at the session level if the destination database account is a privileged account or has superuser permissions. If the destination database account does not have these permissions, you must manually set the `session_replication_role` parameter to `replica` in the destination database. During this period (while `session_replication_role` is `replica`), if cascade update or delete operations occur in the source database, data inconsistency may occur. After the DTS task is released, you can change the `session_replication_role` parameter back to `origin`.
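      The following statements are a sketch of how to change the parameter for a session and restore it later. How the setting is applied to the sessions that write to the destination database depends on your setup.

      -- Temporarily disable triggers and foreign key enforcement for the current session.
      SET session_replication_role = replica;

      -- After the DTS task is released, restore the default behavior.
      SET session_replication_role = origin;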

    • If the task fails, DTS technical support will attempt to recover it within 8 hours. During the recovery process, operations such as restarting the task or adjusting its parameters may be performed.

      Note

      When parameters are adjusted, only DTS task parameters are modified. Database parameters remain unchanged. The parameters that may be modified include but are not limited to those described in Modify instance parameters.

    • When you synchronize partitioned tables, you must include both the parent table and its child partitions as synchronization objects. Otherwise, data inconsistency may occur for the partitioned table.

      Note

      The parent table of a PostgreSQL partitioned table does not directly store data. All data is stored in the child partitions. The synchronization task must include the parent table and all its child partitions. Otherwise, data from the child partitions may not be synchronized, leading to data inconsistency between the source and destination.
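      To confirm that all child partitions are included, you can list them in the source database. The parent table name in the following sketch is a hypothetical placeholder:

      -- List all child partitions that are attached to the parent table.
      SELECT inhrelid::regclass AS child_partition
      FROM pg_inherits
      WHERE inhparent = 'public.orders_part'::regclass;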

    Special cases

    • When the source instance is an ApsaraDB RDS for PostgreSQL instance

      During synchronization, do not change the endpoint or zone of the ApsaraDB RDS for PostgreSQL instance. Otherwise, the synchronization fails.

    • When the source instance is a self-managed PostgreSQL database

      Make sure that the values of the `max_wal_senders` and `max_replication_slots` parameters are greater than the sum of the number of replication slots currently in use and the number of DTS instances to be created with this self-managed PostgreSQL database as the source.
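      The following statements are a sketch of how to check the current values and the number of slots in use. The value 20 is only an example, and a restart is required for the changes to take effect.

      -- Current parameter values.
      SHOW max_wal_senders;
      SHOW max_replication_slots;

      -- Number of replication slots that are currently in use.
      SELECT count(*) FROM pg_replication_slots;

      -- Increase the limits on the self-managed database if necessary.
      ALTER SYSTEM SET max_replication_slots = 20;
      ALTER SYSTEM SET max_wal_senders = 20;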

    • When the source instance is Google Cloud Platform Cloud SQL for PostgreSQL, you must provide an account that has the `cloudsqlsuperuser` permission for the Database Account of the source database. When you select objects to sync, select objects that this account has permission to manage, or grant this account the Owner permission for the objects to be synced (for example, run the GRANT <owner_of_the_object_to_be_synced> TO <source_database_account_for_the_task> command to allow this account to perform related operations as the owner of the objects to be synced).

      Note

      An account with `cloudsqlsuperuser` permissions cannot manage data whose owner is another account with `cloudsqlsuperuser` permissions.

  • Synchronization from a self-managed PostgreSQL database to an ApsaraDB RDS for PostgreSQL instance

    Source database limits

    • The tables to be synchronized must have a primary key or a UNIQUE constraint, and the columns in the constraint must contain unique values. Otherwise, duplicate data may occur in the destination database.

      Note

      If the destination table is not created by DTS (that is, Schema Synchronization is not selected for Synchronization Types), make sure that the destination table has the same primary key or non-null UNIQUE constraint as the source table to be synchronized. Otherwise, duplicate data may occur in the destination database.

    • The name of the database to be synchronized cannot contain hyphens (-), for example, dts-testdata.

    • If you synchronize data at the table level and need to edit objects, such as by mapping table or column names, a single task supports a maximum of 5,000 tables. To synchronize more tables, split them into multiple tasks or configure a task to synchronize the entire database. Otherwise, a request error may occur after you submit the task.

    • DTS does not support synchronizing temporary tables, internal system triggers, or some functions (C-language functions and internal functions for PROCEDURE and FUNCTION) from the source database. DTS supports synchronizing some custom data types (COMPOSITE, ENUM, or RANGE) and the following constraints: primary key, foreign key, unique, and CHECK.

    • Write-ahead log (WAL):

      • The WAL feature must be enabled. Set the wal_level parameter to logical.

      • For an incremental synchronization task, DTS requires that the WAL logs of the source database are retained for more than 24 hours. For a task that includes both full and incremental synchronization, DTS requires that the WAL logs are retained for at least 7 days. After the full synchronization is complete, you can change the log retention period to more than 24 hours. If the retention period is shorter than required, the DTS task may fail because it cannot obtain the WAL logs. In extreme cases, data inconsistency or data loss may occur. Issues caused by a log retention period shorter than the required period are not covered by the DTS Service-Level Agreement (SLA).

    • If a failover occurs in the self-managed PostgreSQL database, the synchronization fails.

    • Make sure that the values of the `max_wal_senders` and `max_replication_slots` parameters are greater than the sum of the number of replication slots currently in use and the number of DTS instances to be created with this self-managed PostgreSQL database as the source.

    • If the source instance has long-running transactions and the task includes incremental synchronization, the write-ahead log (WAL) that occurs before the long-running transactions are committed cannot be cleared. This may cause the WAL logs to accumulate and exhaust the disk space of the source instance.

    • When the source instance is Google Cloud Platform Cloud SQL for PostgreSQL, you must provide an account that has the `cloudsqlsuperuser` permission for the Database Account of the source database. When you select objects to sync, select objects that this account has permission to manage, or grant this account the Owner permission for the objects to be synced (for example, run the GRANT <owner_of_the_object_to_be_synced> TO <source_database_account_for_the_task> command to allow this account to perform related operations as the owner of the objects to be synced).

      Note

      An account with `cloudsqlsuperuser` permissions cannot manage data whose owner is another account with `cloudsqlsuperuser` permissions.

    • Due to the limits of logical replication in the source database, if a single piece of data to be synchronized exceeds 256 MB after an incremental change, the synchronization instance may fail and cannot be recovered. You must reconfigure the synchronization instance.

    • During schema synchronization and full data synchronization, do not perform Data Definition Language (DDL) operations that change the schema of databases or tables. Otherwise, the data synchronization task fails.

      Note

      During the full synchronization phase, DTS queries the source database, which acquires metadata locks. This may block DDL operations on the source database.

    • If you perform a major engine version upgrade on the source database while the synchronization instance is running, the instance fails and cannot be recovered. You must reconfigure the synchronization instance.

    Other limits

    • A single data synchronization task can synchronize only one database. To synchronize multiple databases, you must configure a separate task for each database.

    • DTS does not support synchronizing TimescaleDB extension tables, tables with cross-schema inheritance, or tables with unique indexes based on expressions.

    • Schemas created by installing extensions are not supported. You cannot find information about these schemas in the console when you configure the task.

    • If a table to be synced contains a field of the SERIAL type, the source database automatically creates a Sequence for that field. Therefore, when you configure Source Objects, if you select Schema Synchronization for the Synchronization Types, we recommend that you also select Sequence or synchronize the entire schema. Otherwise, the synchronization instance may fail to run.

    • In the following three scenarios, you must run the ALTER TABLE schema.table REPLICA IDENTITY FULL; command on the tables to be synchronized in the source database before you write data to them. This ensures data consistency. Do not lock the tables while running this command to prevent deadlocks. If you skip the related precheck items, DTS automatically runs this command during the initialization of the instance.

      • When the instance runs for the first time.

      • When you select Schema as the granularity for object selection, and a new table is created in the schema or a table to be synchronized is rebuilt using the RENAME command.

      • When you use the feature to modify synchronization objects.

      Note
      • In the command, replace schema and table with the actual schema name and table name.

      • We recommend that you perform this operation during off-peak hours.

    • DTS validates data content but does not support validation for metadata such as sequences. You must validate the metadata yourself.

    • After you switch your business to the destination instance, new sequences do not increment from the maximum value of the source sequence. You must update the sequence value in the destination database before the business switchover. For more information, see Update the sequence value in the destination database.

    • DTS creates the following temporary tables in the source database to obtain DDL statements for incremental data, the structure of incremental tables, and heartbeat information. Do not delete these temporary tables during synchronization. Otherwise, the DTS task becomes abnormal. The temporary tables are automatically deleted after the DTS instance is released.

      public.dts_pg_class, public.dts_pg_attribute, public.dts_pg_type, public.dts_pg_enum, public.dts_postgres_heartbeat, public.dts_ddl_command, public.dts_args_session, and public.aliyun_dts_instance.

    • To ensure the accuracy of the displayed synchronization latency, DTS adds a heartbeat table named dts_postgres_heartbeat to the source database.

    • During data synchronization, DTS creates a replication slot with the dts_sync_ prefix in the source database to replicate data. This replication slot allows DTS to obtain incremental logs from the source database within the last 15 minutes. When the data synchronization fails or the synchronization instance is released, DTS attempts to automatically clear the replication slot.

      Note
      • If you change the password of the source database account used by the task or delete the DTS IP address from the whitelist of the source database during data synchronization, the replication slot cannot be automatically cleared. In this case, you must manually clear the replication slot in the source database. This prevents the slot from continuously accumulating and consuming disk space, which can make the source database unavailable.

      • If a failover occurs in the source database, you must log on to the secondary database to manually clear the slot.


    • Before you synchronize data, evaluate the performance of the source and destination databases. We recommend that you synchronize data during off-peak hours. Otherwise, initial full data synchronization consumes read and write resources on both the source and destination databases, which may increase the database load.

    • Initial full data synchronization runs concurrent INSERT operations. This causes table fragmentation in the destination database. As a result, the tablespace of the destination instance is larger than that of the source instance after the initial full data synchronization is complete.

    • For table-level data synchronization, if no data other than the data from DTS is written to the destination database, you can use Data Management (DMS) to perform online DDL operations. For more information, see Perform schema changes without locking tables.

    • During DTS synchronization, do not write data from other sources to the destination database. This can cause data inconsistency between the source and destination databases. For example, if you use DMS to perform online DDL operations while data from other sources is being written to the destination database, data loss may occur in the destination database.

    • For a full or incremental synchronization task, if the tables to be synchronized in the source database contain foreign keys, triggers, or event triggers, DTS temporarily sets the `session_replication_role` parameter to `replica` at the session level if the destination database account is a privileged account or has superuser permissions. If the destination database account does not have these permissions, you must manually set the `session_replication_role` parameter to `replica` in the destination database. During this period (while `session_replication_role` is `replica`), if cascade update or delete operations occur in the source database, data inconsistency may occur. After the DTS task is released, you can change the `session_replication_role` parameter back to `origin`.

    • If the task fails, DTS technical support will attempt to recover it within 8 hours. During the recovery process, operations such as restarting the task or adjusting its parameters may be performed.

      Note

      When parameters are adjusted, only DTS task parameters are modified. Database parameters remain unchanged. The parameters that may be modified include but are not limited to those described in Modify instance parameters.

    • When you synchronize partitioned tables, you must include both the parent table and its child partitions as synchronization objects. Otherwise, data inconsistency may occur for the partitioned table.

      Note

      The parent table of a PostgreSQL partitioned table does not directly store data. All data is stored in the child partitions. The synchronization task must include the parent table and all its child partitions. Otherwise, data from the child partitions may not be synchronized, leading to data inconsistency between the source and destination.

  • Two-way synchronization between ApsaraDB RDS for PostgreSQL instances

    Source and destination database limits

    • If a table to be synchronized does not have a primary key or a UNIQUE constraint, you must enable the Exactly-Once write feature when you configure the task. Otherwise, duplicate data may occur in the destination database. For more information, see Synchronize tables without primary keys or UNIQUE constraints.

    • The name of the database to be synchronized cannot contain hyphens (-), for example, dts-testdata.

    • If you synchronize data at the table level and need to edit objects, such as by mapping table or column names, a single task supports a maximum of 5,000 tables. To synchronize more tables, split them into multiple tasks or configure a task to synchronize the entire database. Otherwise, a request error may occur after you submit the task.

    • DTS does not support synchronizing temporary tables, internal system triggers, or some functions (C-language functions and internal functions for PROCEDURE and FUNCTION) from the source database. DTS supports synchronizing some custom data types (COMPOSITE, ENUM, or RANGE) and the following constraints: primary key, foreign key, unique, and CHECK.

    • Write-ahead log (WAL):

      • The WAL feature must be enabled. Set the wal_level parameter to logical.

      • For an incremental synchronization task, DTS requires that the WAL logs of the source database are retained for more than 24 hours. For a task that includes both full and incremental synchronization, DTS requires that the WAL logs are retained for at least 7 days. After the full synchronization is complete, you can change the log retention period to more than 24 hours. If the retention period is shorter than required, the DTS task may fail because it cannot obtain the WAL logs. In extreme cases, data inconsistency or data loss may occur. Issues caused by a log retention period shorter than the required period are not covered by the DTS Service-Level Agreement (SLA).

    • Source database operation limits

      • To ensure that the synchronization task runs as expected and to prevent logical subscription interruptions caused by failover, you must enable Logical Replication Slot Failover for the ApsaraDB RDS for PostgreSQL instance. For more information, see Logical Replication Slot Failover.

      • Due to the limits of logical replication in the source database, if a single piece of data to be synchronized exceeds 256 MB after an incremental change, the synchronization instance may fail and cannot be recovered. You must reconfigure the synchronization instance.

      • During schema synchronization and full data synchronization, do not perform Data Definition Language (DDL) operations that change the schema of databases or tables. Otherwise, the data synchronization task fails.

        Note

        During the full synchronization phase, DTS queries the source database, which acquires metadata locks. This may block DDL operations on the source database.

    • If the source instance has long-running transactions and the task includes incremental synchronization, the write-ahead log (WAL) that occurs before the long-running transactions are committed cannot be cleared. This may cause the WAL logs to accumulate and exhaust the disk space of the source instance.

    • If you perform a major engine version upgrade on the source database while the synchronization instance is running, the instance fails and cannot be recovered. You must reconfigure the synchronization instance.

    Other limits

    • A single data synchronization task can synchronize only one database. To synchronize multiple databases, you must configure a separate task for each database.

    • DTS does not support synchronizing TimescaleDB extension tables, tables with cross-schema inheritance, or tables with unique indexes based on expressions.

    • Schemas created by installing extensions are not supported. You cannot find information about these schemas in the console when you configure the task.

    • If a table to be synced contains a field of the SERIAL type, the source database automatically creates a Sequence for that field. Therefore, when you configure Source Objects, if you select Schema Synchronization for the Synchronization Types, we recommend that you also select Sequence or synchronize the entire schema. Otherwise, the synchronization instance may fail to run.

    • In the following three scenarios, you must run the ALTER TABLE schema.table REPLICA IDENTITY FULL; command on the tables to be synchronized in the source database before you write data to them. This ensures data consistency. Do not lock the tables while running this command to prevent deadlocks. If you skip the related precheck items, DTS automatically runs this command during the initialization of the instance.

      • When the instance runs for the first time.

      • When you select Schema as the granularity for object selection, and a new table is created in the schema or a table to be synchronized is rebuilt using the RENAME command.

      • When you use the feature to modify synchronization objects.

      Note
      • In the command, replace schema and table with the actual schema name and table name.

      • We recommend that you perform this operation during off-peak hours.

    • DTS validates data content but does not support validation for metadata such as sequences. You must validate the metadata yourself.

    • After you switch your business to the destination instance, new sequences do not increment from the maximum value of the source sequence. Before the business switchover, you must query the maximum value of the corresponding sequence in the source database and then set it as the initial value for the corresponding sequence in the destination database. The following command queries the sequence values in the source database:

      do language plpgsql $$
      declare
        nsp name;
        rel name;
        val int8;
      begin
        -- Loop over all sequences (relkind = 'S') in the source database.
        for nsp,rel in select nspname,relname from pg_class t2, pg_namespace t3 where t2.relnamespace=t3.oid and t2.relkind='S'
        loop
          -- Read the current value of each sequence.
          execute format($_$select last_value from %I.%I$_$, nsp, rel) into val;
          -- Print the setval statement to be run in the destination database.
          raise notice '%',
          format($_$select setval('%I.%I'::regclass, %s);$_$, nsp, rel, val+1);
        end loop;
      end;
      $$;

      Note

      The SQL statements that are returned after you run the command contain all sequences of the source database. Run the SQL statements in the destination database as needed.

    • DTS creates the following temporary tables in the source database to obtain DDL statements for incremental data, the structure of incremental tables, and heartbeat information. Do not delete these temporary tables during synchronization. Otherwise, the DTS task becomes abnormal. The temporary tables are automatically deleted after the DTS instance is released.

      public.dts_pg_class, public.dts_pg_attribute, public.dts_pg_type, public.dts_pg_enum, public.dts_postgres_heartbeat, public.dts_ddl_command, public.dts_args_session, and public.aliyun_dts_instance.

    • To ensure the accuracy of the displayed synchronization latency, DTS adds a heartbeat table named dts_postgres_heartbeat to the source database.

    • During data synchronization, DTS creates a replication slot with the dts_sync_ prefix in the source database to replicate data. This replication slot allows DTS to obtain incremental logs from the source database within the last 15 minutes. When the data synchronization fails or the synchronization instance is released, DTS attempts to automatically clear the replication slot.

      Note
      • If you change the password of the source database account used by the task or delete the DTS IP address from the whitelist of the source database during data synchronization, the replication slot cannot be automatically cleared. In this case, you must manually clear the replication slot in the source database. This prevents the slot from continuously accumulating and consuming disk space, which can make the source database unavailable.

      • If a failover occurs in the source database, you must log on to the secondary database to manually clear the slot.


    • Before you synchronize data, evaluate the performance of the source and destination databases. We recommend that you synchronize data during off-peak hours. Otherwise, initial full data synchronization consumes read and write resources on both the source and destination databases, which may increase the database load.

    • Initial full data synchronization runs concurrent INSERT operations. This causes table fragmentation in the destination database. As a result, the tablespace of the destination instance is larger than that of the source instance after the initial full data synchronization is complete.

    • For table-level data synchronization, if no data other than the data from DTS is written to the destination database, you can use Data Management (DMS) to perform online DDL operations. For more information, see Perform schema changes without locking tables.

    • During DTS synchronization, do not write data from other sources to the destination database. This can cause data inconsistency between the source and destination databases. For example, if you use DMS to perform online DDL operations while data from other sources is being written to the destination database, data loss may occur in the destination database.

    • For a full or incremental synchronization task, if the tables to be synchronized in the source database contain foreign keys, triggers, or event triggers, DTS temporarily sets the `session_replication_role` parameter to `replica` at the session level if the destination database account is a privileged account or has superuser permissions. If the destination database account does not have these permissions, you must manually set the `session_replication_role` parameter to `replica` in the destination database. During this period (while `session_replication_role` is `replica`), if cascade update or delete operations occur in the source database, data inconsistency may occur. After the DTS task is released, you can change the `session_replication_role` parameter back to `origin`.

    • A two-way synchronization instance includes a forward task and a reverse task. When you configure or reset a two-way synchronization instance, if the destination object of one task is the source object of the other task:

      • Only one task can synchronize full and incremental data. The other task can only synchronize incremental data.

      • The source data of the current task can only be synchronized to the destination of the current task. The synchronized data will not be used as the source data for the other task.

    • If the task fails, DTS technical support will attempt to recover it within 8 hours. During the recovery process, operations such as restarting the task or adjusting its parameters may be performed.

      Note

      When parameters are adjusted, only DTS task parameters are modified. Database parameters remain unchanged. The parameters that may be modified include but are not limited to those described in Modify instance parameters.

    • When you synchronize partitioned tables, you must include both the parent table and its child partitions as synchronization objects. Otherwise, data inconsistency may occur for the partitioned table.

      Note

      The parent table of a PostgreSQL partitioned table does not directly store data. All data is stored in the child partitions. The synchronization task must include the parent table and all its child partitions. Otherwise, data from the child partitions may not be synchronized, leading to data inconsistency between the source and destination.

    Special cases

    • When the source instance is an ApsaraDB RDS for PostgreSQL instance

      During synchronization, do not change the endpoint or zone of the ApsaraDB RDS for PostgreSQL instance. Otherwise, the synchronization fails.

    • When the source instance is a self-managed PostgreSQL database

      Make sure that the values of the `max_wal_senders` and `max_replication_slots` parameters are greater than the sum of the number of replication slots currently in use and the number of DTS instances to be created with this self-managed PostgreSQL database as the source.

    • When the source instance is Google Cloud Platform Cloud SQL for PostgreSQL, you must provide an account that has the `cloudsqlsuperuser` permission for the Database Account of the source database. When you select objects to sync, select objects that this account has permission to manage, or grant this account the Owner permission for the objects to be synced (for example, run the GRANT <owner_of_the_object_to_be_synced> TO <source_database_account_for_the_task> command to allow this account to perform related operations as the owner of the objects to be synced).

      Note

      An account with `cloudsqlsuperuser` permissions cannot manage data whose owner is another account with `cloudsqlsuperuser` permissions.

Synchronization from a self-managed PostgreSQL database or an ApsaraDB RDS for PostgreSQL instance to an AnalyticDB for PostgreSQL instance

Source database limits

  • The tables to be synchronized must have a primary key or a UNIQUE constraint, and the columns in the constraint must contain unique values. Otherwise, duplicate data may occur in the destination database.

    Note

    If the destination table is not created by DTS (that is, Schema Synchronization is not selected for Synchronization Types), make sure that the destination table has the same primary key or non-null UNIQUE constraint as the source table to be synchronized. Otherwise, duplicate data may occur in the destination database.

  • The name of the database to be synchronized cannot contain hyphens (-), for example, dts-testdata.

  • If you synchronize data at the table level and need to edit objects, such as by mapping table or column names, a single task supports a maximum of 5,000 tables. To synchronize more tables, split them into multiple tasks or configure a task to synchronize the entire database. Otherwise, a request error may occur after you submit the task.

  • Write-ahead log (WAL):

    • The WAL feature must be enabled. Set the wal_level parameter to logical.

    • For an incremental synchronization task, DTS requires that the WAL logs of the source database are retained for more than 24 hours. For a task that includes both full and incremental synchronization, DTS requires that the WAL logs are retained for at least 7 days. After the full synchronization is complete, you can change the log retention period to more than 24 hours. If the retention period is shorter than required, the DTS task may fail because it cannot obtain the WAL logs. In extreme cases, data inconsistency or data loss may occur. Issues caused by a log retention period shorter than the required period are not covered by the DTS Service-Level Agreement (SLA).

  • If a failover occurs in the self-managed PostgreSQL database, the synchronization fails.

  • Source database operation limits

    • To ensure that the synchronization task runs as expected and to prevent logical subscription interruptions caused by failover, you must enable Logical Replication Slot Failover for the ApsaraDB RDS for PostgreSQL instance. For more information, see Logical Replication Slot Failover.

    • Due to the limits of logical replication in the source database, if a single piece of data to be synchronized exceeds 256 MB after an incremental change, the synchronization instance may fail and cannot be recovered. You must reconfigure the synchronization instance.

    • During schema synchronization and full data synchronization, do not perform Data Definition Language (DDL) operations that change the schema of databases or tables. Otherwise, the data synchronization task fails.

      Note

      During the full synchronization phase, DTS queries the source database, which acquires metadata locks. This may block DDL operations on the source database.

  • If the source instance has long-running transactions and the task includes incremental synchronization, the write-ahead log (WAL) that occurs before the long-running transactions are committed cannot be cleared. This may cause the WAL logs to accumulate and exhaust the disk space of the source instance.

  • If you perform a major engine version upgrade on the source database while the synchronization instance is running, the instance fails and cannot be recovered. You must reconfigure the synchronization instance.

Other limits

  • Append-optimized (AO) tables are not supported as destination tables.

  • If you use column mapping for partial table synchronization or if the source and destination table schemas are different, data in the columns that exist in the source but not in the destination will be lost.

  • Synchronization object requirements:

    • Only table-level synchronization is supported.

    • DTS does not support synchronizing TimescaleDB extension tables, tables with cross-schema inheritance, or tables with unique indexes based on expressions.

    • Schemas created by installing extensions are not supported. You cannot find information about these schemas in the console when you configure the task.

    • Synchronization of the following data types is not supported: BIT, VARBIT, GEOMETRY, UUID, TSQUERY, TSVECTOR, and TXID_SNAPSHOT.

    • Synchronization of the following objects is not supported: DATATYPE, SEQUENCE, INDEX, PROCEDURE, FUNCTION, VIEW, OPERATOR, DEFAULT_CONSTRAINT, UK, PK, RULE, DOMAIN, AGGREGATE, EXTENSION, FK, and TRIGGER.

  • If the table to be synchronized has a primary key, the primary key column of the destination table must be the same as that of the source table. If the table to be synchronized does not have a primary key, the primary key column of the destination table must be the same as the distribution key.

  • The unique key (including the primary key column) of the destination table must contain all columns of the distribution key.
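    For example, the following sketch shows a hypothetical destination table in AnalyticDB for PostgreSQL whose primary key matches the primary key of the source table and contains all columns of the distribution key:

    -- The primary key (order_id) also serves as the distribution key.
    CREATE TABLE public.orders (
      order_id bigint NOT NULL,
      buyer_id bigint,
      amount numeric(12,2),
      PRIMARY KEY (order_id)
    ) DISTRIBUTED BY (order_id);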

  • A single data synchronization task can synchronize only one database. To synchronize multiple databases, you must configure a separate task for each database.

  • In the following three scenarios, you must run the ALTER TABLE schema.table REPLICA IDENTITY FULL; command on the tables to be synchronized in the source database before you write data to them. This ensures data consistency. Do not lock the tables while running this command to prevent deadlocks. If you skip the related precheck items, DTS automatically runs this command during the initialization of the instance.

    • When the instance runs for the first time.

    • When you select Schema as the granularity for object selection, and a new table is created in the schema or a table to be synchronized is rebuilt using the RENAME command.

    • When you use the feature to modify synchronization objects.

    Note
    • In the command, replace schema and table with the actual schema name and table name.

    • We recommend that you perform this operation during off-peak hours.

  • DTS creates the following temporary tables in the source database to obtain DDL statements for incremental data, the structure of incremental tables, and heartbeat information. Do not delete these temporary tables during synchronization. Otherwise, the DTS task becomes abnormal. The temporary tables are automatically deleted after the DTS instance is released.

    public.dts_pg_class, public.dts_pg_attribute, public.dts_pg_type, public.dts_pg_enum, public.dts_postgres_heartbeat, public.dts_ddl_command, public.dts_args_session, and public.aliyun_dts_instance.

  • To ensure the accuracy of the displayed synchronization latency, DTS adds a heartbeat table named dts_postgres_heartbeat to the source database.

  • During data synchronization, DTS creates a replication slot with the dts_sync_ prefix in the source database to replicate data. This replication slot allows DTS to obtain incremental logs from the source database within the last 15 minutes. When the data synchronization fails or the synchronization instance is released, DTS attempts to automatically clear the replication slot.

    Note
    • If you change the password of the source database account used by the task or delete the DTS IP address from the whitelist of the source database during data synchronization, the replication slot cannot be automatically cleared. In this case, you must manually clear the replication slot in the source database. This prevents the slot from continuously accumulating and consuming disk space, which can make the source database unavailable.

    • If a failover occurs in the source database, you must log on to the secondary database to manually clear the slot.


  • Before you synchronize data, evaluate the performance of the source and destination databases. We recommend that you synchronize data during off-peak hours. Otherwise, initial full data synchronization consumes read and write resources on both the source and destination databases, which may increase the database load.

  • Initial full data synchronization runs concurrent INSERT operations. This causes table fragmentation in the destination database. As a result, the tablespace of the destination instance is larger than that of the source instance after the initial full data synchronization is complete.

  • For table-level data synchronization, if no data other than the data from DTS is written to the destination database, you can use Data Management (DMS) to perform online DDL operations. For more information, see Perform schema changes without locking tables.

  • During DTS synchronization, do not write data from other sources to the destination database. This can cause data inconsistency between the source and destination databases. For example, if you use DMS to perform online DDL operations while data from other sources is being written to the destination database, data loss may occur in the destination database.

  • If the task fails, DTS technical support will attempt to recover it within 8 hours. During the recovery process, operations such as restarting the task or adjusting its parameters may be performed.

    Note

    When parameters are adjusted, only DTS task parameters are modified. Database parameters remain unchanged. The parameters that may be modified include but are not limited to those described in Modify instance parameters.

  • When you synchronize partitioned tables, you must include both the parent table and its child partitions as synchronization objects. Otherwise, data inconsistency may occur for the partitioned table.

    Note

    The parent table of a PostgreSQL partitioned table does not directly store data. All data is stored in the child partitions. The synchronization task must include the parent table and all its child partitions. Otherwise, data from the child partitions may not be synchronized, leading to data inconsistency between the source and destination.

Special cases

  • When the source instance is an ApsaraDB RDS for PostgreSQL instance

    During synchronization, do not change the endpoint or zone of the ApsaraDB RDS for PostgreSQL instance. Otherwise, the synchronization fails.

  • When the source instance is a self-managed PostgreSQL database

    Make sure that the values of the `max_wal_senders` and `max_replication_slots` parameters are greater than the sum of the number of replication slots currently in use and the number of DTS instances to be created with this self-managed PostgreSQL database as the source.

  • When the source instance is Google Cloud Platform Cloud SQL for PostgreSQL, you must provide an account that has the `cloudsqlsuperuser` permission for the Database Account of the source database. When you select objects to sync, select objects that this account has permission to manage, or grant this account the Owner permission for the objects to be synced (for example, run the GRANT <owner_of_the_object_to_be_synced> TO <source_database_account_for_the_task> command to allow this account to perform related operations as the owner of the objects to be synced).

    Note

    An account with `cloudsqlsuperuser` permissions cannot manage data whose owner is another account with `cloudsqlsuperuser` permissions.

Synchronization from a self-managed PostgreSQL database to a PolarDB for PostgreSQL (Compatible with Oracle) instance

Source database limits

  • The tables to be synchronized must have a primary key or a UNIQUE constraint, and the columns in the constraint must contain unique values. Otherwise, duplicate data may occur in the destination database.

    Note

    If the destination table is not created by DTS (that is, Schema Synchronization is not selected for Synchronization Types), make sure that the destination table has the same primary key or non-null UNIQUE constraint as the source table to be synchronized. Otherwise, duplicate data may occur in the destination database.

  • If you synchronize data at the table level and need to edit objects, such as by mapping table or column names, a single task supports a maximum of 5,000 tables. To synchronize more tables, split them into multiple tasks or configure a task to synchronize the entire database. Otherwise, a request error may occur after you submit the task.

  • The name of the database to be synchronized cannot contain hyphens (-), for example, dts-testdata.

  • Write-ahead log (WAL):

    • The WAL feature must be enabled. Set the wal_level parameter to logical.

    • For an incremental synchronization task, DTS requires that the WAL logs of the source database are retained for more than 24 hours. For a task that includes both full and incremental synchronization, DTS requires that the WAL logs are retained for at least 7 days. After the full synchronization is complete, you can change the log retention period to more than 24 hours. If the retention period is shorter than required, the DTS task may fail because it cannot obtain the WAL logs. In extreme cases, data inconsistency or data loss may occur. Issues caused by a log retention period shorter than the required period are not covered by the DTS Service-Level Agreement (SLA).

  • If a failover occurs in the self-managed PostgreSQL database, the synchronization fails.

  • Make sure that the values of the `max_wal_senders` and `max_replication_slots` parameters are greater than the sum of the number of replication slots currently in use and the number of DTS instances to be created with this self-managed PostgreSQL database as the source.

  • If the source instance has long-running transactions and the task includes incremental synchronization, the write-ahead log (WAL) that occurs before the long-running transactions are committed cannot be cleared. This may cause the WAL logs to accumulate and exhaust the disk space of the source instance.

  • When the source instance is Google Cloud Platform Cloud SQL for PostgreSQL, you must provide an account that has the `cloudsqlsuperuser` permission for the Database Account of the source database. When you select objects to sync, select objects that this account has permission to manage, or grant this account the Owner permission for the objects to be synced (for example, run the GRANT <owner_of_the_object_to_be_synced> TO <source_database_account_for_the_task> command to allow this account to perform related operations as the owner of the objects to be synced).

    Note

    An account with `cloudsqlsuperuser` permissions cannot manage data whose owner is another account with `cloudsqlsuperuser` permissions.

  • Due to the limits of logical replication in the source database, if a single piece of data to be synchronized exceeds 256 MB after an incremental change, the synchronization instance may fail and cannot be recovered. You must reconfigure the synchronization instance.

  • During schema synchronization and full data synchronization, do not perform Data Definition Language (DDL) operations that change the schema of databases or tables. Otherwise, the data synchronization task fails.

    Note

    During the full synchronization phase, DTS queries the source database, which acquires metadata locks. This may block DDL operations on the source database.

  • If you perform a major engine version upgrade on the source database while the synchronization instance is running, the instance fails and cannot be recovered. You must reconfigure the synchronization instance.

Other limits

  • For a full or incremental synchronization task, if the tables to be synchronized in the source database contain foreign keys, triggers, or event triggers, DTS temporarily sets the `session_replication_role` parameter to `replica` at the session level if the destination database account is a privileged account or has superuser permissions. If the destination database account does not have these permissions, you must manually set the `session_replication_role` parameter to `replica` in the destination database. During this period (while `session_replication_role` is `replica`), if cascade update or delete operations occur in the source database, data inconsistency may occur. After the DTS task is released, you can change the `session_replication_role` parameter back to `origin`.

  • If a table to be synced contains a field of the SERIAL type, the source database automatically creates a Sequence for that field. Therefore, when you configure Source Objects, if you select Schema Synchronization for the Synchronization Types, we recommend that you also select Sequence or synchronize the entire schema. Otherwise, the synchronization instance may fail to run.

  • A single data synchronization task can synchronize only one database. To synchronize multiple databases, you must configure a separate task for each database.

  • DTS does not support synchronizing TimescaleDB extension tables, tables with cross-schema inheritance, or tables with unique indexes based on expressions.

  • Schemas created by installing extensions are not supported. You cannot find information about these schemas in the console when you configure the task.

  • In the following three scenarios, you must run the ALTER TABLE schema.table REPLICA IDENTITY FULL; command on the tables to be synchronized in the source database before you write data to them. This ensures data consistency. Do not lock the tables while running this command to prevent deadlocks. If you skip the related precheck items, DTS automatically runs this command during the initialization of the instance.

    • When the instance runs for the first time.

    • When you select Schema as the granularity for object selection, and a new table is created in the schema or a table to be synchronized is rebuilt using the RENAME command.

    • When you use the feature to modify synchronization objects.

    Note
    • In the command, replace schema and table with the actual schema name and table name.

    • We recommend that you perform this operation during off-peak hours.

  • DTS validates data content but does not support validation for metadata such as sequences. You must validate the metadata yourself.

  • After you switch your business to the destination instance, new sequences do not increment from the maximum value of the source sequence. You must update the sequence value in the destination database before the business switchover. For more information, see Update the sequence value in the destination database.

  • DTS creates the following temporary tables in the source database to obtain DDL statements for incremental data, the structure of incremental tables, and heartbeat information. Do not delete these temporary tables during synchronization. Otherwise, the DTS task becomes abnormal. The temporary tables are automatically deleted after the DTS instance is released.

    public.dts_pg_class, public.dts_pg_attribute, public.dts_pg_type, public.dts_pg_enum, public.dts_postgres_heartbeat, public.dts_ddl_command, public.dts_args_session, and public.aliyun_dts_instance.

  • To ensure the accuracy of the displayed synchronization latency, DTS adds a heartbeat table named dts_postgres_heartbeat to the source database.

  • During data synchronization, DTS creates a replication slot with the dts_sync_ prefix in the source database to replicate data. This replication slot allows DTS to obtain incremental logs from the source database within the last 15 minutes. When the data synchronization fails or the synchronization instance is released, DTS attempts to automatically clear the replication slot.

    Note
    • If you change the password of the source database account used by the task or delete the DTS IP address from the whitelist of the source database during data synchronization, the replication slot cannot be automatically cleared. In this case, you must manually clear the replication slot in the source database. This prevents the slot from continuously accumulating and consuming disk space, which can make the source database unavailable.

    • If a failover occurs in the source database, you must log on to the secondary database to manually clear the slot.

    For more information, see Amazon slot query information.
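
    If you need to inspect or clean up the slot manually, queries such as the following can be used in the source database; the slot name in the second statement is a placeholder:

    -- List DTS replication slots and check whether they are active.
    SELECT slot_name, active, restart_lsn
    FROM pg_replication_slots
    WHERE slot_name LIKE 'dts_sync_%';

    -- Drop a slot that is no longer needed (only when it is inactive).
    SELECT pg_drop_replication_slot('dts_sync_example_slot');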

  • Before you synchronize data, evaluate the performance of the source and destination databases. We recommend that you synchronize data during off-peak hours. Otherwise, initial full data synchronization consumes read and write resources on both the source and destination databases, which may increase the database load.

  • Initial full data synchronization runs concurrent INSERT operations. This causes table fragmentation in the destination database. As a result, the tablespace of the destination instance is larger than that of the source instance after the initial full data synchronization is complete.

  • For table-level data synchronization, if no data other than the data from DTS is written to the destination database, you can use Data Management (DMS) to perform online DDL operations. For more information, see Perform schema changes without locking tables.

  • During DTS synchronization, do not write data from other sources to the destination database. This can cause data inconsistency between the source and destination databases. For example, if you use DMS to perform online DDL operations while data from other sources is being written to the destination database, data loss may occur in the destination database.

  • If the task fails, DTS technical support will attempt to recover it within 8 hours. During the recovery process, operations such as restarting the task or adjusting its parameters may be performed.

    Note

    When parameters are adjusted, only DTS task parameters are modified. Database parameters remain unchanged. The parameters that may be modified include, but are not limited to, those described in Modify instance parameters.

  • When you synchronize partitioned tables, you must include both the parent table and its child partitions as synchronization objects. Otherwise, data inconsistency may occur for the partitioned table.

    Note

    The parent table of a PostgreSQL partitioned table does not directly store data. All data is stored in the child partitions. The synchronization task must include the parent table and all its child partitions. Otherwise, data from the child partitions may not be synchronized, leading to data inconsistency between the source and destination.
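
    To confirm that every child partition of a parent table is included in the synchronization objects, you can list the partitions in the source database; the parent table name is a placeholder:

    -- List all child partitions attached to the parent table.
    SELECT inhrelid::regclass AS partition_name
    FROM pg_inherits
    WHERE inhparent = 'public.orders_parent'::regclass;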

Synchronization from an ApsaraDB RDS for PostgreSQL instance to an ApsaraDB RDS for MySQL instance

Type

Description

Source database limits

  • The tables to be synchronized must have a primary key or a UNIQUE constraint, and the fields must be unique. Otherwise, duplicate data may occur in the destination database.

    Note

    If the destination table is not created by DTS (that is, the Synchronization Types is not set to Schema Synchronization), you must ensure that the table has the same primary key or non-null UNIQUE constraint as the table to be synchronized from the source database. Otherwise, duplicate data may occur in the destination database.

  • The name of the database to be synchronized cannot contain hyphens (-), for example, dts-testdata.

  • If you synchronize data at the table level and need to edit objects, such as by mapping table or column names, a single task supports a maximum of 5,000 tables. To synchronize more tables, split them into multiple tasks or configure a task to synchronize the entire database. Otherwise, a request error may occur after you submit the task.

  • Write-ahead log (WAL):

    • The WAL feature must be enabled. Set the wal_level parameter to logical.

    • For an incremental synchronization task, DTS requires that the WAL logs of the source database are retained for more than 24 hours. For a task that includes both full and incremental synchronization, DTS requires that the WAL logs are retained for at least 7 days. After the full synchronization is complete, you can change the log retention period to more than 24 hours. If the retention period is shorter than required, the DTS task may fail because it cannot obtain the WAL logs. In extreme cases, data inconsistency or data loss may occur. Issues caused by a log retention period shorter than the required period are not covered by the DTS Service-Level Agreement (SLA).
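
    As a quick check of the wal_level requirement above, you can run the following in the source database; the ALTER SYSTEM statement applies only to a self-managed PostgreSQL database and requires a restart to take effect (for an ApsaraDB RDS for PostgreSQL instance, change the parameter in the console instead):

    -- Check the current setting; it must be logical for DTS to read incremental data.
    SHOW wal_level;

    -- For a self-managed PostgreSQL database, change the setting and then restart the instance.
    ALTER SYSTEM SET wal_level = logical;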

  • Source database operation limits

    • To ensure that the synchronization task runs as expected and to prevent logical subscription interruptions caused by failover, you must enable Logical Replication Slot Failover for the ApsaraDB RDS for PostgreSQL instance. For more information, see Logical Replication Slot Failover.

    • Due to the limits of logical replication in the source database, if a single piece of data to be synchronized exceeds 256 MB after an incremental change, the synchronization instance may fail and cannot be recovered. You must reconfigure the synchronization instance.

    • During schema synchronization and full data synchronization, do not perform Data Definition Language (DDL) operations that change the schema of databases or tables. Otherwise, the data synchronization task fails.

      Note

      During the full synchronization phase, DTS queries the source database, which acquires metadata locks. This may block DDL operations on the source database.

  • If the source instance has long-running transactions and the task includes incremental synchronization, the write-ahead log (WAL) that occurs before the long-running transactions are committed cannot be cleared. This may cause the WAL logs to accumulate and exhaust the disk space of the source instance.
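
    To find long-running transactions in the source database that may block WAL cleanup, a query against pg_stat_activity such as the following can help:

    -- Show open transactions ordered by how long they have been running.
    SELECT pid, usename, now() - xact_start AS xact_duration, state, query
    FROM pg_stat_activity
    WHERE xact_start IS NOT NULL
    ORDER BY xact_duration DESC;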

  • If you perform a major engine version upgrade on the source database while the synchronization instance is running, the instance fails and cannot be recovered. You must reconfigure the synchronization instance.

Other limits

  • A single data synchronization task can synchronize only one database. To synchronize multiple databases, you must configure a separate task for each database.

  • DTS does not support synchronizing TimescaleDB extension tables, tables with cross-schema inheritance, or tables with unique indexes based on expressions.

  • Schemas created by installing extensions are not supported. You cannot find information about these schemas in the console when you configure the task.

  • Synchronization of the following objects is not supported: DATATYPE, VIEW, PROCEDURE, FUNCTION, SEQUENCE, EXTENSION, OPERATOR, RULE, DEFAULT_CONSTRAINT, and TRIGGER.

  • In the following three scenarios, you must run the ALTER TABLE schema.table REPLICA IDENTITY FULL; command on the tables to be synchronized in the source database before you write data to them. This ensures data consistency. To prevent deadlocks, do not lock the tables while this command is running. If you skip the related precheck items, DTS automatically runs this command during the initialization of the instance.

    • When the instance runs for the first time.

    • When you select Schema as the granularity for object selection, and a new table is created in the schema or a table to be synchronized is rebuilt using the RENAME command.

    • When you use the feature to modify synchronization objects.

    Note
    • In the command, replace schema and table with the actual schema name and table name.

    • We recommend that you perform this operation during off-peak hours.

  • DTS creates the following temporary tables in the source database to obtain DDL statements for incremental data (these DDL statements are not written to the destination database), the structure of incremental tables, and heartbeat information. Do not delete these temporary tables during synchronization. Otherwise, the DTS task becomes abnormal. The temporary tables are automatically deleted after the DTS instance is released.

    public.dts_pg_class, public.dts_pg_attribute, public.dts_pg_type, public.dts_pg_enum, public.dts_postgres_heartbeat, public.dts_ddl_command, public.dts_args_session, and public.aliyun_dts_instance.

  • To ensure the accuracy of the displayed synchronization latency, DTS adds a heartbeat table named dts_postgres_heartbeat to the source database.

  • During data synchronization, DTS creates a replication slot with the dts_sync_ prefix in the source database to replicate data. This replication slot allows DTS to obtain incremental logs from the source database within the last 15 minutes. When the data synchronization fails or the synchronization instance is released, DTS attempts to automatically clear the replication slot.

    Note
    • If you change the password of the source database account used by the task or delete the DTS IP address from the whitelist of the source database during data synchronization, the replication slot cannot be automatically cleared. In this case, you must manually clear the replication slot in the source database. This prevents the slot from continuously accumulating and consuming disk space, which can make the source database unavailable.

    • If a failover occurs in the source database, you must log on to the secondary database to manually clear the slot.

    For more information, see Amazon slot query information.

  • If the data to be synchronized contains content that requires 4-byte storage, such as uncommon characters or emojis, the destination database and tables must use the utf8mb4 charset.

    Note

    If you use DTS for schema synchronization, you must set the instance-level parameter character_set_server to utf8mb4 in the destination database.
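
    The following statements, run in the destination MySQL database, show how you might verify the character set configuration; the table name is a placeholder:

    -- Check the instance-level server character set.
    SHOW VARIABLES LIKE 'character_set_server';

    -- Check the character set used by a destination table.
    SHOW CREATE TABLE `testdb`.`orders`;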

  • Before you synchronize data, evaluate the performance of the source and destination databases. We recommend that you synchronize data during off-peak hours. Otherwise, initial full data synchronization consumes read and write resources on both the source and destination databases, which may increase the database load.

  • Initial full data synchronization runs concurrent INSERT operations. This causes table fragmentation in the destination database. As a result, the tablespace of the destination instance is larger than that of the source instance after the initial full data synchronization is complete.

  • For table-level data synchronization, if no data other than the data from DTS is written to the destination database, you can use Data Management (DMS) to perform online DDL operations. For more information, see Perform schema changes without locking tables.

  • During DTS synchronization, do not write data from other sources to the destination database. This can cause data inconsistency between the source and destination databases. For example, if you use DMS to perform online DDL operations while data from other sources is being written to the destination database, data loss may occur in the destination database.

  • If a DDL statement fails to be written to the destination database, the DTS task continues to run. You must check the task logs to view the failed DDL statement. For more information about how to view task logs, see Query task logs.

  • If source table columns that differ only in case are synchronized to the same destination MySQL table, the synchronization result may not be as expected. This is because MySQL column names are case-insensitive.

  • After the data synchronization is complete (the Status of the instance is Completed), we recommend that you run the ANALYZE TABLE <table_name> statement to confirm that all data has been written to the destination table. For example, if an HA switchover is triggered in the destination MySQL database, data might exist only in memory and not yet be persisted to disk, which can result in data loss.

  • If the task fails, DTS technical support will attempt to recover it within 8 hours. During the recovery process, operations such as restarting the task or adjusting its parameters may be performed.

    Note

    When parameters are adjusted, only DTS task parameters are modified. Database parameters remain unchanged. The parameters that may be modified include, but are not limited to, those described in Modify instance parameters.

  • When you synchronize partitioned tables, you must include both the parent table and its child partitions as synchronization objects. Otherwise, data inconsistency may occur for the partitioned table.

    Note

    The parent table of a PostgreSQL partitioned table does not directly store data. All data is stored in the child partitions. The synchronization task must include the parent table and all its child partitions. Otherwise, data from the child partitions may not be synchronized, leading to data inconsistency between the source and destination.

Special cases

  • When the source instance is an ApsaraDB RDS for PostgreSQL instance

    During synchronization, do not change the endpoint or zone of the ApsaraDB RDS for PostgreSQL instance. Otherwise, the synchronization fails.

  • When the source instance is a self-managed PostgreSQL database

    Make sure that the values of the `max_wal_senders` and `max_replication_slots` parameters are greater than the sum of the number of replication slots currently in use and the number of DTS instances to be created with this self-managed PostgreSQL database as the source.
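
    In a self-managed PostgreSQL source database, you can compare the configured limits with the number of replication slots that are already in use, for example:

    -- Configured upper limits.
    SHOW max_wal_senders;
    SHOW max_replication_slots;

    -- Number of replication slots currently in use.
    SELECT count(*) FROM pg_replication_slots;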

  • When the source instance is Google Cloud Platform Cloud SQL for PostgreSQL, you must provide an account that has the `cloudsqlsuperuser` permission for the Database Account of the source database. When you select objects to sync, select objects that this account has permission to manage, or grant this account the Owner permission for the objects to be synced (for example, run the GRANT <owner_of_the_object_to_be_synced> TO <source_database_account_for_the_task> command to allow this account to perform related operations as the owner of the objects to be synced).

    Note

    An account with `cloudsqlsuperuser` permissions cannot manage data whose owner is another account with `cloudsqlsuperuser` permissions.

Synchronization from an ApsaraDB RDS for PostgreSQL instance to a PolarDB for PostgreSQL instance

  • One-way synchronization

    Type

    Description

    Source database limits

    • The tables to be synchronized must have a primary key or a UNIQUE constraint, and the fields must be unique. Otherwise, duplicate data may occur in the destination database.

      Note

      If the destination table is not created by DTS (that is, the Synchronization Types is not set to Schema Synchronization), you must ensure that the table has the same primary key or non-null UNIQUE constraint as the table to be synchronized from the source database. Otherwise, duplicate data may occur in the destination database.

    • The name of the database to be synchronized cannot contain hyphens (-), for example, dts-testdata.

    • If you synchronize data at the table level and need to edit objects, such as by mapping table or column names, a single task supports a maximum of 5,000 tables. To synchronize more tables, split them into multiple tasks or configure a task to synchronize the entire database. Otherwise, a request error may occur after you submit the task.

    • Write-ahead log (WAL):

      • The WAL feature must be enabled. Set the wal_level parameter to logical.

      • For an incremental synchronization task, DTS requires that the WAL logs of the source database are retained for more than 24 hours. For a task that includes both full and incremental synchronization, DTS requires that the WAL logs are retained for at least 7 days. After the full synchronization is complete, you can change the log retention period to more than 24 hours. If the retention period is shorter than required, the DTS task may fail because it cannot obtain the WAL logs. In extreme cases, data inconsistency or data loss may occur. Issues caused by a log retention period shorter than the required period are not covered by the DTS Service-Level Agreement (SLA).

    • If the source instance has long-running transactions and the task includes incremental synchronization, the write-ahead log (WAL) that occurs before the long-running transactions are committed cannot be cleared. This may cause the WAL logs to accumulate and exhaust the disk space of the source instance.

    • Due to the limits of logical replication in the source database, if a single piece of data to be synchronized exceeds 256 MB after an incremental change, the synchronization instance may fail and cannot be recovered. You must reconfigure the synchronization instance.

    • During schema synchronization and full data synchronization, do not perform Data Definition Language (DDL) operations that change the schema of databases or tables. Otherwise, the data synchronization task fails.

      Note

      During the full synchronization phase, DTS queries the source database, which acquires metadata locks. This may block DDL operations on the source database.

    • If you perform a major engine version upgrade on the source database while the synchronization instance is running, the instance fails and cannot be recovered. You must reconfigure the synchronization instance.

    Other limits

    • A single data synchronization task can synchronize only one database. To synchronize multiple databases, you must configure a separate task for each database.

    • DTS does not support synchronizing TimescaleDB extension tables, tables with cross-schema inheritance, or tables with unique indexes based on expressions.

    • Schemas created by installing extensions are not supported. You cannot find information about these schemas in the console when you configure the task.

    • If a table to be synced contains a field of the SERIAL type, the source database automatically creates a Sequence for that field. Therefore, when you configure Source Objects, if you select Schema Synchronization for the Synchronization Types, we recommend that you also select Sequence or synchronize the entire schema. Otherwise, the synchronization instance may fail to run.

    • If your task includes full or incremental synchronization, and the tables to be synchronized in the source database contain foreign keys, triggers, or event triggers, DTS temporarily sets the session_replication_role parameter to replica at the session level during full or incremental synchronization, provided that the destination database account is a privileged account. If the destination database account does not have this permission, you must manually set the session_replication_role parameter to replica in the destination database. While session_replication_role is set to replica, cascading update or delete operations in the source database may cause data inconsistency. After the DTS synchronization task is released, you can change the session_replication_role parameter back to origin.

    • In the following three scenarios, you must run the ALTER TABLE schema.table REPLICA IDENTITY FULL; command on the tables to be synchronized in the source database before you write data to them. This ensures data consistency. To prevent deadlocks, do not lock the tables while this command is running. If you skip the related precheck items, DTS automatically runs this command during the initialization of the instance.

      • When the instance runs for the first time.

      • When you select Schema as the granularity for object selection, and a new table is created in the schema or a table to be synchronized is rebuilt using the RENAME command.

      • When you use the feature to modify synchronization objects.

      Note
      • In the command, replace schema and table with the actual schema name and table name.

      • We recommend that you perform this operation during off-peak hours.

    • DTS creates the following temporary tables in the source database to obtain DDL statements for incremental data, the structure of incremental tables, and heartbeat information. Do not delete these temporary tables during synchronization. Otherwise, the DTS task becomes abnormal. The temporary tables are automatically deleted after the DTS instance is released.

      public.dts_pg_class, public.dts_pg_attribute, public.dts_pg_type, public.dts_pg_enum, public.dts_postgres_heartbeat, public.dts_ddl_command, public.dts_args_session, and public.aliyun_dts_instance.

    • To ensure the accuracy of the displayed synchronization latency, DTS adds a heartbeat table named dts_postgres_heartbeat to the source database.

    • During data synchronization, DTS creates a replication slot with the dts_sync_ prefix in the source database to replicate data. This replication slot allows DTS to obtain incremental logs from the source database within the last 15 minutes. When the data synchronization fails or the synchronization instance is released, DTS attempts to automatically clear the replication slot.

      Note
      • If you change the password of the source database account used by the task or delete the DTS IP address from the whitelist of the source database during data synchronization, the replication slot cannot be automatically cleared. In this case, you must manually clear the replication slot in the source database. This prevents the slot from continuously accumulating and consuming disk space, which can make the source database unavailable.

      • If a failover occurs in the source database, you must log on to the secondary database to manually clear the slot.

      For more information, see Amazon slot query information.

    • Before you synchronize data, evaluate the performance of the source and destination databases. We recommend that you synchronize data during off-peak hours. Otherwise, initial full data synchronization consumes read and write resources on both the source and destination databases, which may increase the database load.

    • Initial full data synchronization runs concurrent INSERT operations. This causes table fragmentation in the destination database. As a result, the tablespace of the destination instance is larger than that of the source instance after the initial full data synchronization is complete.

    • For table-level data synchronization, if no data other than the data from DTS is written to the destination database, you can use Data Management (DMS) to perform online DDL operations. For more information, see Perform schema changes without locking tables.

    • During DTS synchronization, do not write data from other sources to the destination database. This can cause data inconsistency between the source and destination databases. For example, if you use DMS to perform online DDL operations while data from other sources is being written to the destination database, data loss may occur in the destination database.

    • DTS validates data content but does not support validation for metadata such as sequences. You must validate the metadata yourself.

    • After you switch your business to the destination instance, new sequences do not increment from the maximum value of the source sequence. You must update the sequence value in the destination database before the business switchover. For more information, see Update the sequence value in the destination database.

    • If the task fails, DTS technical support will attempt to recover it within 8 hours. During the recovery process, operations such as restarting the task or adjusting its parameters may be performed.

      Note

      When parameters are adjusted, only DTS task parameters are modified. Database parameters remain unchanged. The parameters that may be modified include, but are not limited to, those described in Modify instance parameters.

    • When you synchronize partitioned tables, you must include both the parent table and its child partitions as synchronization objects. Otherwise, data inconsistency may occur for the partitioned table.

      Note

      The parent table of a PostgreSQL partitioned table does not directly store data. All data is stored in the child partitions. The synchronization task must include the parent table and all its child partitions. Otherwise, data from the child partitions may not be synchronized, leading to data inconsistency between the source and destination.

    Special cases

    • When the source instance is an ApsaraDB RDS for PostgreSQL instance

      During synchronization, do not change the endpoint or zone of the ApsaraDB RDS for PostgreSQL instance. Otherwise, the synchronization fails.

    • When the source instance is a self-managed PostgreSQL database

      Make sure that the values of the `max_wal_senders` and `max_replication_slots` parameters are greater than the sum of the number of replication slots currently in use and the number of DTS instances to be created with this self-managed PostgreSQL database as the source.

    • When the source instance is Google Cloud Platform Cloud SQL for PostgreSQL, you must provide an account that has the `cloudsqlsuperuser` permission for the Database Account of the source database. When you select objects to sync, select objects that this account has permission to manage, or grant this account the Owner permission for the objects to be synced (for example, run the GRANT <owner_of_the_object_to_be_synced> TO <source_database_account_for_the_task> command to allow this account to perform related operations as the owner of the objects to be synced).

      Note

      An account with `cloudsqlsuperuser` permissions cannot manage data whose owner is another account with `cloudsqlsuperuser` permissions.

  • Two-way synchronization

    Type

    Description

    Source database limits

    • The tables to be synchronized must have a primary key or a UNIQUE constraint, and the fields must be unique. Otherwise, duplicate data may occur in the destination database.

      Note

      If the destination table is not created by DTS (that is, the Synchronization Types is not set to Schema Synchronization), you must ensure that the table has the same primary key or non-null UNIQUE constraint as the table to be synchronized from the source database. Otherwise, duplicate data may occur in the destination database.

    • The name of the database to be synchronized cannot contain hyphens (-), for example, dts-testdata.

    • If you synchronize data at the table level and need to edit objects, such as by mapping table or column names, a single task supports a maximum of 5,000 tables. To synchronize more tables, split them into multiple tasks or configure a task to synchronize the entire database. Otherwise, a request error may occur after you submit the task.

    • Write-ahead log (WAL):

      • The WAL feature must be enabled. Set the wal_level parameter to logical.

      • For an incremental synchronization task, DTS requires that the WAL logs of the source database are retained for more than 24 hours. For a task that includes both full and incremental synchronization, DTS requires that the WAL logs are retained for at least 7 days. After the full synchronization is complete, you can change the log retention period to more than 24 hours. If the retention period is shorter than required, the DTS task may fail because it cannot obtain the WAL logs. In extreme cases, data inconsistency or data loss may occur. Issues caused by a log retention period shorter than the required period are not covered by the DTS Service-Level Agreement (SLA).

    • If the source instance has long-running transactions and the task includes incremental synchronization, the write-ahead log (WAL) that occurs before the long-running transactions are committed cannot be cleared. This may cause the WAL logs to accumulate and exhaust the disk space of the source instance.

    • Due to the limits of logical replication in the source database, if a single piece of data to be synchronized exceeds 256 MB after an incremental change, the synchronization instance may fail and cannot be recovered. You must reconfigure the synchronization instance.

    • During schema synchronization and full data synchronization, do not perform Data Definition Language (DDL) operations that change the schema of databases or tables. Otherwise, the data synchronization task fails.

      Note

      During the full synchronization phase, DTS queries the source database, which acquires metadata locks. This may block DDL operations on the source database.

    • If you perform a major engine version upgrade on the source database while the synchronization instance is running, the instance fails and cannot be recovered. You must reconfigure the synchronization instance.

    Other limits

    • A single data synchronization task can synchronize only one database. To synchronize multiple databases, you must configure a separate task for each database.

    • DTS does not support synchronizing TimescaleDB extension tables, tables with cross-schema inheritance, or tables with unique indexes based on expressions.

    • Schemas created by installing extensions are not supported. You cannot find information about these schemas in the console when you configure the task.

    • If a table to be synced contains a field of the SERIAL type, the source database automatically creates a Sequence for that field. Therefore, when you configure Source Objects, if you select Schema Synchronization for the Synchronization Types, we recommend that you also select Sequence or synchronize the entire schema. Otherwise, the synchronization instance may fail to run.

    • If your task includes full or incremental synchronization, and the tables to be synchronized in the source database contain foreign keys, triggers, or event triggers, DTS temporarily sets the session_replication_role parameter to replica at the session level during full or incremental synchronization, provided that the destination database account is a privileged account. If the destination database account does not have this permission, you must manually set the session_replication_role parameter to replica in the destination database. While session_replication_role is set to replica, cascading update or delete operations in the source database may cause data inconsistency. After the DTS synchronization task is released, you can change the session_replication_role parameter back to origin.

    • In the following three scenarios, you must run the ALTER TABLE schema.table REPLICA IDENTITY FULL; command on the tables to be synchronized in the source database before you write data to them. This ensures data consistency. To prevent deadlocks, do not lock the tables while this command is running. If you skip the related precheck items, DTS automatically runs this command during the initialization of the instance.

      • When the instance runs for the first time.

      • When you select Schema as the granularity for object selection, and a new table is created in the schema or a table to be synchronized is rebuilt using the RENAME command.

      • When you use the feature to modify synchronization objects.

      Note
      • In the command, replace schema and table with the actual schema name and table name.

      • We recommend that you perform this operation during off-peak hours.

    • DTS creates the following temporary tables in the source database to obtain DDL statements for incremental data, the structure of incremental tables, and heartbeat information. Do not delete these temporary tables during synchronization. Otherwise, the DTS task becomes abnormal. The temporary tables are automatically deleted after the DTS instance is released.

      public.dts_pg_class, public.dts_pg_attribute, public.dts_pg_type, public.dts_pg_enum, public.dts_postgres_heartbeat, public.dts_ddl_command, public.dts_args_session, and public.aliyun_dts_instance.

    • To ensure the accuracy of the displayed synchronization latency, DTS adds a heartbeat table named dts_postgres_heartbeat to the source database.

    • During data synchronization, DTS creates a replication slot with the dts_sync_ prefix in the source database to replicate data. This replication slot allows DTS to obtain incremental logs from the source database within the last 15 minutes. When the data synchronization fails or the synchronization instance is released, DTS attempts to automatically clear the replication slot.

      Note
      • If you change the password of the source database account used by the task or delete the DTS IP address from the whitelist of the source database during data synchronization, the replication slot cannot be automatically cleared. In this case, you must manually clear the replication slot in the source database. This prevents the slot from continuously accumulating and consuming disk space, which can make the source database unavailable.

      • If a failover occurs in the source database, you must log on to the secondary database to manually clear the slot.

      For more information, see Amazon slot query information.

    • Before you synchronize data, evaluate the performance of the source and destination databases. We recommend that you synchronize data during off-peak hours. Otherwise, initial full data synchronization consumes read and write resources on both the source and destination databases, which may increase the database load.

    • Initial full data synchronization runs concurrent INSERT operations. This causes table fragmentation in the destination database. As a result, the tablespace of the destination instance is larger than that of the source instance after the initial full data synchronization is complete.

    • For table-level data synchronization, if no data other than the data from DTS is written to the destination database, you can use Data Management (DMS) to perform online DDL operations. For more information, see Perform schema changes without locking tables.

    • During DTS synchronization, do not write data from other sources to the destination database. This can cause data inconsistency between the source and destination databases. For example, if you use DMS to perform online DDL operations while data from other sources is being written to the destination database, data loss may occur in the destination database.

    • DTS validates data content but does not support validation for metadata such as sequences. You must validate the metadata yourself.

    • After you switch your business to the destination instance, new sequences do not increment from the maximum value of the source sequence. Before the business switchover, you must query the maximum value of the corresponding sequence in the source database and then set it as the initial value for the corresponding sequence in the destination database. The following command queries the sequence values in the source database:

      do language plpgsql $$
      declare
        nsp name;
        rel name;
        val int8;
      begin
        -- Iterate over every sequence (relkind = 'S') in the source database.
        for nsp,rel in select nspname,relname from pg_class t2 , pg_namespace t3 where t2.relnamespace=t3.oid and t2.relkind='S'
        loop
          -- Read the current value of the sequence.
          execute format($_$select last_value from %I.%I$_$, nsp, rel) into val;
          -- Print the setval statement to run in the destination database.
          raise notice '%',
          format($_$select setval('%I.%I'::regclass, %s);$_$, nsp, rel, val+1);
        end loop;
      end;
      $$;
      Note

      The SQL statements that are returned after you run the command contain all sequences of the source database. Run the SQL statements in the destination database as needed.

    • When a two-way synchronization instance is running, DTS creates a schema named dts in the source and destination databases to prevent data loops. Do not modify this schema while the instance is running.

    • A two-way synchronization instance includes a forward task and a reverse task. When you configure or reset a two-way synchronization instance, if the destination object of one task is the source object of the other task:

      • Only one task can synchronize full and incremental data. The other task can only synchronize incremental data.

      • The source data of the current task can only be synchronized to the destination of the current task. The synchronized data will not be used as the source data for the other task.

    • If the task fails, DTS technical support will attempt to recover it within 8 hours. During the recovery process, operations such as restarting the task or adjusting its parameters may be performed.

      Note

      When parameters are adjusted, only DTS task parameters are modified. Database parameters remain unchanged. The parameters that may be modified include, but are not limited to, those described in Modify instance parameters.

    • When you synchronize partitioned tables, you must include both the parent table and its child partitions as synchronization objects. Otherwise, data inconsistency may occur for the partitioned table.

      Note

      The parent table of a PostgreSQL partitioned table does not directly store data. All data is stored in the child partitions. The synchronization task must include the parent table and all its child partitions. Otherwise, data from the child partitions may not be synchronized, leading to data inconsistency between the source and destination.

    Special cases

    • When the source instance is an ApsaraDB RDS for PostgreSQL instance

      During synchronization, do not change the endpoint or zone of the ApsaraDB RDS for PostgreSQL instance. Otherwise, the synchronization fails.

    • When the source instance is a self-managed PostgreSQL database

      Make sure that the values of the `max_wal_senders` and `max_replication_slots` parameters are greater than the sum of the number of replication slots currently in use and the number of DTS instances to be created with this self-managed PostgreSQL database as the source.

    • When the source instance is Google Cloud Platform Cloud SQL for PostgreSQL, you must provide an account that has the `cloudsqlsuperuser` permission for the Database Account of the source database. When you select objects to sync, select objects that this account has permission to manage, or grant this account the Owner permission for the objects to be synced (for example, run the GRANT <owner_of_the_object_to_be_synced> TO <source_database_account_for_the_task> command to allow this account to perform related operations as the owner of the objects to be synced).

      Note

      An account with `cloudsqlsuperuser` permissions cannot manage data whose owner is another account with `cloudsqlsuperuser` permissions.