
Data Transmission Service: Notes and limits for migrating from a PostgreSQL source

Last Updated: Nov 20, 2025

If your source database is a PostgreSQL database, such as a self-managed PostgreSQL database or an RDS for PostgreSQL instance, review the notes and limits in this topic before you configure a data migration task. This helps ensure that the data migration task runs as expected.

Overview of migration solutions for a PostgreSQL source

Review the notes and limits for your migration task based on the migration solution:

PostgreSQL-to-PostgreSQL migration

  • Migration between RDS for PostgreSQL instances


    Source database limits

    • Tables to be migrated must have primary keys or UNIQUE constraints, and the fields must be unique. Otherwise, data duplication may occur in the destination database.

      Note

      If the destination table is not created by DTS (that is, you did not select Schema Migration for Migration Types), you must ensure that the table has the same primary key or a non-empty UNIQUE constraint as the source table. Otherwise, duplicate data may appear in the destination database.

      The name of the database to be migrated cannot contain a hyphen (-), such as dts-testdata.

    • If you migrate objects at the table level and need to edit them, for example, by mapping column names, a single data migration task supports a maximum of 1,000 tables. If you exceed this limit, an error is reported after you submit the task. In this case, split the tables into multiple migration tasks or configure a task to migrate the entire database.

    • DTS does not support the migration of temporary tables, internal triggers, or some functions (C-language functions, and internal functions for PROCEDURE and FUNCTION) from the source database. DTS supports the migration of some custom types (COMPOSITE, ENUM, and RANGE types), as well as primary key, foreign key, UNIQUE, and CHECK constraints.

    • For incremental migration, write-ahead logging (WAL):

      • Must be enabled for logical decoding: set the wal_level parameter to logical.

      • For an incremental migration task, DTS requires that WAL logs in the source database are retained for more than 24 hours. For a task that includes both full migration and incremental migration, DTS requires that WAL logs are retained for at least 7 days. You can change the log retention period to more than 24 hours after the full migration is complete. Otherwise, the task may fail because DTS cannot obtain the WAL logs. In extreme cases, this may cause data inconsistency or data loss. Issues caused by a log retention period shorter than the one required by DTS are not covered by the DTS Service-Level Agreement (SLA).
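      A minimal sketch of checking and enabling logical decoding on a self-managed source (changing wal_level requires a restart; on an RDS for PostgreSQL instance, modify the parameter in the console instead):

      ```sql
      -- Check the current WAL level; incremental migration requires 'logical'.
      SHOW wal_level;

      -- On a self-managed instance, enable logical decoding
      -- (takes effect only after the server is restarted).
      ALTER SYSTEM SET wal_level = logical;
      ```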

    • Source database operation limits:

      • During schema migration and full migration, do not perform DDL operations to change the schema of the database or tables. Otherwise, the data migration task fails.

      • If you perform only full data migration, do not write new data to the source database. Otherwise, data inconsistency occurs between the source and destination databases. To ensure real-time data consistency, select Schema Migration, Full Data Migration, and Incremental Data Migration.

      • Due to the limits of logical replication in the source database, if a single piece of incremental data to be migrated exceeds 256 MB during the migration, the DTS instance may fail and cannot be recovered. You must reconfigure the DTS instance.

    • If the source database has long-running transactions and the instance performs incremental migration, the write-ahead logs (WALs) before the transaction commit may not be cleared. This can cause the logs to accumulate and lead to insufficient disk space in the source database.

    • If you perform a major engine version upgrade on the source database while the DTS instance is running, the instance fails and cannot be recovered. You must reconfigure the DTS instance.
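    To spot long-running transactions that may block WAL cleanup, you can query pg_stat_activity in the source database; the one-hour threshold below is an arbitrary example:

    ```sql
    -- List transactions that have been open for more than one hour.
    SELECT pid, usename, state, xact_start, now() - xact_start AS duration
    FROM pg_stat_activity
    WHERE xact_start IS NOT NULL
      AND now() - xact_start > interval '1 hour'
    ORDER BY xact_start;
    ```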

    Other limits

    • To ensure the migration task runs as expected and to prevent logical replication interruptions caused by failover, RDS for PostgreSQL must support and enable Logical Replication Slot Failover. For information about how to set this up, see Logical Replication Slot Failover.

    • A single data migration task can migrate only one database. To migrate multiple databases, you must configure a data migration task for each database.

    • DTS does not support the migration of TimescaleDB extension tables or tables with cross-schema inheritance.

    • If a table to be migrated contains a field of the SERIAL type, the source database automatically creates a Sequence for that field. Therefore, when you configure Source Objects, if you select Schema Migration for the Migration Types, we recommend that you also select Sequence or perform a full schema migration. Otherwise, the migration instance may fail.

    • If the DTS instance performs an incremental data migration task, you must run the ALTER TABLE schema.table REPLICA IDENTITY FULL; command on the tables to be migrated in the source database before you write data to them. This applies to the following two scenarios and ensures data consistency. While this command is being executed, we recommend that you do not perform operations that lock the tables. Otherwise, the command may block and the tables may remain locked. If you skip the relevant check in the precheck, DTS automatically runs this command during instance initialization.

      • When the instance runs for the first time.

      • When the migration object granularity is Schema, and a new table is created in the schema to be migrated or a table to be migrated is rebuilt using the RENAME command.

      Note
      • In the command, replace schema and table with the schema name and table name of the data to be migrated.

      • We recommend that you perform this operation during off-peak hours.
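      For example, for a hypothetical table public.orders, the command and a quick verification look like this:

      ```sql
      -- Record the full old row image in the WAL for UPDATE and DELETE operations.
      ALTER TABLE public.orders REPLICA IDENTITY FULL;

      -- Verify the setting: relreplident = 'f' means FULL.
      SELECT relname, relreplident FROM pg_class WHERE relname = 'orders';
      ```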

    • DTS validates data content but does not currently support the validation of metadata such as Sequences. You must validate this metadata yourself.

    • After you switch your business to the destination instance, new Sequences do not increment from the maximum value of the source Sequence. You must update the Sequence value in the destination database before the business switchover. For more information, see Update the Sequence value in the destination database.
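      As a sketch, assuming a table public.orders whose id column is backed by the sequence public.orders_id_seq (hypothetical names), you could advance the destination sequence before the switchover:

      ```sql
      -- In the source database, find the current maximum value of the column.
      SELECT max(id) FROM public.orders;

      -- In the destination database, set the sequence past that value
      -- (substitute the value returned by the query above).
      SELECT setval('public.orders_id_seq', 100001);
      ```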

    • DTS creates the following temporary tables in the source database to obtain DDL statements for incremental data, the structure of incremental tables, and heartbeat information. Do not delete these temporary tables during migration. Otherwise, the DTS task becomes abnormal. The temporary tables are automatically deleted after the DTS instance is released.

      public.dts_pg_class, public.dts_pg_attribute, public.dts_pg_type, public.dts_pg_enum, public.dts_postgres_heartbeat, public.dts_ddl_command, public.dts_args_session, and public.aliyun_dts_instance.

    • For a full or incremental migration task, if the tables to be migrated in the source database contain foreign keys, triggers, or event triggers, and the destination database account is a privileged account or has superuser permissions, DTS temporarily sets the `session_replication_role` parameter to `replica` at the session level. If the destination database account does not have these permissions, you must manually set the `session_replication_role` parameter to `replica` in the destination database. While `session_replication_role` is set to `replica`, cascade update or delete operations in the source database may cause data inconsistency. After the DTS migration task is released, you can change the `session_replication_role` parameter back to `origin`.
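      If you need to set the parameter manually in the destination database, the statements are standard PostgreSQL:

      ```sql
      -- Suppress foreign key checks and triggers for the current session
      -- so that replicated changes are not rejected or re-triggered.
      SET session_replication_role = replica;

      -- After the DTS migration task is released, restore the default.
      SET session_replication_role = origin;
      ```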

    • To ensure the accuracy of the displayed latency for incremental data migration, DTS adds a heartbeat table named dts_postgres_heartbeat to the source database.

    • During incremental data migration, DTS creates a replication slot with the prefix dts_sync_ in the source database to replicate data. Using this replication slot, DTS can obtain incremental logs from the source database within the last 15 minutes. When the data migration fails or the migration instance is released, DTS attempts to automatically clear this replication slot.

      Note
      • If you change the password of the source database account used by the task or delete the DTS IP address from the whitelist of the source database during data migration, the replication slot cannot be automatically cleared. In this case, you must manually clear the replication slot in the source database to prevent it from accumulating and occupying disk space, which can make the source database unavailable.

      • If a failover occurs in the source database, you must log on to the secondary database to manually clear the slot.
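      To find and manually clear a leftover slot, you can use the standard catalog view and function (the slot name below is a placeholder; replace it with the name returned by the query):

      ```sql
      -- List replication slots created by DTS.
      SELECT slot_name, active, restart_lsn
      FROM pg_replication_slots
      WHERE slot_name LIKE 'dts_sync_%';

      -- Drop an inactive leftover slot.
      SELECT pg_drop_replication_slot('dts_sync_example_slot');
      ```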

    • Before you migrate data, evaluate the performance of the source and destination databases. We also recommend that you perform data migration during off-peak hours. Otherwise, DTS consumes read and write resources on the source and destination databases during full data migration, which may increase the database load.

    • Because full data migration involves concurrent INSERT operations, fragmentation occurs in the destination database tables. As a result, the storage space used by the tables in the destination database is larger than that in the source database after the full migration is complete.

    • Confirm that the migration precision of DTS for columns of the FLOAT or DOUBLE data type meets your business requirements. DTS reads the values of these columns using ROUND(COLUMN,PRECISION). If the precision is not explicitly defined, DTS migrates FLOAT columns with a precision of 38 digits and DOUBLE columns with a precision of 308 digits.

    • DTS attempts to resume failed migration tasks within seven days. Therefore, before you switch your business to the destination instance, you must end or release the task. Alternatively, revoke the write permissions of the account that DTS uses to access the destination instance using the revoke command. This prevents the source data from overwriting the data in the destination instance if the task is automatically resumed.
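      A sketch of revoking write privileges from a hypothetical DTS account named dts_account on the public schema of the destination database:

      ```sql
      -- Prevent a resumed DTS task from writing to the destination instance.
      REVOKE INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public FROM dts_account;
      ```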

    • If an instance fails, DTS helpdesk will try to recover the instance within 8 hours. During the recovery process, operations such as restarting the instance and adjusting parameters may be performed.

      Note

      When parameters are adjusted, only the parameters of the DTS instance are modified. The parameters of the database are not modified. The parameters that may be modified include but are not limited to those described in Modify instance parameters.

    • When you migrate partitioned tables, you must include both the parent table and its child partitions as synchronization objects. Otherwise, data inconsistency may occur in the partitioned table.

      Note

      The parent table of a PostgreSQL partitioned table does not directly store data. All data is stored in the child partitions. The sync task must include the parent table and all its child partitions. Otherwise, data in the child partitions may not be synchronized, leading to data inconsistency between the source and destination.

    Special cases

    When the source instance is an RDS for PostgreSQL instance, do not change its endpoint or zone during migration. Otherwise, the migration fails.

  • Migration from a self-managed PostgreSQL database to an RDS for PostgreSQL instance


    Source database limits

    • Bandwidth requirements: The server where the source database resides must have sufficient outbound bandwidth. Otherwise, the data migration speed is affected.

    • Tables to be migrated must have primary keys or UNIQUE constraints, and the fields must be unique. Otherwise, data duplication may occur in the destination database.

      Note

      If the destination table is not created by DTS (that is, you did not select Schema Migration for Migration Types), you must ensure that the table has the same primary key or a non-empty UNIQUE constraint as the source table. Otherwise, duplicate data may appear in the destination database.

      The name of the database to be migrated cannot contain a hyphen (-), such as dts-testdata.

    • If you migrate objects at the table level and need to edit them, for example, by mapping column names, a single data migration task supports a maximum of 1,000 tables. If you exceed this limit, an error is reported after you submit the task. In this case, split the tables into multiple migration tasks or configure a task to migrate the entire database.

    • DTS does not support the migration of temporary tables, internal triggers, or some functions (C-language functions, and internal functions for PROCEDURE and FUNCTION) from the source database. DTS supports the migration of some custom types (COMPOSITE, ENUM, and RANGE types), as well as primary key, foreign key, UNIQUE, and CHECK constraints.

    • For incremental migration, write-ahead logging (WAL):

      • Must be enabled for logical decoding: set the wal_level parameter to logical.

      • For an incremental migration task, DTS requires that WAL logs in the source database are retained for more than 24 hours. For a task that includes both full migration and incremental migration, DTS requires that WAL logs are retained for at least 7 days. You can change the log retention period to more than 24 hours after the full migration is complete. Otherwise, the task may fail because DTS cannot obtain the WAL logs. In extreme cases, this may cause data inconsistency or data loss. Issues caused by a log retention period shorter than the one required by DTS are not covered by the DTS Service-Level Agreement (SLA).

    • Source database operation limits:

      • If a failover occurs in the self-managed PostgreSQL database, the migration fails.

      • During schema migration and full migration, do not perform DDL operations to change the schema of the database or tables. Otherwise, the data migration task fails.

      • Due to the limits of logical replication in the source database, if a single piece of incremental data to be migrated exceeds 256 MB during the migration, the DTS instance may fail and cannot be recovered. You must reconfigure the DTS instance.

    • If the source database has long-running transactions and the instance performs incremental migration, the write-ahead logs (WALs) before the transaction commit may not be cleared. This can cause the logs to accumulate and lead to insufficient disk space in the source database.

    • If you perform a major engine version upgrade on the source database while the DTS instance is running, the instance fails and cannot be recovered. You must reconfigure the DTS instance.

    Other limits

    • Latency between the primary and secondary nodes of the source database can cause data inconsistency. Therefore, use the primary node of the source database as the data source for migration.

    • A single data migration task can migrate only one database. To migrate multiple databases, you must configure a data migration task for each database.

    • DTS does not support the migration of TimescaleDB extension tables or tables with cross-schema inheritance.

    • If a table to be migrated contains a field of the SERIAL type, the source database automatically creates a Sequence for that field. Therefore, when you configure Source Objects, if you select Schema Migration for the Migration Types, we recommend that you also select Sequence or perform a full schema migration. Otherwise, the migration instance may fail.

    • If the DTS instance performs an incremental data migration task, you must run the ALTER TABLE schema.table REPLICA IDENTITY FULL; command on the tables to be migrated in the source database before you write data to them. This applies to the following two scenarios and ensures data consistency. While this command is being executed, we recommend that you do not perform operations that lock the tables. Otherwise, the command may block and the tables may remain locked. If you skip the relevant check in the precheck, DTS automatically runs this command during instance initialization.

      • When the instance runs for the first time.

      • When the migration object granularity is Schema, and a new table is created in the schema to be migrated or a table to be migrated is rebuilt using the RENAME command.

      Note
      • In the command, replace schema and table with the schema name and table name of the data to be migrated.

      • We recommend that you perform this operation during off-peak hours.

    • DTS validates data content but does not currently support the validation of metadata such as Sequences. You must validate this metadata yourself.

    • After you switch your business to the destination instance, new Sequences do not increment from the maximum value of the source Sequence. You must update the Sequence value in the destination database before the business switchover. For more information, see Update the Sequence value in the destination database.

    • DTS creates the following temporary tables in the source database to obtain DDL statements for incremental data, the structure of incremental tables, and heartbeat information. Do not delete these temporary tables during migration. Otherwise, the DTS task becomes abnormal. The temporary tables are automatically deleted after the DTS instance is released.

      public.dts_pg_class, public.dts_pg_attribute, public.dts_pg_type, public.dts_pg_enum, public.dts_postgres_heartbeat, public.dts_ddl_command, public.dts_args_session, and public.aliyun_dts_instance.

    • To ensure the accuracy of the displayed latency for incremental data migration, DTS adds a heartbeat table named dts_postgres_heartbeat to the source database.

    • During incremental data migration, DTS creates a replication slot with the prefix dts_sync_ in the source database to replicate data. Using this replication slot, DTS can obtain incremental logs from the source database within the last 15 minutes. When the data migration fails or the migration instance is released, DTS attempts to automatically clear this replication slot.

      Note
      • If you change the password of the source database account used by the task or delete the DTS IP address from the whitelist of the source database during data migration, the replication slot cannot be automatically cleared. In this case, you must manually clear the replication slot in the source database to prevent it from accumulating and occupying disk space, which can make the source database unavailable.

      • If a failover occurs in the source database, you must log on to the secondary database to manually clear the slot.

    • For a full or incremental migration task, if the tables to be migrated in the source database contain foreign keys, triggers, or event triggers, and the destination database account is a privileged account or has superuser permissions, DTS temporarily sets the `session_replication_role` parameter to `replica` at the session level. If the destination database account does not have these permissions, you must manually set the `session_replication_role` parameter to `replica` in the destination database. While `session_replication_role` is set to `replica`, cascade update or delete operations in the source database may cause data inconsistency. After the DTS migration task is released, you can change the `session_replication_role` parameter back to `origin`.

    • Before you migrate data, evaluate the performance of the source and destination databases. We also recommend that you perform data migration during off-peak hours. Otherwise, DTS consumes read and write resources on the source and destination databases during full data migration, which may increase the database load.

    • Because full data migration involves concurrent INSERT operations, fragmentation occurs in the destination database tables. As a result, the storage space used by the tables in the destination database is larger than that in the source database after the full migration is complete.

    • Confirm that the migration precision of DTS for columns of the FLOAT or DOUBLE data type meets your business requirements. DTS reads the values of these columns using ROUND(COLUMN,PRECISION). If the precision is not explicitly defined, DTS migrates FLOAT columns with a precision of 38 digits and DOUBLE columns with a precision of 308 digits.

    • DTS attempts to resume failed migration tasks within seven days. Therefore, before you switch your business to the destination instance, you must end or release the task. Alternatively, revoke the write permissions of the account that DTS uses to access the destination instance using the revoke command. This prevents the source data from overwriting the data in the destination instance if the task is automatically resumed.

    • If an instance fails, DTS helpdesk will try to recover the instance within 8 hours. During the recovery process, operations such as restarting the instance and adjusting parameters may be performed.

      Note

      When parameters are adjusted, only the parameters of the DTS instance are modified. The parameters of the database are not modified. The parameters that may be modified include but are not limited to those described in Modify instance parameters.

    • When you migrate partitioned tables, you must include both the parent table and its child partitions as synchronization objects. Otherwise, data inconsistency may occur in the partitioned table.

      Note

      The parent table of a PostgreSQL partitioned table does not directly store data. All data is stored in the child partitions. The sync task must include the parent table and all its child partitions. Otherwise, data in the child partitions may not be synchronized, leading to data inconsistency between the source and destination.

    Special cases

    • When the source instance is a self-managed PostgreSQL database, make sure that the values of the `max_wal_senders` and `max_replication_slots` parameters are both greater than the sum of the number of replication slots currently in use and the number of DTS instances to be created with this database as the source.
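      You can check the current values and slot usage in the source database as follows:

      ```sql
      -- Configured limits.
      SHOW max_wal_senders;
      SHOW max_replication_slots;

      -- Replication slots currently in use.
      SELECT count(*) FROM pg_replication_slots;
      ```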

    • If the source instance is Google Cloud Platform Cloud SQL for PostgreSQL, you must specify an account that has the cloudsqlsuperuser permission for the Database Account of the source database. When you select migration objects, you must select objects that this account has permission to manage, or grant this account the Owner permission for the objects to be migrated (for example, by running the GRANT <owner_of_objects_to_migrate> TO <source_db_account_for_task> command to allow this account to execute related operations as the owner of the objects).

      Note

      An account with `cloudsqlsuperuser` permission cannot manage data owned by another account with `cloudsqlsuperuser` permission.

PostgreSQL-to-MySQL migration

The following are the notes and limits:


Source database limits

  • Bandwidth requirements: The server where the source database resides must have sufficient outbound bandwidth. Otherwise, the data migration speed is affected.

  • Tables to be migrated must have primary keys or UNIQUE constraints, and the fields must be unique. Otherwise, data duplication may occur in the destination database.

    Note

    If the destination table is not created by DTS (that is, you did not select Schema Migration for Migration Types), you must ensure that the table has the same primary key or a non-empty UNIQUE constraint as the source table. Otherwise, duplicate data may appear in the destination database.

    The name of the database to be migrated cannot contain a hyphen (-), such as dts-testdata.

  • If you migrate objects at the table level and need to edit them, for example, by mapping column names, a single data migration task supports a maximum of 1,000 tables. If you exceed this limit, an error is reported after you submit the task. In this case, split the tables into multiple migration tasks or configure a task to migrate the entire database.

  • For incremental migration, write-ahead logging (WAL):

    • Must be enabled for logical decoding: set the wal_level parameter to logical.

    • For an incremental migration task, DTS requires that WAL logs in the source database are retained for more than 24 hours. For a task that includes both full migration and incremental migration, DTS requires that WAL logs are retained for at least 7 days. You can change the log retention period to more than 24 hours after the full migration is complete. Otherwise, the task may fail because DTS cannot obtain the WAL logs. In extreme cases, this may cause data inconsistency or data loss. Issues caused by a log retention period shorter than the one required by DTS are not covered by the DTS Service-Level Agreement (SLA).

  • Source database operation limits:

    • If a failover occurs in the self-managed PostgreSQL database, the migration fails.

    • To ensure the migration task runs as expected and to prevent logical replication interruptions caused by failover, RDS for PostgreSQL must support and enable Logical Replication Slot Failover. For information about how to set this up, see Logical Replication Slot Failover.

    • During full migration, do not perform DDL operations to change the schema of the database or tables. Otherwise, the data migration task fails.

    • Due to the limits of logical replication in the source database, if a single piece of incremental data to be migrated exceeds 256 MB during the migration, the DTS instance may fail and cannot be recovered. You must reconfigure the DTS instance.

    • If you perform only full data migration, do not write new data to the source database. Otherwise, data inconsistency occurs between the source and destination databases. To ensure real-time data consistency, select Full Data Migration and Incremental Data Migration.

  • If the source database has long-running transactions and the instance performs incremental migration, the write-ahead logs (WALs) before the transaction commit may not be cleared. This can cause the logs to accumulate and lead to insufficient disk space in the source database.

  • If you perform a major engine version upgrade on the source database while the DTS instance is running, the instance fails and cannot be recovered. You must reconfigure the DTS instance.

Other limits

  • DTS does not support the migration of DATATYPE, VIEW, PROCEDURE, FUNCTION, SEQUENCE, EXTENSION, OPERATOR, RULE, DEFAULT_CONSTRAINT, or TRIGGER.

  • If the data to be migrated contains characters that take up four bytes, such as rare characters or emoji, the destination databases and tables must use the utf8mb4 character set.

    Note

    If you use the schema migration feature of DTS, set the character_set_server parameter of the destination instance to utf8mb4.

  • If DDL statements fail to be executed in the destination database, the DTS task continues to run. You can view the DDL statements that fail to be executed in task logs. For more information about how to view task logs, see View task logs.

  • If you write column names that differ only in case to the same table in the destination MySQL database, the migration result may not meet your expectations because column names in MySQL are not case-sensitive.

  • After data migration is complete, that is, the Status of the instance changes to Completed, we recommend that you run the analyze table <table name> command to check whether data is written to the destination table. For example, if a high-availability (HA) switchover is triggered in the destination MySQL database, data may be written only to the memory. As a result, data loss occurs.
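    For example, for a hypothetical table named orders, run the following in the destination MySQL database:

    ```sql
    -- Run after the migration completes to confirm, as recommended above,
    -- that the migrated data has been persisted to the table.
    ANALYZE TABLE orders;
    ```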

  • A single data migration task can migrate only one database. To migrate multiple databases, you must configure a data migration task for each database.

  • DTS does not support the migration of TimescaleDB extension tables or tables with cross-schema inheritance.

  • If the DTS instance performs an incremental data migration task, you must run the ALTER TABLE schema.table REPLICA IDENTITY FULL; command on the tables to be migrated in the source database before you write data to them. This applies to the following two scenarios and ensures data consistency. While this command is being executed, we recommend that you do not perform operations that lock the tables. Otherwise, the command may block and the tables may remain locked. If you skip the relevant check in the precheck, DTS automatically runs this command during instance initialization.

    • When the instance runs for the first time.

    • When the migration object granularity is Schema, and a new table is created in the schema to be migrated or a table to be migrated is rebuilt using the RENAME command.

    Note
    • In the command, replace schema and table with the schema name and table name of the data to be migrated.

    • We recommend that you perform this operation during off-peak hours.

  • DTS creates the following temporary tables in the source database to obtain DDL statements for incremental data (these DDL statements are not written to the destination database), the structure of incremental tables, and heartbeat information. Do not delete these temporary tables during migration. Otherwise, the DTS task becomes abnormal. The temporary tables are automatically deleted after the DTS instance is released.

    public.dts_pg_class, public.dts_pg_attribute, public.dts_pg_type, public.dts_pg_enum, public.dts_postgres_heartbeat, public.dts_ddl_command, public.dts_args_session, and public.aliyun_dts_instance.

  • To ensure the accuracy of the displayed latency for incremental data migration, DTS adds a heartbeat table named dts_postgres_heartbeat to the source database.

  • During incremental data migration, DTS creates a replication slot with the prefix dts_sync_ in the source database to replicate data. Using this replication slot, DTS can obtain incremental logs from the source database within the last 15 minutes. When the data migration fails or the migration instance is released, DTS attempts to automatically clear this replication slot.

    Note
    • If you change the password of the source database account used by the task or delete the DTS IP address from the whitelist of the source database during data migration, the replication slot cannot be automatically cleared. In this case, you must manually clear the replication slot in the source database to prevent it from accumulating and occupying disk space, which can make the source database unavailable.

    • If a failover occurs in the source database, you must log on to the secondary database to manually clear the slot.

  • Before you migrate data, evaluate the performance of the source and destination databases. We also recommend that you perform data migration during off-peak hours. Otherwise, DTS consumes read and write resources on the source and destination databases during full data migration, which may increase the database load.

  • Because full data migration involves concurrent INSERT operations, fragmentation occurs in the destination database tables. As a result, the storage space used by the tables in the destination database is larger than that in the source database after the full migration is complete.

  • Confirm that the migration precision of DTS for columns of the FLOAT or DOUBLE data type meets your business requirements. DTS reads the values of these columns using ROUND(COLUMN,PRECISION). If the precision is not explicitly defined, DTS migrates FLOAT columns with a precision of 38 digits and DOUBLE columns with a precision of 308 digits.

  • DTS attempts to resume failed migration tasks within seven days. Therefore, before you switch your business to the destination instance, end or release the task. Alternatively, use the REVOKE statement to revoke the write permissions of the account that DTS uses to access the destination instance. This prevents the source data from overwriting the data in the destination instance if the task is automatically resumed.

  • If an instance fails, the DTS team attempts to recover it within 8 hours. During the recovery process, operations such as restarting the instance and adjusting parameters may be performed.

    Note

    When parameters are adjusted, only the parameters of the DTS instance are modified. The parameters of the database are not modified. The parameters that may be modified include but are not limited to those described in Modify instance parameters.

  • When you migrate partitioned tables, you must include both the parent table and all of its child partitions as migration objects. Otherwise, data inconsistency may occur in the partitioned table.

    Note

    The parent table of a PostgreSQL partitioned table does not directly store data. All data is stored in the child partitions. The migration task must include the parent table and all of its child partitions. Otherwise, data in the child partitions may not be migrated, leading to data inconsistency between the source and destination.

Special cases

  • When the source instance is an RDS for PostgreSQL instance, do not change its endpoint or zone during migration. Otherwise, the migration fails.

  • When the source instance is a self-managed PostgreSQL database, make sure that the values of the `max_wal_senders` and `max_replication_slots` parameters are both greater than the sum of the number of replication slots currently in use and the number of DTS instances to be created with this database as the source.
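The sizing rule above can be expressed as a quick check; the parameter values below are examples, not recommendations:

```python
# Both max_wal_senders and max_replication_slots must be strictly greater
# than (replication slots currently in use + DTS instances to be created
# with this database as the source).

def has_enough_capacity(param_value: int, slots_in_use: int,
                        planned_dts_instances: int) -> bool:
    return param_value > slots_in_use + planned_dts_instances

# Example: 3 slots in use and 2 planned DTS tasks.
# A setting of 5 is not enough (the value must be strictly greater):
ok_with_5 = has_enough_capacity(5, 3, 2)   # False
ok_with_6 = has_enough_capacity(6, 3, 2)   # True
```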

  • If the source instance is Google Cloud Platform Cloud SQL for PostgreSQL, you must specify an account that has the cloudsqlsuperuser permission as the Database Account of the source database. When you select migration objects, select only objects that this account has permission to manage, or grant the account the Owner permission on the objects to be migrated. For example, run the GRANT <owner_of_objects_to_migrate> TO <source_db_account_for_task> command so that the account can perform related operations as the owner of the objects.

    Note

    An account with `cloudsqlsuperuser` permission cannot manage data owned by another account with `cloudsqlsuperuser` permission.

  • When the destination instance is an RDS for MySQL instance, DTS automatically creates a database in the instance. If the name of the database to be migrated does not comply with the naming conventions of RDS for MySQL, you must create the database in the RDS for MySQL instance before you configure the migration task. For more information, see Manage databases.

PostgreSQL to PolarDB for PostgreSQL (Compatible with Oracle) migration

The following are the notes and limits:

Type

Description

Source database limits

  • Bandwidth requirements: The server where the source database resides must have sufficient outbound bandwidth. Otherwise, the data migration speed is affected.

  • Tables to be migrated must have primary keys or UNIQUE constraints, and the fields must be unique. Otherwise, data duplication may occur in the destination database.

    Note

    If the destination table is not created by DTS (that is, you did not select Schema Migration for Migration Types), you must ensure that the table has the same primary key or a non-empty UNIQUE constraint as the source table. Otherwise, duplicate data may appear in the destination database.

    The name of the database to be migrated cannot contain a hyphen (-), such as dts-testdata.

  • If you migrate objects at the table level and need to edit them, for example, by mapping column names, a single data migration task supports a maximum of 1,000 tables. If you exceed this limit, an error is reported after you submit the task. In this case, split the tables into multiple migration tasks or configure a task to migrate the entire database.

  • For incremental migration, the write-ahead log (WAL) of the source database must meet the following requirements:

    • Logging must be enabled, with the wal_level parameter set to logical.

    • For an incremental migration task, DTS requires that WAL logs in the source database are retained for more than 24 hours. For a task that includes both full migration and incremental migration, DTS requires that WAL logs are retained for at least 7 days. You can change the log retention period to more than 24 hours after the full migration is complete. Otherwise, the task may fail because DTS cannot obtain the WAL logs. In extreme cases, this may cause data inconsistency or data loss. Issues caused by a log retention period shorter than the one required by DTS are not covered by the DTS Service-Level Agreement (SLA).
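For a self-managed source, the WAL requirements above map to postgresql.conf settings similar to the following sketch. The retention value is an illustrative assumption that you must size for your own write volume; on PostgreSQL versions earlier than 13, the retention parameter is wal_keep_segments instead of wal_keep_size.

```ini
# postgresql.conf fragment (illustrative)
wal_level = logical        # required for DTS incremental migration

# Retain enough WAL for DTS: more than 24 hours, or at least 7 days if the
# task includes both full and incremental migration.
wal_keep_size = 16GB       # example value only; size for your workload
```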

  • Source database operation limits:

    • If a failover occurs in the self-managed PostgreSQL database, the migration fails.

    • Due to the limits of logical replication in the source database, if a single incremental record to be migrated exceeds 256 MB during the migration, the DTS instance may fail and cannot be recovered. In this case, you must reconfigure the DTS instance.

    • During full migration, do not perform DDL operations to change the schema of the database or tables. Otherwise, the data migration task fails.

    • If you perform only full data migration, do not write new data to the source database. Otherwise, data inconsistency occurs between the source and destination databases. To ensure real-time data consistency, select Full Data Migration and Incremental Data Migration.

  • If the source database has long-running transactions and the instance performs incremental migration, WAL logs that are generated before the transactions are committed cannot be recycled. The accumulated logs can occupy disk space and eventually cause insufficient disk space in the source database.

  • If you perform a major engine version upgrade on the source database while the DTS instance is running, the instance fails and cannot be recovered. You must reconfigure the DTS instance.

Other limits

  • For a full or incremental migration task, if the tables to be migrated in the source database contain foreign keys, triggers, or event triggers, and the destination database account is a privileged account, DTS temporarily sets the session_replication_role parameter to replica at the session level. If the destination database account does not have the required permission, you must manually set session_replication_role to replica in the destination database. While the parameter is set to replica during full or incremental migration, cascading update or delete operations in the source database may cause data inconsistency. After the DTS migration task is released, you can change session_replication_role back to origin.

  • If a table to be migrated contains a field of the SERIAL type, the source database automatically creates a Sequence for that field. Therefore, when you configure Source Objects, if you select Schema Migration for the Migration Types, we recommend that you also select Sequence or perform a full schema migration. Otherwise, the migration instance may fail.

  • If the DTS instance performs an incremental data migration task, run the ALTER TABLE schema.table REPLICA IDENTITY FULL; command on the tables to be migrated in the source database before data is written to them. This applies to the following two scenarios and ensures data consistency. Do not perform operations that lock the tables while the command is being executed; otherwise, the tables may remain locked. If you skip the relevant check in the precheck, DTS automatically runs this command during instance initialization.

    • When the instance runs for the first time.

    • When the migration object granularity is Schema, and a new table is created in the schema to be migrated or a table to be migrated is rebuilt using the RENAME command.

    Note
    • In the command, replace schema and table with the schema name and table name of the data to be migrated.

    • We recommend that you perform this operation during off-peak hours.
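A small helper can generate the required statement for each table; this sketch and the sample table names are illustrative, and only the ALTER TABLE ... REPLICA IDENTITY FULL; command itself comes from this topic:

```python
# Emit ALTER TABLE ... REPLICA IDENTITY FULL for every table to be
# migrated, quoting identifiers defensively. Run the output during
# off-peak hours, as recommended above.

def quote_ident(name: str) -> str:
    """Double-quote a PostgreSQL identifier, doubling embedded quotes."""
    return '"' + name.replace('"', '""') + '"'

def replica_identity_full(tables):
    """tables: iterable of (schema, table) pairs."""
    return [
        f"ALTER TABLE {quote_ident(s)}.{quote_ident(t)} REPLICA IDENTITY FULL;"
        for s, t in tables
    ]

stmts = replica_identity_full([("public", "orders"), ("sales", "items")])
```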

  • DTS creates the following temporary tables in the source database to obtain DDL statements for incremental data, the structure of incremental tables, and heartbeat information. Do not delete these temporary tables during migration. Otherwise, the DTS task becomes abnormal. The temporary tables are automatically deleted after the DTS instance is released.

    public.dts_pg_class, public.dts_pg_attribute, public.dts_pg_type, public.dts_pg_enum, public.dts_postgres_heartbeat, public.dts_ddl_command, public.dts_args_session, and public.aliyun_dts_instance.

  • To ensure the accuracy of the displayed latency for incremental data migration, DTS adds a heartbeat table named dts_postgres_heartbeat to the source database.

  • During incremental data migration, DTS creates a replication slot with the prefix dts_sync_ in the source database to replicate data. Using this replication slot, DTS can obtain incremental logs from the source database within the last 15 minutes. When the data migration fails or the migration instance is released, DTS attempts to automatically clear this replication slot.

    Note
    • If you change the password of the source database account used by the task or delete the DTS IP address from the whitelist of the source database during data migration, the replication slot cannot be automatically cleared. In this case, you must manually clear the replication slot in the source database. Otherwise, the slot prevents WAL logs from being recycled, and the accumulated logs can occupy disk space and make the source database unavailable.

    • If a failover occurs in the source database, you must log on to the secondary database to manually clear the slot.

  • A single data migration task can migrate only one database. To migrate multiple databases, you must configure a data migration task for each database.

  • DTS does not support the migration of TimescaleDB extension tables or tables with cross-schema inheritance.

  • Before you migrate data, evaluate the performance of the source and destination databases. We also recommend that you perform data migration during off-peak hours. Otherwise, DTS consumes read and write resources on the source and destination databases during full data migration, which may increase the database load.

  • Because full data migration involves concurrent INSERT operations, fragmentation occurs in the destination database tables. As a result, the storage space used by the tables in the destination database is larger than that in the source database after the full migration is complete.

  • Confirm that the migration precision of DTS for columns of the FLOAT or DOUBLE data type meets your business requirements. DTS reads the values of these columns by using ROUND(COLUMN, PRECISION). If the precision is not explicitly defined, DTS migrates FLOAT columns with a precision of 38 digits and DOUBLE columns with a precision of 308 digits.

  • DTS attempts to resume failed migration tasks within seven days. Therefore, before you switch your business to the destination instance, end or release the task. Alternatively, use the REVOKE statement to revoke the write permissions of the account that DTS uses to access the destination instance. This prevents the source data from overwriting the data in the destination instance if the task is automatically resumed.

  • DTS validates data content but does not currently support the validation of metadata such as Sequences. You must validate this metadata yourself.

  • After you switch your business to the destination instance, Sequences in the destination database do not continue to increment from the maximum values of the corresponding source Sequences. You must update the Sequence values in the destination database before the business switchover. For more information, see Update the Sequence value in the destination database.
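The catch-up step can be sketched as generating one setval() call per sequence; setval() is standard PostgreSQL, while the helper and the sequence name below are illustrative assumptions:

```python
# Before the business switchover, advance each destination sequence to at
# least the source sequence's current value so new inserts do not collide.

def setval_statement(sequence_name: str, source_last_value: int) -> str:
    """SQL to advance a destination sequence to the source's last value."""
    return f"SELECT setval('{sequence_name}', {source_last_value});"

stmt = setval_statement("public.orders_id_seq", 420000)
```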

  • If an instance fails, the DTS team attempts to recover it within 8 hours. During the recovery process, operations such as restarting the instance and adjusting parameters may be performed.

    Note

    When parameters are adjusted, only the parameters of the DTS instance are modified. The parameters of the database are not modified. The parameters that may be modified include but are not limited to those described in Modify instance parameters.

  • When you migrate partitioned tables, you must include both the parent table and all of its child partitions as migration objects. Otherwise, data inconsistency may occur in the partitioned table.

    Note

    The parent table of a PostgreSQL partitioned table does not directly store data. All data is stored in the child partitions. The migration task must include the parent table and all of its child partitions. Otherwise, data in the child partitions may not be migrated, leading to data inconsistency between the source and destination.

Special cases

  • When the source instance is an RDS for PostgreSQL instance, do not change its endpoint or zone during migration. Otherwise, the migration fails.

  • If the source instance is Google Cloud Platform Cloud SQL for PostgreSQL, you must specify an account that has the cloudsqlsuperuser permission as the Database Account of the source database. When you select migration objects, select only objects that this account has permission to manage, or grant the account the Owner permission on the objects to be migrated. For example, run the GRANT <owner_of_objects_to_migrate> TO <source_db_account_for_task> command so that the account can perform related operations as the owner of the objects.

    Note

    An account with `cloudsqlsuperuser` permission cannot manage data owned by another account with `cloudsqlsuperuser` permission.

  • When the source instance is a self-managed PostgreSQL database, make sure that the values of the `max_wal_senders` and `max_replication_slots` parameters are both greater than the sum of the number of replication slots currently in use and the number of DTS instances to be created with this database as the source.