Data Transmission Service: Limits and considerations for migrating data from a PostgreSQL source

Last Updated: Aug 25, 2025

This topic describes the limits and considerations for migrating data from a PostgreSQL source database, such as a self-managed instance or an RDS for PostgreSQL instance. Review this information before you configure a data migration task to ensure that the task runs as expected.

Overview of migration scenarios for a PostgreSQL source

Review the limits and considerations for your migration task based on the migration scenario:

Migration between PostgreSQL instances

  • Migration between RDS for PostgreSQL instances

    Source database limits

    • The tables to be migrated must have a primary key or a UNIQUE constraint, and the values in the constrained columns must be unique. Otherwise, duplicate data may appear in the destination database.

      Note

      If the destination table that receives data is not created by DTS (the Schema Migration option for Migration Types is cleared), ensure that the destination table has the same primary key or non-null UNIQUE constraint as the source table. Otherwise, duplicate data may appear in the destination database.

      The name of the database to migrate cannot contain a hyphen (-), such as dts-testdata.

    • If you migrate objects at the table level and need to edit them, such as using column name mapping, a single data migration task supports a maximum of 1,000 tables. If you exceed this limit, an error is reported when you submit the task. In this case, split the tables into multiple migration tasks or configure a task to migrate the entire database.

    • DTS does not support the migration of temporary tables, internal triggers, or some functions (C-language functions and internal functions for PROCEDURE and FUNCTION) from the source database. DTS supports the migration of some custom data types (COMPOSITE, ENUM, or RANGE) and constraints such as primary keys, foreign keys, UNIQUE constraints, and CHECK constraints.

    • For incremental migration, Write-Ahead Logging (WAL) has the following requirements:

      • WAL must be enabled. Set the wal_level parameter to logical.

      • For an incremental migration task, DTS requires that WAL logs in the source database be retained for more than 24 hours. For a task that includes both full and incremental migration, DTS requires that WAL logs be retained for at least 7 days; after the full migration is complete, you can reduce the retention period to more than 24 hours. If you set a WAL log retention period shorter than DTS requires, any resulting task failure caused by DTS being unable to obtain WAL logs, or, in extreme cases, any data inconsistency or loss, is not covered by the DTS Service-Level Agreement (SLA).
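
      As a quick sketch, the WAL prerequisites above can be verified with standard PostgreSQL commands. Parameter names vary by version; this is illustrative, not DTS-specific:

      ```sql
      -- Check that logical decoding is enabled; changing wal_level requires a restart.
      SHOW wal_level;        -- must return: logical

      -- Check how much WAL is retained. On PostgreSQL 13 and later the parameter is
      -- wal_keep_size; on PostgreSQL 12 and earlier it is wal_keep_segments.
      SHOW wal_keep_size;
      ```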

    • Source database operation limits:

      • During schema migration and full data migration, do not perform DDL operations to change the schema of the database or tables. Otherwise, the data migration task fails.

      • If you perform only full data migration, do not write new data to the source database. Otherwise, data inconsistency occurs between the source and destination databases. To maintain real-time data consistency, select schema migration, full data migration, and incremental data migration.

      • Due to the limits of logical replication in the source database, if a single piece of data to be migrated exceeds 256 MB after an incremental change, the migration instance may fail and cannot be recovered. You must reconfigure the migration instance.

    • If the source database has long-running transactions and the instance includes an incremental migration task, Write-Ahead Logging (WAL) data before the transaction commits may not be cleared. This can cause WAL data to accumulate and lead to insufficient disk space in the source database.
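
    To spot such transactions before WAL accumulates, you can query pg_stat_activity on the source database. This is a generic PostgreSQL sketch, not a DTS command:

    ```sql
    -- List the oldest open transactions on the source database.
    SELECT pid, usename, state, xact_start, now() - xact_start AS duration
    FROM pg_stat_activity
    WHERE xact_start IS NOT NULL
    ORDER BY xact_start
    LIMIT 10;
    ```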

    • If you perform a major version upgrade on the source database while the migration instance is running, the instance fails and cannot be recovered. You must reconfigure the migration instance.

    Other limits

    • To ensure the migration task runs as expected and to prevent logical replication from being interrupted by a primary/secondary failover, your RDS for PostgreSQL instance must support and enable Logical Replication Slot Failover. For more information about how to enable this feature, see Logical Replication Slot Failover.

    • A single data migration task can migrate only one database. To migrate multiple databases, configure a separate data migration task for each database.

    • DTS does not support the migration of TimescaleDB extension tables or tables with cross-schema inheritance.

    • If a table to be migrated contains a field of the SERIAL type, the source database automatically creates a Sequence for that field. Therefore, when you configure Source Objects, if you select Schema Migration for Migration Types, select Sequence or migrate the entire schema. Otherwise, the migration instance may fail.
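
    For illustration, the implicit Sequence behind a SERIAL column can be located as follows (the table and column names are placeholders):

    ```sql
    -- Creating a SERIAL column implicitly creates a sequence owned by the column.
    CREATE TABLE demo (id SERIAL PRIMARY KEY, val text);

    -- Look up the sequence that backs the column:
    SELECT pg_get_serial_sequence('demo', 'id');
    ```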

    • If the migration instance includes an incremental data migration task, you must run the ALTER TABLE schema.table REPLICA IDENTITY FULL; command on the tables to be migrated in the source database before writing data to them. This ensures data consistency. Do not lock the tables when you run this command. Otherwise, a deadlock may occur. If you skip the relevant check during the precheck, DTS automatically runs this command during instance initialization. This applies in the following scenarios:

      • When the instance runs for the first time.

      • When the migration object granularity is set to Schema, and a new table is created in the schema or an existing table is rebuilt using the RENAME command.

      Note
      • In the command, replace schema and table with the actual schema name and table name.

      • Run this command during off-peak hours.
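
      A hypothetical example for one table, with a query to verify the result (a relreplident value of 'f' means FULL; public.orders is a placeholder):

      ```sql
      -- Replace public.orders with the actual schema-qualified table name.
      ALTER TABLE public.orders REPLICA IDENTITY FULL;

      -- Verify the replica identity setting:
      SELECT relreplident FROM pg_class WHERE oid = 'public.orders'::regclass;
      ```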

    • DTS validates data content but does not support the validation of metadata such as Sequences. You must validate this metadata yourself.

    • After you switch your workloads to the destination instance, new Sequences do not increment from the maximum value of the source Sequence. You must update the Sequence value in the destination database before the switch. For more information, see Update the Sequence value in the destination database.
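
    A minimal sketch of the update, assuming a sequence named my_schema.my_table_id_seq (a placeholder name):

    ```sql
    -- In the source database, read the current sequence value:
    SELECT last_value FROM my_schema.my_table_id_seq;

    -- In the destination database, advance the sequence past that value
    -- (replace 100000 with the value read above, or a safely larger number):
    SELECT setval('my_schema.my_table_id_seq', 100000, true);
    ```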

    • DTS creates the following temporary tables in the source database to obtain DDL statements for incremental data, the structure of incremental tables, and heartbeat information. Do not delete these temporary tables during migration. Otherwise, the DTS task becomes abnormal. The temporary tables are automatically deleted after the DTS instance is released.

      public.dts_pg_class, public.dts_pg_attribute, public.dts_pg_type, public.dts_pg_enum, public.dts_postgres_heartbeat, public.dts_ddl_command, public.dts_args_session, and public.aliyun_dts_instance.

    • If a task includes full or incremental migration and the tables to be migrated contain foreign keys, triggers, or event triggers, DTS temporarily sets the session_replication_role parameter to replica at the session level. This happens if the destination database account is a privileged account or has superuser permissions. If the account does not have these permissions, you must manually set the parameter to replica in the destination database. During this period, if cascade update or delete operations occur in the source database, data inconsistency may occur. After the DTS migration task is released, you can change the session_replication_role parameter back to origin.

    • To ensure the accuracy of the displayed latency for incremental data migration, DTS adds a heartbeat table named dts_postgres_heartbeat to the source database.

    • During incremental data migration, DTS creates a replication slot with the prefix dts_sync_ in the source database to replicate data. DTS uses this replication slot to obtain incremental logs from the source database within the last 15 minutes. When the data migration fails or the migration instance is released, DTS attempts to clean up this replication slot.

      Note
      • If you change the password of the source database account used by the task or delete the DTS IP address from the whitelist during data migration, the replication slot cannot be automatically cleaned up. In this case, you must manually clean up the replication slot in the source database. This prevents it from accumulating and consuming disk space, which could make the source database unavailable.

      • If a primary/secondary failover occurs in the source database, you must log on to the secondary database to perform the cleanup.
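
      A sketch of the manual cleanup, using the dts_sync_ prefix described above. The slot name in the second statement is a placeholder; use a name returned by the first query:

      ```sql
      -- List replication slots created by DTS:
      SELECT slot_name, active, restart_lsn
      FROM pg_replication_slots
      WHERE slot_name LIKE 'dts_sync_%';

      -- Drop an orphaned slot. An active slot cannot be dropped until its
      -- consumer disconnects.
      SELECT pg_drop_replication_slot('dts_sync_example_slot');
      ```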

    • Before you migrate data, evaluate the performance of the source and destination databases. We recommend that you perform data migration during off-peak hours. Otherwise, the full data migration may consume read and write resources on both databases, which can increase the database workload.

    • Full data migration involves concurrent INSERT operations, which can cause table fragmentation in the destination database. As a result, the table storage space in the destination database will be larger than in the source database after the full migration is complete.

    • Confirm whether the migration precision for columns of the FLOAT or DOUBLE data type meets your business requirements. DTS reads the values of these columns using ROUND(COLUMN,PRECISION). If the precision is not explicitly defined, DTS uses a precision of 38 for FLOAT and 308 for DOUBLE.

    • DTS attempts to resume failed migration tasks within seven days. Therefore, before you switch your workloads to the destination instance, you must end or release the task. You can also run the revoke command to revoke the write permissions of the database account that DTS uses to access the destination instance. This prevents the source data from overwriting the data in the destination instance if the task is automatically resumed.
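
    For example, revoking write permissions in the destination database might look like the following. The account and schema names are placeholders:

    ```sql
    -- Revoke write privileges from the account that DTS uses for the destination.
    REVOKE INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public FROM dts_account;
    ```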

    • If a DTS task fails to run, DTS technical support will try to restore the task within 8 hours. During the restoration, the task may be restarted, and the parameters of the task may be modified.

      Note

      Only the parameters of the DTS task may be modified. The parameters of databases are not modified. The parameters that may be modified include but are not limited to the parameters in the "Modify instance parameters" section of the Modify the parameters of a DTS instance topic.

    Special cases

    If the source instance is an RDS for PostgreSQL instance, do not change its endpoint or zone during migration. Otherwise, the migration fails.

  • Migration from a self-managed PostgreSQL instance to an RDS for PostgreSQL instance

    Source database limits

    • Bandwidth requirement: The server that hosts the source database must have sufficient outbound bandwidth. Otherwise, the data migration speed is affected.

    • The tables to be migrated must have a primary key or a UNIQUE constraint, and the values in the constrained columns must be unique. Otherwise, duplicate data may appear in the destination database.

      Note

      If the destination table that receives data is not created by DTS (the Schema Migration option for Migration Types is cleared), ensure that the destination table has the same primary key or non-null UNIQUE constraint as the source table. Otherwise, duplicate data may appear in the destination database.

      The name of the database to migrate cannot contain a hyphen (-), such as dts-testdata.

    • If you migrate objects at the table level and need to edit them, such as using column name mapping, a single data migration task supports a maximum of 1,000 tables. If you exceed this limit, an error is reported when you submit the task. In this case, split the tables into multiple migration tasks or configure a task to migrate the entire database.

    • DTS does not support the migration of temporary tables, internal triggers, or some functions (C-language functions and internal functions for PROCEDURE and FUNCTION) from the source database. DTS supports the migration of some custom data types (COMPOSITE, ENUM, or RANGE) and constraints such as primary keys, foreign keys, UNIQUE constraints, and CHECK constraints.

    • For incremental migration, Write-Ahead Logging (WAL) has the following requirements:

      • WAL must be enabled. Set the wal_level parameter to logical.

      • For an incremental migration task, DTS requires that WAL logs in the source database be retained for more than 24 hours. For a task that includes both full and incremental migration, DTS requires that WAL logs be retained for at least 7 days; after the full migration is complete, you can reduce the retention period to more than 24 hours. If you set a WAL log retention period shorter than DTS requires, any resulting task failure caused by DTS being unable to obtain WAL logs, or, in extreme cases, any data inconsistency or loss, is not covered by the DTS Service-Level Agreement (SLA).

    • Source database operation limits:

      • If a primary/secondary failover occurs on the self-managed PostgreSQL instance, the migration fails.

      • During schema migration and full data migration, do not perform DDL operations to change the schema of the database or tables. Otherwise, the data migration task fails.

      • Due to the limits of logical replication in the source database, if a single piece of data to be migrated exceeds 256 MB after an incremental change, the migration instance may fail and cannot be recovered. You must reconfigure the migration instance.

    • If the source database has long-running transactions and the instance includes an incremental migration task, Write-Ahead Logging (WAL) data before the transaction commits may not be cleared. This can cause WAL data to accumulate and lead to insufficient disk space in the source database.

    • If you perform a major version upgrade on the source database while the migration instance is running, the instance fails and cannot be recovered. You must reconfigure the migration instance.

    Other limits

    • Latency between the primary and secondary nodes of the source database can cause data inconsistency. Use the primary node of the source database as the data source for migration.

    • A single data migration task can migrate only one database. To migrate multiple databases, configure a separate data migration task for each database.

    • DTS does not support the migration of TimescaleDB extension tables or tables with cross-schema inheritance.

    • If a table to be migrated contains a field of the SERIAL type, the source database automatically creates a Sequence for that field. Therefore, when you configure Source Objects, if you select Schema Migration for Migration Types, select Sequence or migrate the entire schema. Otherwise, the migration instance may fail.

    • If the migration instance includes an incremental data migration task, you must run the ALTER TABLE schema.table REPLICA IDENTITY FULL; command on the tables to be migrated in the source database before writing data to them. This ensures data consistency. Do not lock the tables when you run this command. Otherwise, a deadlock may occur. If you skip the relevant check during the precheck, DTS automatically runs this command during instance initialization. This applies in the following scenarios:

      • When the instance runs for the first time.

      • When the migration object granularity is set to Schema, and a new table is created in the schema or an existing table is rebuilt using the RENAME command.

      Note
      • In the command, replace schema and table with the actual schema name and table name.

      • Run this command during off-peak hours.

    • DTS validates data content but does not support the validation of metadata such as Sequences. You must validate this metadata yourself.

    • After you switch your workloads to the destination instance, new Sequences do not increment from the maximum value of the source Sequence. You must update the Sequence value in the destination database before the switch. For more information, see Update the Sequence value in the destination database.

    • DTS creates the following temporary tables in the source database to obtain DDL statements for incremental data, the structure of incremental tables, and heartbeat information. Do not delete these temporary tables during migration. Otherwise, the DTS task becomes abnormal. The temporary tables are automatically deleted after the DTS instance is released.

      public.dts_pg_class, public.dts_pg_attribute, public.dts_pg_type, public.dts_pg_enum, public.dts_postgres_heartbeat, public.dts_ddl_command, public.dts_args_session, and public.aliyun_dts_instance.

    • To ensure the accuracy of the displayed latency for incremental data migration, DTS adds a heartbeat table named dts_postgres_heartbeat to the source database.

    • During incremental data migration, DTS creates a replication slot with the prefix dts_sync_ in the source database to replicate data. DTS uses this replication slot to obtain incremental logs from the source database within the last 15 minutes. When the data migration fails or the migration instance is released, DTS attempts to clean up this replication slot.

      Note
      • If you change the password of the source database account used by the task or delete the DTS IP address from the whitelist during data migration, the replication slot cannot be automatically cleaned up. In this case, you must manually clean up the replication slot in the source database. This prevents it from accumulating and consuming disk space, which could make the source database unavailable.

      • If a primary/secondary failover occurs in the source database, you must log on to the secondary database to perform the cleanup.

    • If a task includes full or incremental migration and the tables to be migrated contain foreign keys, triggers, or event triggers, DTS temporarily sets the session_replication_role parameter to replica at the session level. This happens if the destination database account is a privileged account or has superuser permissions. If the account does not have these permissions, you must manually set the parameter to replica in the destination database. During this period, if cascade update or delete operations occur in the source database, data inconsistency may occur. After the DTS migration task is released, you can change the session_replication_role parameter back to origin.

    • Before you migrate data, evaluate the performance of the source and destination databases. We recommend that you perform data migration during off-peak hours. Otherwise, the full data migration may consume read and write resources on both databases, which can increase the database workload.

    • Full data migration involves concurrent INSERT operations, which can cause table fragmentation in the destination database. As a result, the table storage space in the destination database will be larger than in the source database after the full migration is complete.

    • Confirm whether the migration precision for columns of the FLOAT or DOUBLE data type meets your business requirements. DTS reads the values of these columns using ROUND(COLUMN,PRECISION). If the precision is not explicitly defined, DTS uses a precision of 38 for FLOAT and 308 for DOUBLE.

    • DTS attempts to resume failed migration tasks within seven days. Therefore, before you switch your workloads to the destination instance, you must end or release the task. You can also run the revoke command to revoke the write permissions of the database account that DTS uses to access the destination instance. This prevents the source data from overwriting the data in the destination instance if the task is automatically resumed.

    • If a DTS task fails to run, DTS technical support will try to restore the task within 8 hours. During the restoration, the task may be restarted, and the parameters of the task may be modified.

      Note

      Only the parameters of the DTS task may be modified. The parameters of databases are not modified. The parameters that may be modified include but are not limited to the parameters in the "Modify instance parameters" section of the Modify the parameters of a DTS instance topic.

    Special cases

    • If the source instance is a self-managed PostgreSQL instance, ensure that the values of the max_wal_senders and max_replication_slots parameters are greater than the total number of used replication slots plus the number of DTS instances to be created with this source.
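
    This check can be performed on the self-managed source with a sketch like the following:

    ```sql
    -- Current limits:
    SHOW max_wal_senders;
    SHOW max_replication_slots;

    -- Replication slots already in use; both limits must exceed this count
    -- plus the number of DTS instances you plan to create.
    SELECT count(*) AS used_slots FROM pg_replication_slots;
    ```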

    • If the source instance is a Google Cloud Platform Cloud SQL for PostgreSQL instance, enter an account with cloudsqlsuperuser permissions for Database Account. When you select migration objects, choose objects that this account has permission to manage. You can also grant the Owner permission for the objects to this account. For example, run the GRANT <owner_of_object_to_migrate> TO <source_database_account_for_task> command to allow this account to perform operations as the owner of the objects.

      Note

      An account with cloudsqlsuperuser permissions cannot manage data owned by another account with cloudsqlsuperuser permissions.

Migration from PostgreSQL to MySQL

The following are the limits and considerations:

Source database limits

  • Bandwidth requirement: The server that hosts the source database must have sufficient outbound bandwidth. Otherwise, the data migration speed is affected.

  • The tables to be migrated must have a primary key or a UNIQUE constraint, and the values in the constrained columns must be unique. Otherwise, duplicate data may appear in the destination database.

    Note

    If the destination table that receives data is not created by DTS (the Schema Migration option for Migration Types is cleared), ensure that the destination table has the same primary key or non-null UNIQUE constraint as the source table. Otherwise, duplicate data may appear in the destination database.

    The name of the database to migrate cannot contain a hyphen (-), such as dts-testdata.

  • If you migrate objects at the table level and need to edit them, such as using column name mapping, a single data migration task supports a maximum of 1,000 tables. If you exceed this limit, an error is reported when you submit the task. In this case, split the tables into multiple migration tasks or configure a task to migrate the entire database.

  • For incremental migration, Write-Ahead Logging (WAL) has the following requirements:

    • WAL must be enabled. Set the wal_level parameter to logical.

    • For an incremental migration task, DTS requires that WAL logs in the source database be retained for more than 24 hours. For a task that includes both full and incremental migration, DTS requires that WAL logs be retained for at least 7 days; after the full migration is complete, you can reduce the retention period to more than 24 hours. If you set a WAL log retention period shorter than DTS requires, any resulting task failure caused by DTS being unable to obtain WAL logs, or, in extreme cases, any data inconsistency or loss, is not covered by the DTS Service-Level Agreement (SLA).

  • Source database operation limits:

    • If a primary/secondary failover occurs on the self-managed PostgreSQL instance, the migration fails.

    • To ensure the migration task runs as expected and to prevent logical replication from being interrupted by a primary/secondary failover, your RDS for PostgreSQL instance must support and enable Logical Replication Slot Failover. For more information about how to enable this feature, see Logical Replication Slot Failover.

    • During full data migration, do not perform DDL operations to change the schema of the database or tables. Otherwise, the data migration task fails.

    • Due to the limits of logical replication in the source database, if a single piece of data to be migrated exceeds 256 MB after an incremental change, the migration instance may fail and cannot be recovered. You must reconfigure the migration instance.

    • If you perform only full data migration, do not write new data to the source database. Otherwise, data inconsistency occurs between the source and destination databases. To maintain real-time data consistency, select full data migration and incremental data migration.

  • If the source database has long-running transactions and the instance includes an incremental migration task, Write-Ahead Logging (WAL) data before the transaction commits may not be cleared. This can cause WAL data to accumulate and lead to insufficient disk space in the source database.

  • If you perform a major version upgrade on the source database while the migration instance is running, the instance fails and cannot be recovered. You must reconfigure the migration instance.

Other limits

  • The migration of DATATYPE, VIEW, PROCEDURE, FUNCTION, SEQUENCE, EXTENSION, OPERATOR, RULE, DEFAULT_CONSTRAINT, and TRIGGER objects is not supported.

  • If the data to be migrated contains characters that occupy four bytes, such as rare characters or emojis, the destination databases and tables that receive the data must use the utf8mb4 character set.

    Note

    If you use the schema migration feature of DTS, set the character_set_server parameter of the destination instance to utf8mb4.
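
    For example, a destination database or table can be created with the required character set as follows. The names are placeholders, and this is MySQL syntax:

    ```sql
    CREATE DATABASE dts_dest CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci;

    -- Or for a single table:
    CREATE TABLE dts_dest.demo (id INT PRIMARY KEY, val TEXT)
      CHARACTER SET utf8mb4;
    ```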

  • If DDL statements fail to be executed in the destination database, the DTS task continues to run. You can view the DDL statements that fail to be executed in task logs. For more information about how to view task logs, see View task logs.

  • If you write column names that differ only in capitalization to the same table in the destination MySQL database, the data migration result may not meet your expectations because the column names in MySQL databases are not case-sensitive.

  • After data migration is complete, that is, after the Status of the instance changes to Completed, we recommend that you run the analyze table <table name> command to check whether data is written to the destination table. For example, if a high-availability (HA) switchover is triggered in the destination MySQL database, data may be written only to memory, which can result in data loss.

  • A single data migration task can migrate only one database. To migrate multiple databases, configure a separate data migration task for each database.

  • DTS does not support the migration of TimescaleDB extension tables or tables with cross-schema inheritance.

  • If the migration instance includes an incremental data migration task, you must run the ALTER TABLE schema.table REPLICA IDENTITY FULL; command on the tables to be migrated in the source database before writing data to them. This ensures data consistency. Do not lock the tables when you run this command. Otherwise, a deadlock may occur. If you skip the relevant check during the precheck, DTS automatically runs this command during instance initialization. This applies in the following scenarios:

    • When the instance runs for the first time.

    • When the migration object granularity is set to Schema, and a new table is created in the schema or an existing table is rebuilt using the RENAME command.

    Note
    • In the command, replace schema and table with the actual schema name and table name.

    • Run this command during off-peak hours.

  • DTS creates the following temporary tables in the source database to obtain DDL statements for incremental data (these DDL statements are not written to the destination database), the structure of incremental tables, and heartbeat information. Do not delete these temporary tables during migration. Otherwise, the DTS task becomes abnormal. The temporary tables are automatically deleted after the DTS instance is released.

    public.dts_pg_class, public.dts_pg_attribute, public.dts_pg_type, public.dts_pg_enum, public.dts_postgres_heartbeat, public.dts_ddl_command, public.dts_args_session, and public.aliyun_dts_instance.

  • To ensure the accuracy of the displayed latency for incremental data migration, DTS adds a heartbeat table named dts_postgres_heartbeat to the source database.

  • During incremental data migration, DTS creates a replication slot with the prefix dts_sync_ in the source database to replicate data. DTS uses this replication slot to obtain incremental logs from the source database within the last 15 minutes. When the data migration fails or the migration instance is released, DTS attempts to clean up this replication slot.

    Note
    • If you change the password of the source database account used by the task or delete the DTS IP address from the whitelist during data migration, the replication slot cannot be automatically cleaned up. In this case, you must manually clean up the replication slot in the source database. This prevents it from accumulating and consuming disk space, which could make the source database unavailable.

    • If a primary/secondary failover occurs in the source database, you must log on to the secondary database to perform the cleanup.

  • Before you migrate data, evaluate the performance of the source and destination databases. We recommend that you perform data migration during off-peak hours. Otherwise, the full data migration may consume read and write resources on both databases, which can increase the database workload.

  • Full data migration involves concurrent INSERT operations, which can cause table fragmentation in the destination database. As a result, the table storage space in the destination database will be larger than in the source database after the full migration is complete.

  • Confirm whether the migration precision for columns of the FLOAT or DOUBLE data type meets your business requirements. DTS reads the values of these columns using ROUND(COLUMN,PRECISION). If the precision is not explicitly defined, DTS uses a precision of 38 for FLOAT and 308 for DOUBLE.

  • DTS attempts to resume failed migration tasks within seven days. Therefore, before you switch your workloads to the destination instance, you must end or release the task. You can also run the revoke command to revoke the write permissions of the database account that DTS uses to access the destination instance. This prevents the source data from overwriting the data in the destination instance if the task is automatically resumed.

  • If a DTS task fails to run, DTS technical support will try to restore the task within 8 hours. During the restoration, the task may be restarted, and the parameters of the task may be modified.

    Note

    Only the parameters of the DTS task may be modified. The parameters of databases are not modified. The parameters that may be modified include but are not limited to the parameters in the "Modify instance parameters" section of the Modify the parameters of a DTS instance topic.

Special cases

  • If the source instance is an RDS for PostgreSQL instance, do not change its endpoint or zone during migration. Otherwise, the migration fails.

  • If the source instance is a self-managed PostgreSQL instance, ensure that the values of the max_wal_senders and max_replication_slots parameters are greater than the total number of used replication slots plus the number of DTS instances to be created with this source.

  • If the source instance is a Google Cloud Platform Cloud SQL for PostgreSQL instance, enter an account with cloudsqlsuperuser permissions for Database Account. When you select migration objects, choose objects that this account has permission to manage. You can also grant the Owner permission for the objects to this account. For example, run the GRANT <owner_of_object_to_migrate> TO <source_database_account_for_task> command to allow this account to perform operations as the owner of the objects.

    Note

    An account with cloudsqlsuperuser permissions cannot manage data owned by another account with cloudsqlsuperuser permissions.

  • If the destination instance is an RDS for MySQL instance, DTS automatically creates a database in the instance. If the name of the database to be migrated does not comply with the naming conventions of RDS for MySQL, you must create the database in the RDS for MySQL instance before you configure the migration task. For more information, see Manage databases.

Migration from PostgreSQL to PolarDB for PostgreSQL (Compatible with Oracle)

The following are the limits and considerations:

Source database limits

  • Bandwidth requirement: The server that hosts the source database must have sufficient outbound bandwidth. Otherwise, the data migration speed is affected.

  • The tables to migrate must have a primary key or a UNIQUE constraint, and the fields must be unique. Otherwise, duplicate data may appear in the destination database.

    Note

    If the destination table that receives data is not created by DTS (the Schema Migration option for Migration Types is cleared), ensure that the destination table has the same primary key or non-null UNIQUE constraint as the source table. Otherwise, duplicate data may appear in the destination database.

  • The name of the database to migrate cannot contain a hyphen (-), such as dts-testdata.

  • If you migrate objects at the table level and need to edit them, such as using column name mapping, a single data migration task supports a maximum of 1,000 tables. If you exceed this limit, an error is reported when you submit the task. In this case, split the tables into multiple migration tasks or configure a task to migrate the entire database.

  • For incremental migration, Write-Ahead Logging (WAL) has the following requirements:

    • WAL must be enabled. Set the wal_level parameter to logical.

    • For an incremental migration task, DTS requires that WAL logs in the source database are retained for more than 24 hours. For a task that includes both full and incremental migration, DTS requires that WAL logs are retained for at least 7 days. After the full migration is complete, you can reduce the retention period, but keep it longer than 24 hours. If the task fails because DTS cannot obtain WAL logs, or in extreme cases if data inconsistency or loss occurs, the failure is not covered by the DTS Service-Level Agreement (SLA), because the WAL log retention period you set is shorter than the period that DTS requires.
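
    The wal_level requirement above can be checked and, on a self-managed instance, configured with statements along these lines (a restart is required for the change to take effect):

    ```sql
    -- Incremental migration requires the value 'logical'.
    SHOW wal_level;

    -- On a self-managed instance, enable logical decoding (restart required).
    ALTER SYSTEM SET wal_level = logical;
    ```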

  • Source database operation limits:

    • If a primary/secondary failover occurs on the self-managed PostgreSQL instance, the migration fails.

    • Due to the limits of logical replication in the source database, if a single incremental change to be migrated exceeds 256 MB, the migration instance may fail and cannot be recovered. In this case, you must reconfigure the migration instance.

    • During full data migration, do not perform DDL operations to change the schema of the database or tables. Otherwise, the data migration task fails.

    • If you perform only full data migration, do not write new data to the source database. Otherwise, data inconsistency occurs between the source and destination databases. To maintain real-time data consistency, select full data migration and incremental data migration.

  • If the source database has long-running transactions and the instance includes an incremental migration task, Write-Ahead Logging (WAL) data before the transaction commits may not be cleared. This can cause WAL data to accumulate and lead to insufficient disk space in the source database.
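
    To spot long-running transactions that can block WAL cleanup, you can run a query along these lines on the source database. The one-hour threshold is an arbitrary example; adjust it to your workload.

    ```sql
    -- List sessions whose current transaction has been open for over an hour.
    SELECT pid, usename, state, xact_start, now() - xact_start AS duration
    FROM pg_stat_activity
    WHERE xact_start IS NOT NULL
      AND now() - xact_start > interval '1 hour'
    ORDER BY xact_start;
    ```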

  • If you perform a major version upgrade on the source database while the migration instance is running, the instance fails and cannot be recovered. You must reconfigure the migration instance.

Other limits

  • For full or incremental migration tasks, if the tables to be migrated from the source database contain foreign keys, triggers, or event triggers, and the destination database account is a privileged account, DTS temporarily sets the session_replication_role parameter to replica at the session level. If the destination database account does not have this permission, you must manually set session_replication_role to replica in the destination database. While the parameter is set to replica, cascade update or delete operations in the source database may cause data inconsistency. After the DTS migration task is released, change session_replication_role back to origin.
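
    If you need to set the parameter manually in the destination database, the statements look like the following sketch. Note that replica suppresses ordinary triggers and foreign-key enforcement for the session, and that a session-level SET lasts only for the current connection.

    ```sql
    -- Before migration: let DTS write rows without firing ordinary
    -- triggers or foreign-key checks.
    SET session_replication_role = replica;

    -- After the DTS task is released: restore the default behavior.
    SET session_replication_role = origin;
    ```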

  • If a table to be migrated contains a field of the SERIAL type, the source database automatically creates a Sequence for that field. Therefore, when you configure Source Objects, if you select Schema Migration for Migration Types, select Sequence or migrate the entire schema. Otherwise, the migration instance may fail.

  • If the migration instance includes an incremental data migration task, you must run the ALTER TABLE schema.table REPLICA IDENTITY FULL; command on the tables to be migrated in the source database before writing data to them. This ensures data consistency. Do not lock the tables when you run this command. Otherwise, a deadlock may occur. If you skip the relevant check during the precheck, DTS automatically runs this command during instance initialization. This applies in the following scenarios:

    • When the instance runs for the first time.

    • When the migration object granularity is set to Schema, and a new table is created in the schema or an existing table is rebuilt using the RENAME command.

    Note
    • In the command, replace schema and table with the actual schema name and table name.

    • Run this command during off-peak hours.
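
    You can confirm whether a table already uses full replica identity before running the command. The table name below is a placeholder; in pg_class, relreplident is 'f' for FULL and 'd' for the default (primary key).

    ```sql
    -- Check the current replica identity ('f' means REPLICA IDENTITY FULL).
    SELECT relname, relreplident
    FROM pg_class
    WHERE oid = 'public.mytable'::regclass;

    -- Set full replica identity if it is not set yet (placeholder table name).
    ALTER TABLE public.mytable REPLICA IDENTITY FULL;
    ```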

  • DTS creates the following temporary tables in the source database to obtain DDL statements for incremental data, the structure of incremental tables, and heartbeat information. Do not delete these temporary tables during migration. Otherwise, the DTS task becomes abnormal. The temporary tables are automatically deleted after the DTS instance is released.

    public.dts_pg_class, public.dts_pg_attribute, public.dts_pg_type, public.dts_pg_enum, public.dts_postgres_heartbeat, public.dts_ddl_command, public.dts_args_session, and public.aliyun_dts_instance.

  • To ensure the accuracy of the displayed latency for incremental data migration, DTS adds a heartbeat table named dts_postgres_heartbeat to the source database.

  • During incremental data migration, DTS creates a replication slot with the prefix dts_sync_ in the source database to replicate data. DTS uses this replication slot to obtain incremental logs from the source database within the last 15 minutes. When the data migration fails or the migration instance is released, DTS attempts to clean up this replication slot.

    Note
    • If you change the password of the source database account used by the task or delete the DTS IP address from the whitelist during data migration, the replication slot cannot be automatically cleaned up. In this case, you must manually clean up the replication slot in the source database. This prevents it from accumulating and consuming disk space, which could make the source database unavailable.

    • If a primary/secondary failover occurs in the source database, you must log on to the secondary database to perform the cleanup.
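
    If the slot must be cleaned up manually, you can find and drop it with statements like the following. The slot name is illustrative, and a slot must be inactive before it can be dropped.

    ```sql
    -- Find leftover DTS replication slots and how much WAL each retains.
    SELECT slot_name, active,
           pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn))
             AS retained_wal
    FROM pg_replication_slots
    WHERE slot_name LIKE 'dts_sync_%';

    -- Drop a leftover slot by name (example name).
    SELECT pg_drop_replication_slot('dts_sync_example');
    ```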

  • A single data migration task can migrate only one database. To migrate multiple databases, configure a separate data migration task for each database.

  • DTS does not support the migration of TimescaleDB extension tables or tables with cross-schema inheritance.

  • Before you migrate data, evaluate the performance of the source and destination databases. We recommend that you perform data migration during off-peak hours. Otherwise, the full data migration may consume read and write resources on both databases, which can increase the database workload.

  • Full data migration involves concurrent INSERT operations, which can cause table fragmentation in the destination database. As a result, the table storage space in the destination database will be larger than in the source database after the full migration is complete.

  • Confirm whether the migration precision for columns of the FLOAT or DOUBLE data type meets your business requirements. DTS reads the values of these columns using ROUND(COLUMN,PRECISION). If the precision is not explicitly defined, DTS uses a precision of 38 for FLOAT and 308 for DOUBLE.

  • DTS attempts to resume failed migration tasks within seven days. Therefore, before you switch your workloads to the destination instance, end or release the task. Alternatively, run the REVOKE statement to revoke the write permissions of the database account that DTS uses to access the destination instance. This prevents the source data from overwriting the data in the destination instance if the task is automatically resumed.
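
    Revoking write access from the DTS account could look like the following, where dts_user and the schema name are placeholders:

    ```sql
    -- Remove write privileges from the account DTS uses on the destination.
    REVOKE INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public FROM dts_user;
    ```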

  • DTS validates data content but does not support the validation of metadata such as Sequences. You must validate this metadata yourself.

  • After you switch your workloads to the destination instance, new Sequences do not increment from the maximum value of the source Sequence. You must update the Sequence value in the destination database before the switch. For more information, see Update the Sequence value in the destination database.
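
    A common way to realign a destination Sequence before the switch is to call setval with the current maximum of the backing column. The table, column, and sequence names below are placeholders.

    ```sql
    -- Make the destination sequence continue after the largest migrated id.
    SELECT setval('orders_id_seq', (SELECT COALESCE(max(id), 1) FROM orders));
    ```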

  • If a DTS task fails to run, DTS technical support will try to restore the task within 8 hours. During the restoration, the task may be restarted, and the parameters of the task may be modified.

    Note

    Only the parameters of the DTS task may be modified. The parameters of databases are not modified. The parameters that may be modified include but are not limited to the parameters in the "Modify instance parameters" section of the Modify the parameters of a DTS instance topic.

Special cases

  • If the source instance is an RDS for PostgreSQL instance, do not change its endpoint or zone during migration. Otherwise, the migration fails.

  • If the source instance is a Google Cloud Platform Cloud SQL for PostgreSQL instance, enter an account with cloudsqlsuperuser permissions for Database Account. When you select migration objects, choose objects that this account has permission to manage. You can also grant the Owner permission for the objects to this account. For example, run the GRANT <owner_of_object_to_migrate> TO <source_database_account_for_task> command to allow this account to perform operations as the owner of the objects.

    Note

    An account with cloudsqlsuperuser permissions cannot manage data owned by another account with cloudsqlsuperuser permissions.

  • If the source instance is a self-managed PostgreSQL instance, ensure that the values of the max_wal_senders and max_replication_slots parameters are greater than the total number of used replication slots plus the number of DTS instances to be created with this source.