Type | Description |
Source database limits | - Bandwidth: the server that hosts the source database must have sufficient outbound bandwidth. Otherwise, the data migration speed is affected.
- The tables to be migrated must have a PRIMARY KEY or UNIQUE constraint, and the constrained columns must be unique. Otherwise, duplicate data may appear in the destination database.
- If you migrate data at the table level and need to edit the tables, for example, to map column names, a single data migration task supports a maximum of 1,000 tables. If you exceed this limit, an error is reported when you submit the task. In this case, split the tables across multiple migration tasks or configure one task to migrate the entire database.
- If you perform incremental migration, note the following about the write-ahead log (WAL):
  - WAL must be enabled.
  - For an incremental migration task, DTS requires that the WAL of the source database is retained for more than 24 hours. For a task that includes both full migration and incremental migration, DTS requires that the WAL is retained for at least 7 days. After the full migration is complete, you can reduce the log retention period to more than 24 hours.
  - If the WAL is not retained for the required period, the DTS task may fail because DTS cannot obtain the WAL. In extreme cases, data inconsistency or data loss may occur. Issues caused by a WAL retention period shorter than the required period are not covered by the DTS Service-Level Agreement (SLA).
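As a minimal sketch, the WAL prerequisites above can be checked with standard PostgreSQL statements. The parameter and catalog names are standard PostgreSQL; their availability on PolarDB for PostgreSQL (Compatible with Oracle) may differ:

```sql
-- Check that logical replication is available; the value must be 'logical'.
SHOW wal_level;

-- List existing replication slots. Inactive slots retain WAL and help you
-- judge whether logs are kept long enough for incremental migration.
SELECT slot_name, active, restart_lsn FROM pg_replication_slots;
```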
- Limits on operations on the source database:
  - During schema migration and full data migration, do not perform DDL operations that change the structure of the database or tables. Otherwise, the data migration task fails.
  - If you perform only full data migration, do not write new data to the source instance. Otherwise, data becomes inconsistent between the source and destination databases. To maintain real-time data consistency, select schema migration, full data migration, and incremental data migration.
- To ensure that the migration task runs as expected and that logical replication is not interrupted by a failover, the source PolarDB for PostgreSQL (Compatible with Oracle) cluster must support and enable Logical Replication Slot Failover.
Note If the source PolarDB for PostgreSQL (Compatible with Oracle) cluster does not support Logical Replication Slot Failover, for example, if the Database Engine is Oracle syntax compatible 2.0, a failover in the source database may cause the migration instance to fail without the possibility of recovery.
- Due to the limits of logical replication in the source database, if a single data record to be migrated exceeds 256 MB after an incremental change, the migration instance may fail and cannot be recovered. In this case, you must reconfigure the migration instance.
- If the source database has long-running transactions and the instance performs incremental migration, the WAL generated before the transactions are committed may accumulate and cannot be cleared. This can exhaust the disk space of the source database.
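To spot the long-running transactions mentioned above, a sketch that uses the standard pg_stat_activity view; the 1-hour threshold is an arbitrary example, not a DTS requirement:

```sql
-- Find sessions whose current transaction has been open for more than
-- one hour; such transactions can prevent WAL from being cleared.
SELECT pid, usename, now() - xact_start AS xact_age, state
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
  AND now() - xact_start > interval '1 hour'
ORDER BY xact_age DESC;
```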
|
Other limits | - A single data migration task can migrate only one database. To migrate multiple databases, configure a separate data migration task for each database.
- DTS does not support the migration of tables created by the TimescaleDB extension, tables with cross-schema inheritance, or tables with unique indexes based on expressions.
- Schemas created by installing extensions are not supported for migration. When you configure the task, you cannot retrieve information about these schemas in the console.
- If a table to be migrated contains a SERIAL column, the source database automatically creates a sequence for that column. Therefore, if you select Schema Migration for Migration Types, also select Sequence in the Source Objects, or migrate the entire schema. Otherwise, the migration instance may fail.
- If the migration instance performs incremental data migration, you must run the ALTER TABLE schema.table REPLICA IDENTITY FULL; command on the tables to be migrated in the source database before you write data to them. This ensures data consistency. This requirement applies in the following two scenarios:
  - When the instance runs for the first time.
  - When the migration object granularity is Schema, and a new table is created in the schema to be migrated, or a table to be migrated is rebuilt by using the RENAME command.
  While this command is being executed, do not lock the tables. Otherwise, a deadlock may occur. If you skip the related check in the precheck, DTS automatically runs this command during the instance initialization.
Note In the command, replace schema and table with the schema name and table name of the data to be migrated. We recommend that you perform this operation during off-peak hours.
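For example, for a hypothetical table named orders in a schema named myschema, the command looks like this:

```sql
-- "myschema" and "orders" are placeholder names; substitute the schema
-- and table of the data to be migrated.
ALTER TABLE myschema.orders REPLICA IDENTITY FULL;
```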
- DTS creates the following temporary tables in the source database to obtain the DDL statements of incremental data, the schemas of incremental tables, and heartbeat information. Do not delete these temporary tables during the migration. Otherwise, the DTS task becomes abnormal. The temporary tables are automatically deleted after the DTS instance is released: public.dts_pg_class, public.dts_pg_attribute, public.dts_pg_type, public.dts_pg_enum, public.dts_postgres_heartbeat, public.dts_ddl_command, public.dts_args_session, and public.aliyun_dts_instance.
- To ensure the accuracy of the displayed migration latency for incremental data, DTS adds a heartbeat table named dts_postgres_heartbeat to the source database.
- During incremental data migration, DTS creates a replication slot prefixed with dts_sync_ in the source database to replicate data. By using this replication slot, DTS can obtain the incremental logs of the source database within the last 15 minutes. When the data migration fails or the migration instance is released, DTS attempts to automatically clear this replication slot.
Note If you change the password of the source database account used by the task, or remove the DTS IP address whitelist from the source database during data migration, the replication slot cannot be automatically cleared. In this case, you must manually clear the replication slot in the source database to prevent it from accumulating WAL and occupying disk space, which can make the source database unavailable. If a failover occurs in the source database, you must log on to the secondary database to manually clear the slot.
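A sketch of how a leftover slot can be located and cleared manually with standard PostgreSQL functions; the slot name shown is hypothetical:

```sql
-- Find DTS replication slots that were not cleared automatically.
SELECT slot_name, active FROM pg_replication_slots
WHERE slot_name LIKE 'dts_sync_%';

-- Drop a leftover slot by name. Replace 'dts_sync_example' with the
-- actual slot_name returned by the query above.
SELECT pg_drop_replication_slot('dts_sync_example');
```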
- Before you migrate data, evaluate the performance of the source and destination databases and perform the migration during off-peak hours. During full data migration, DTS consumes some read and write resources of the source and destination databases, which may increase the database load.
- Because full data migration performs concurrent INSERT operations, the tables in the destination database become fragmented. As a result, after the full migration is complete, the tables in the destination database occupy more storage space than those in the source instance.
- Confirm that the precision that DTS uses for columns of the FLOAT or DOUBLE data type meets your business requirements. DTS reads the values of these columns by using ROUND(COLUMN,PRECISION). If the precision is not explicitly defined, DTS migrates FLOAT values with a precision of 38 digits and DOUBLE values with a precision of 308 digits.
- DTS attempts to recover failed migration tasks within seven days. Before you switch your business to the destination instance, end or release the task, or use the REVOKE command to revoke the write permissions of the account that DTS uses to access the destination instance. This prevents the source data from overwriting the data in the destination instance after the task is automatically recovered.
- DTS validates data content but does not support validation for metadata such as sequences. You must validate the metadata yourself.
- After you switch your business to the destination instance, newly written sequence values do not continue from the maximum value of the source sequences. Before the switchover, update the sequence values in the destination database. For more information, see Update the sequence values in the destination database.
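As a minimal sketch of updating a destination sequence before the switchover, assuming a hypothetical table myschema.orders whose id column is backed by the sequence myschema.orders_id_seq:

```sql
-- Align the destination sequence with the current maximum id so that
-- rows inserted after the switchover continue from the highest value.
SELECT setval('myschema.orders_id_seq',
              (SELECT max(id) FROM myschema.orders));
```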
- For a full or incremental migration task, if the tables to be migrated in the source database contain foreign keys, triggers, or event triggers, and the destination database account is a privileged account or has superuser permissions, DTS temporarily sets the session_replication_role parameter to replica at the session level. If the destination database account does not have these permissions, you must manually set the session_replication_role parameter to replica in the destination database. While session_replication_role is set to replica, cascading update or delete operations in the source database may cause data inconsistency. After the DTS migration task is released, you can change session_replication_role back to origin.
- If the task fails, DTS technical support attempts to recover it within 8 hours. During the recovery process, operations such as restarting the task or adjusting its parameters may be performed.
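If the destination account lacks the required permissions, the parameter can be set manually. A sketch using standard PostgreSQL syntax; destdb is a hypothetical database name, and applying the setting at the database level (so that it affects the sessions DTS opens) typically requires superuser privileges:

```sql
-- Apply to all new connections to the destination database.
ALTER DATABASE destdb SET session_replication_role = 'replica';

-- After the DTS migration task is released, restore the default.
ALTER DATABASE destdb SET session_replication_role = 'origin';
```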
Note When parameters are adjusted, only DTS task parameters are modified. Database parameters remain unchanged. The parameters that may be modified include but are not limited to those described in Modify instance parameters.
- When you migrate partitioned tables, include both the parent table and all of its child partitions in the objects to migrate. Otherwise, data inconsistency may occur for the partitioned table.
Note The parent table of a partitioned table in PolarDB for PostgreSQL (Compatible with Oracle) does not directly store data. All data is stored in the child partitions. The migration task must include the parent table and all of its child partitions. Otherwise, data in the child partitions may be missed, which causes data inconsistency between the source and destination.
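To help make sure no child partition is missed, the partitions of a parent table can be listed from the standard pg_inherits catalog; myschema.sales is a hypothetical parent table:

```sql
-- List every direct child partition of the parent table so that all of
-- them can be selected as objects to migrate.
SELECT inhrelid::regclass AS child_partition
FROM pg_inherits
WHERE inhparent = 'myschema.sales'::regclass;
```

For multi-level partitioning, repeat the query for each child, or use a recursive query over pg_inherits.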
|