Type | Description |
Source database limits | Bandwidth: The source database server must have sufficient egress bandwidth. Otherwise, the data migration speed is affected.
Primary keys: Tables to be migrated must have a primary key or a UNIQUE constraint, and the fields in the constraint must be unique. Otherwise, duplicate data may occur in the destination database.
Number of tables: If you migrate objects at the table level and need to edit them, such as mapping column names, a single data migration task can migrate a maximum of 1,000 tables. If you exceed this limit, an error is reported when you submit the task. In this case, split the tables into smaller batches and configure multiple tasks, or configure a task to migrate the entire database.
Write-ahead log (WAL): If you perform incremental migration, the WAL must be enabled (for logical replication, the wal_level parameter must be set to logical). For incremental migration tasks, DTS requires the source database to retain WAL logs for more than 24 hours. For tasks that include both full and incremental migration, DTS requires the source database to retain WAL logs for at least 7 days. After the full migration is complete, you can change the log retention period to more than 24 hours. If the retention period is too short, the DTS task may fail because DTS cannot obtain the required WAL logs. In extreme cases, this can lead to data inconsistency or data loss. Issues caused by a log retention period that is shorter than the DTS requirement are not covered by the DTS Service-Level Agreement (SLA).
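The WAL requirements above can be checked from a client session on the source database. A minimal read-only sketch (standard PostgreSQL parameter names; the retention-related parameter name may differ by PolarDB version):

```sql
-- Logical decoding must be enabled for incremental migration.
SHOW wal_level;       -- must return: logical

-- Amount of WAL kept for replication (PostgreSQL 13+ parameter name).
SHOW wal_keep_size;
```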
Source database operation limits:
During the schema migration and full data migration phases, do not perform DDL operations that change the database or table structure. Otherwise, the data migration task fails.
If you perform only full data migration, do not write new data to the source instance. Otherwise, data inconsistency occurs between the source and destination. To maintain real-time data consistency, select schema migration, full data migration, and incremental data migration.
To ensure that the migration task runs properly and to prevent logical subscription interruptions caused by a primary/secondary switchover, your PolarDB for PostgreSQL (Compatible with Oracle) instance must support the Logical Replication Slot Failover feature and have it enabled. For more information, see Enable Logical Replication Slot Failover.
Note
If the source PolarDB for PostgreSQL (Compatible with Oracle) cluster does not support the logical replication slot failover feature (for example, when the cluster's Database Engine is Oracle Syntax Compatible 2.0), the migration instance may fail and cannot be recovered if the source database triggers a high-availability (HA) failover.
Due to the limits on logical subscriptions in the source database, if a single record to be migrated exceeds 256 MB after an incremental change while a migration instance that includes an incremental task is running, the migration instance may fail and cannot be recovered. In this case, you must reconfigure the migration instance.
If the source database has long-running transactions and the instance includes an incremental migration task, the write-ahead logs (WAL) generated before the transactions are committed cannot be cleared and may accumulate, which can exhaust the disk space of the source database.
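Long-running transactions that block WAL cleanup can be spotted in pg_stat_activity. A read-only check (the 10-minute threshold is illustrative, not a DTS requirement):

```sql
-- Find transactions that have been open for more than 10 minutes.
SELECT pid, usename, now() - xact_start AS xact_age, state, query
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
  AND now() - xact_start > interval '10 minutes'
ORDER BY xact_age DESC;
```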
|
Other limits | A single data migration task can migrate only one database. To migrate multiple databases, you must configure a separate data migration task for each database.
DTS does not support migrating TimescaleDB extension tables or tables with cross-schema inheritance relationships.
If a table to be migrated contains a column of the SERIAL type, the source database automatically creates a sequence for the column. Therefore, when you configure the Source Objects, if Migration Types is set to Schema Migration, we recommend that you also select Sequence or migrate the entire schema. Otherwise, the migration instance may fail.
If the migration instance includes an incremental data migration task, you must run the ALTER TABLE schema.table REPLICA IDENTITY FULL; command on each table to be migrated in the source database before you write data to it in the following two scenarios. This ensures data consistency for the table migration. We recommend that you do not perform table locking operations while this command is being executed. Otherwise, the table will be locked. If you skip the relevant check in the precheck, DTS automatically runs this command when the instance is initialized.
When the instance runs for the first time.
When the migration object granularity is Schema, and a new table is created in the schema to be migrated, or a table to be migrated is rebuilt by using the RENAME command.
Note In the command, replace schema and table with the schema name and table name of the data to be migrated. We recommend that you perform this operation during off-peak hours.
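For example, with a hypothetical table orders in a schema named sales (both names are placeholders), the command and a follow-up verification look like this:

```sql
-- Placeholder names: replace "sales" and "orders" with your schema and table.
ALTER TABLE sales.orders REPLICA IDENTITY FULL;

-- Verify the setting: 'f' means FULL ('d' = default, 'n' = nothing, 'i' = index).
SELECT relreplident FROM pg_class WHERE oid = 'sales.orders'::regclass;
```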
DTS creates the following temporary tables in the source database to obtain the DDL statements of incremental data, the schemas of incremental tables, and heartbeat information: public.dts_pg_class, public.dts_pg_attribute, public.dts_pg_type, public.dts_pg_enum, public.dts_postgres_heartbeat, public.dts_ddl_command, public.dts_args_session, and public.aliyun_dts_instance. Do not delete these temporary tables during data migration. Otherwise, the data migration task may fail. The temporary tables are automatically deleted after the DTS instance is released.
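A quick read-only way to see which of these DTS temporary tables currently exist (the LIKE patterns are an assumption derived from the table names listed above):

```sql
-- List DTS-related tables in the public schema of the source database.
SELECT schemaname, tablename
FROM pg_tables
WHERE schemaname = 'public'
  AND (tablename LIKE 'dts_%' OR tablename LIKE 'aliyun_dts_%');
```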
To ensure the accuracy of the displayed latency for incremental data migration, DTS adds a heartbeat table named dts_postgres_heartbeat to the source database. During incremental data migration, DTS creates a replication slot with the prefix dts_sync_ in the source database to replicate data. DTS can use this replication slot to obtain incremental logs from the source database within 15 minutes.
Note DTS automatically clears the replication slot when the migration task is released or fails. However, if you change the database password or delete the DTS IP addresses from the whitelist during migration, the replication slot cannot be deleted automatically. In this case, you must manually delete the replication slot from the source database to prevent it from accumulating and occupying disk space, which can make the ApsaraDB RDS for PostgreSQL instance unavailable. If a primary/secondary failover occurs on the ApsaraDB RDS for PostgreSQL instance, you must log on to the secondary database to manually clear the replication slot.
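Leftover slots can be listed and removed with standard PostgreSQL catalog functions. A sketch (the slot name passed to pg_drop_replication_slot is a placeholder, and the slot must be inactive before it can be dropped):

```sql
-- List DTS replication slots; their names are prefixed with dts_sync_.
SELECT slot_name, active, restart_lsn
FROM pg_replication_slots
WHERE slot_name LIKE 'dts_sync_%';

-- Drop a leftover slot by name (placeholder name shown).
SELECT pg_drop_replication_slot('dts_sync_example_slot');
```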
Evaluate the performance of the source and destination databases before data migration. We recommend that you perform data migration during off-peak hours. During full data migration, DTS consumes some read and write resources of both databases, which can increase the database load.
Because full data migration performs concurrent INSERT operations, it causes table fragmentation in the destination database. Therefore, after the full migration is complete, the table storage space in the destination database is larger than that in the source instance.
Confirm that the migration precision of DTS for columns of the FLOAT or DOUBLE data type meets your business requirements. DTS reads the values of these columns by using ROUND(COLUMN,PRECISION). If the precision is not explicitly defined, DTS migrates FLOAT values with a precision of 38 digits and DOUBLE values with a precision of 308 digits.
DTS attempts to resume failed migration tasks within seven days. Therefore, before you switch your business to the destination instance, you must end or release the task, or use the REVOKE command to revoke the write permissions of the account that DTS uses to access the destination instance. This prevents the source data from overwriting the data in the destination instance if the task is automatically resumed.
DTS validates data content but does not validate metadata such as sequences. You must validate metadata yourself. After you switch your business to the destination instance, newly written sequence values do not increment from the maximum sequence value in the source database. Therefore, you must update the sequence values in the destination database before the business switchover. For more information, see Update the sequence value in the destination database.
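Both the permission revocation and the sequence update described above can be performed with standard SQL. A hedged sketch, using the placeholder names dts_user, sales.orders, and sales.orders_id_seq (substitute your own account, table, and sequence names):

```sql
-- Revoke write permissions from the account that DTS uses (placeholder: dts_user).
REVOKE INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public FROM dts_user;

-- Align a destination sequence with the current maximum key of its table.
-- Repeat for each sequence; names below are placeholders.
SELECT setval('sales.orders_id_seq',
              (SELECT COALESCE(MAX(id), 1) FROM sales.orders));
```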
For a full migration or incremental migration task, if the tables to be migrated in the source database contain foreign keys, triggers, or event triggers, and the destination database account is a privileged account or an account with superuser permissions, DTS temporarily sets the session_replication_role parameter to replica at the session level during full or incremental migration. If the destination database account does not have this permission, you must manually set the session_replication_role parameter to replica in the destination database. While the session_replication_role parameter is set to replica, cascade update or delete operations in the source database may cause data inconsistency. After the DTS migration task is released, you can change the value of the session_replication_role parameter back to origin.
If an instance fails, DTS technical support will try to recover the instance within 8 hours. During the recovery process, operations such as restarting the instance and adjusting parameters may be performed.
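If you must set session_replication_role manually in the destination database, the statements are standard PostgreSQL and take effect only for the current session. A sketch (requires superuser or equivalent privileges):

```sql
-- Suppress foreign-key enforcement triggers and regular triggers for this session,
-- so migrated rows can be written in any order.
SET session_replication_role = replica;

-- ... migration writes happen in this session ...

-- Restore the default trigger behavior after the migration task is released.
SET session_replication_role = origin;
```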
Note When parameters are adjusted, only the parameters of the DTS instance are modified. The parameters of the database are not modified. The parameters that may be modified include but are not limited to those described in Modify instance parameters.
When you migrate partitioned tables, include both the child partitions and the parent table as migration objects. Otherwise, data inconsistency may occur for the partitioned table.
Note The parent table of a partitioned table in PolarDB for PostgreSQL (Compatible with Oracle) does not directly store data. All data is stored in the child partitions. The migration task must include both the parent table and all of its child partitions. Otherwise, data from the child partitions may be missed, which causes data inconsistency between the source and destination.
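To enumerate the child partitions that must be selected together with a parent table, a read-only catalog query can help (the parent table name is a placeholder):

```sql
-- List all child partitions of a (placeholder) parent table.
SELECT inhrelid::regclass AS child_partition
FROM pg_inherits
WHERE inhparent = 'sales.orders'::regclass;
```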
|