This topic describes the precautions and limits when you migrate data from a Teradata database. To ensure that your data migration task runs as expected, read the precautions and limits before you configure the task.

Migrate data from a Teradata database to an AnalyticDB for PostgreSQL instance

The following table describes the precautions and limits.
Limits on the source database
  • Bandwidth requirements: The server to which the source database belongs must have sufficient egress bandwidth. Otherwise, the data migration speed is affected.
  • The tables to be migrated must have PRIMARY KEY or UNIQUE constraints, and all fields must be unique. Otherwise, the destination database may contain duplicate data records. You can check the source tables by using a query such as the one shown after this list.
  • If you select tables as the objects to be migrated and you need to edit the tables (for example, rename tables or columns), you can migrate up to 1,000 tables in a single data migration task. If you run a task to migrate more than 1,000 tables, a request error occurs. In this case, we recommend that you split the tables across multiple data migration tasks or configure a task to migrate the entire database.
  • Limits on operations:
    • During schema migration and full data migration, do not perform data definition language (DDL) operations to change the schemas of databases or tables. Otherwise, the data migration task fails.
    • In this scenario, DTS does not support incremental data migration. To ensure data consistency, we recommend that you do not write data to the source database during data migration.
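
The following query is a minimal sketch that you can run on the source Teradata database to find tables that do not meet the PRIMARY KEY or UNIQUE requirement described above. It assumes the standard DBC.TablesV and DBC.IndicesV dictionary views; SRC_DB is a placeholder for the source database name, and the IndexType codes should be verified against your Teradata release.

    -- A sketch only: list tables in the source database that have neither a
    -- PRIMARY KEY nor a UNIQUE constraint and no unique index. 'SRC_DB' is a
    -- placeholder for the database to be migrated.
    SELECT t.DatabaseName, t.TableName
    FROM DBC.TablesV t
    WHERE t.DatabaseName = 'SRC_DB'
      AND t.TableKind = 'T'                   -- base tables
      AND NOT EXISTS (
            SELECT 1
            FROM DBC.IndicesV i
            WHERE i.DatabaseName = t.DatabaseName
              AND i.TableName = t.TableName
              AND (i.IndexType IN ('K', 'U')  -- PRIMARY KEY / UNIQUE constraint
                   OR i.UniqueFlag = 'Y')     -- any unique index, including a UPI
          );

Tables returned by such a query are candidates for adding a PRIMARY KEY or UNIQUE constraint before you configure the migration task.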
Other limits
  • You can configure a data migration task for this scenario only in the China (Shanghai), China (Qingdao), or China (Zhangjiakou) region.
  • In this scenario, DTS supports only schema migration and full data migration. DTS does not support incremental data migration.
  • Before you migrate data, evaluate the impact of data migration on the performance of the source and destination databases, and migrate data during off-peak hours. During full data migration, DTS uses the read and write resources of the source and destination databases, which may increase the load on the database servers.
  • During full data migration, concurrent INSERT operations cause fragmentation in the tables of the destination database. After full data migration is completed, the size of the used tablespace of the destination database is larger than that of the source database.
  • You must make sure that the precision settings for columns of the FLOAT or DOUBLE data type meet your business requirements. DTS uses the ROUND(COLUMN,PRECISION) function to retrieve values from columns of the FLOAT or DOUBLE data type. If you do not specify a precision, DTS sets the precision for the FLOAT data type to 38 digits and the precision for the DOUBLE data type to 308 digits, as illustrated in the first example after this list.
  • DTS attempts to resume data migration tasks that failed within the last seven days. Before you switch workloads to the destination instance, stop or release the data migration task. You can also execute the REVOKE statement to revoke the write permissions from the accounts that DTS uses to access the destination instance, as shown in the second example after this list. Otherwise, the data in the source database will overwrite the data in the destination instance after the task is resumed.
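
The following query is illustrative only. It shows the general form of the ROUND-based reads described above, with the default precisions of 38 and 308 digits applied; float_col, double_col, and src_db.src_table are hypothetical names, and the actual queries that DTS issues may differ.

    -- Illustrative only: how FLOAT and DOUBLE columns are read with ROUND
    -- when no precision is specified. All object names are placeholders.
    SELECT ROUND(float_col, 38)   AS float_col,
           ROUND(double_col, 308) AS double_col
    FROM src_db.src_table;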
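The following statement is a minimal sketch of revoking write permissions on the destination AnalyticDB for PostgreSQL instance. It assumes that the migrated objects are in the public schema and that dts_migrator is the account that DTS uses; replace both with your actual schema and account names.

    -- A minimal sketch for the destination AnalyticDB for PostgreSQL instance.
    -- dts_migrator is a placeholder for the account that DTS uses; adjust the
    -- schema name to match the schema that contains the migrated objects.
    REVOKE INSERT, UPDATE, DELETE, TRUNCATE
        ON ALL TABLES IN SCHEMA public
        FROM dts_migrator;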