Limits on the source database
- The server to which the source database belongs must have sufficient outbound bandwidth.
Otherwise, the data migration speed decreases.
- The tables to be migrated must have PRIMARY KEY or UNIQUE constraints, and the fields
covered by these constraints must contain unique values. Otherwise, the destination
database may contain duplicate data records.
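Before you configure the task, you can check the source database for tables that lack the required constraints. The following is a minimal sketch that builds such a query from `information_schema`; the schema name `my_source_db` is a placeholder, and you would run the generated SQL against the source MySQL database yourself.

```python
# Build a SQL query that lists tables in a given schema without a
# PRIMARY KEY or UNIQUE constraint (candidates for duplicate records).
# The schema name is a placeholder for illustration.

def missing_key_query(schema: str) -> str:
    """Return SQL listing base tables in `schema` that have no
    PRIMARY KEY or UNIQUE constraint."""
    return f"""
        SELECT t.table_name
        FROM information_schema.tables AS t
        LEFT JOIN information_schema.table_constraints AS c
          ON c.table_schema = t.table_schema
         AND c.table_name   = t.table_name
         AND c.constraint_type IN ('PRIMARY KEY', 'UNIQUE')
        WHERE t.table_schema = '{schema}'
          AND t.table_type = 'BASE TABLE'
          AND c.constraint_name IS NULL;
    """.strip()

print(missing_key_query("my_source_db"))
```

Any table returned by this query should be given a primary key or unique constraint before migration, or excluded from the task.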
- If you select tables as the objects to be migrated and you need to modify the tables
in the destination database, such as renaming tables or columns, you can migrate up
to 1,000 tables in a single data migration task. If you run a task to migrate more
than 1,000 tables, a request error occurs. In this case, we recommend that you split
the tables and configure multiple tasks to migrate the tables, or configure a task
to migrate the entire database.
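If you choose to split the tables across multiple tasks, a simple batching scheme keeps each task under the 1,000-table ceiling stated above. This is a sketch; the table names are illustrative:

```python
# Split a list of tables into batches small enough for one migration
# task each. The 1,000-table limit is the per-task ceiling documented
# for tasks that rename tables or columns in the destination.

MAX_TABLES_PER_TASK = 1000

def split_into_tasks(tables, limit=MAX_TABLES_PER_TASK):
    """Return consecutive chunks of `tables`, each at most `limit` long."""
    return [tables[i:i + limit] for i in range(0, len(tables), limit)]

tables = [f"t_{n}" for n in range(2500)]   # e.g. 2,500 tables to migrate
batches = split_into_tasks(tables)
print(len(batches))                        # 3 tasks: 1000 + 1000 + 500
```

Each batch then becomes the object list of one data migration task.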
- If you want to migrate incremental data, you must make sure that the following requirements
are met:
- The binary logging feature is enabled. The value of the binlog_row_image parameter
is set to full. Otherwise, error messages are returned during precheck and the data
migration task cannot be started.
- For an incremental data migration task, the binary logs of the source database must
be stored for more than 24 hours. For a full data and incremental data migration task,
the binary logs of the source database must be stored for at least seven days. After
the full data migration is complete, you can set the retention period to more than
24 hours. Otherwise, Data Transmission Service (DTS) may fail to obtain the binary
logs and the task may fail. In exceptional circumstances, data inconsistency or loss
may occur. Make sure that you set the retention period of binary logs based on the
preceding requirements. Otherwise, the Service Level Agreement (SLA) of DTS does not
ensure service reliability or performance.
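The binary log requirements above can be prechecked before the task is started. The following is a minimal sketch: the variable values would come from `SHOW GLOBAL VARIABLES` on the source database, and `binlog_retention_hours` is an assumed name for however your MySQL flavor exposes retention; substitute the setting your environment actually uses.

```python
# Precheck sketch for the binlog requirements: binary logging enabled,
# binlog_row_image set to FULL, and retention of at least 24 hours
# (at least 7 days for a full data and incremental data migration task).
# `binlog_retention_hours` is an assumed variable name for illustration.

def check_binlog_settings(vars_, full_and_incremental=True):
    """Return a list of human-readable problems; an empty list means OK."""
    problems = []
    if vars_.get("log_bin", "OFF").upper() != "ON":
        problems.append("binary logging (log_bin) is not enabled")
    if vars_.get("binlog_row_image", "").upper() != "FULL":
        problems.append("binlog_row_image must be set to FULL")
    required_hours = 7 * 24 if full_and_incremental else 24
    if int(vars_.get("binlog_retention_hours", 0)) < required_hours:
        problems.append(f"binlog retention must be at least {required_hours} hours")
    return problems

print(check_binlog_settings({
    "log_bin": "ON",
    "binlog_row_image": "FULL",
    "binlog_retention_hours": 200,
}))
```

Running such a check before starting the task avoids precheck errors and, more importantly, the data loss that can occur when binary logs expire mid-migration.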
- Limits on operations:
- During schema migration and full data migration, do not perform DDL operations to
change the schemas of databases or tables. Otherwise, the data migration task fails.
- If you switch the network type of the PolarDB-X V2.0 instance during data migration, you must submit a ticket to update the network connection settings of the data migration task.
- If you perform only full data migration, do not write data to the source database
during data migration. Otherwise, data inconsistency between the source and destination
databases occurs. To ensure data consistency, we recommend that you select schema
migration, full data migration, and incremental data migration as the migration types.
- The PolarDB-X V2.0 instance must be compatible with MySQL 5.7.
- Before you migrate data, evaluate the impact of data migration on the performance
of the source and destination databases. We recommend that you migrate data during
off-peak hours. During full data migration, DTS uses the read and write resources
of the source and destination databases. This may increase the loads on the database
servers.
- During full data migration, concurrent INSERT operations cause fragmentation in the
tables of the destination database. After the full data migration is complete, the
size of the used tablespace of the destination database is larger than that of the source
database.
- DTS attempts to resume data migration tasks that failed within the last seven days.
Before you switch workloads to the destination instance, stop or release the data
migration task. You can also execute the
REVOKE statement to revoke the write permissions from the accounts used by DTS to access
the destination instance. Otherwise, the data in the source database will overwrite
the data in the destination database after the task is resumed.
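The REVOKE step above can be scripted. The following sketch builds the statements; the account `dts_user`@`%` and the database name are placeholders, and you would substitute the accounts your DTS task actually uses on the destination instance.

```python
# Build REVOKE statements that remove write privileges on the destination
# database from the accounts used by DTS. Account names and the database
# name below are placeholders for illustration.

DTS_ACCOUNTS = [("dts_user", "%")]   # hypothetical DTS account list

def revoke_statements(database, accounts=DTS_ACCOUNTS):
    """Return REVOKE statements stripping write privileges on `database`."""
    return [
        f"REVOKE INSERT, UPDATE, DELETE, CREATE, DROP, ALTER "
        f"ON `{database}`.* FROM '{user}'@'{host}';"
        for user, host in accounts
    ]

for stmt in revoke_statements("dest_db"):
    print(stmt)
```

Execute the generated statements on the destination instance before switching workloads, so a resumed task cannot overwrite the destination data.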
- DTS updates the `dts_health_check`.`ha_health_check` table in the source database as scheduled to move forward the binary log file position.
- If the destination database runs on a PolarDB for MySQL cluster, take note of the
following limits:
DTS automatically creates a destination database in the PolarDB for MySQL cluster.
However, if the name of the source database is invalid, you must manually create a
database in the PolarDB for MySQL cluster before you configure the data migration
task. For more information, see Database Management.