When you configure a task that migrates data to a Kafka cluster, you can specify a policy for distributing the migrated data across Kafka partitions. An appropriate policy can improve migration performance. For example, you can route data to different partitions based on hash values.
Data Transmission Service (DTS) uses the hashCode() method in Java to calculate hash values.
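For illustration only (this is not DTS's internal code): Java's `hashCode()` returns a signed 32-bit integer, which can be negative, so a common way to map a hash value onto a fixed number of partitions is `Math.floorMod`, which always yields a non-negative index. The class and method names below are hypothetical.

```java
public class HashPartitionDemo {
    // Map an arbitrary string key to one of numPartitions buckets.
    // Math.floorMod is used instead of % because hashCode() may be negative.
    public static int partitionFor(String key, int numPartitions) {
        return Math.floorMod(key.hashCode(), numPartitions);
    }

    public static void main(String[] args) {
        // Same key always maps to the same partition index in [0, numPartitions).
        System.out.println(partitionFor("mydb.orders", 12));
    }
}
```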
In the Configure Migration Types and Objects step of the task creation wizard, you can specify the policy for migrating data to Kafka partitions. For more information, see Migrate data from a self-managed Oracle database to a Message Queue for Apache Kafka instance and Overview of data migration scenarios.
| Policy | Description | Advantage and disadvantage |
|--------|-------------|----------------------------|
| Ship All Data to Partition 0 | DTS migrates all data and DDL statements to Partition 0 of the destination topic. | Advantage: the order of all messages is preserved because they reside in a single partition. Disadvantage: only one partition is used, which limits migration throughput. |
| Ship Data to Separate Partitions Based on Hash Values of Database and Table Names | DTS uses the database and table names as the partition key to calculate the hash value. Then, DTS migrates the data and DDL statements of each table to the partition of the destination topic that corresponds to the hash value. | Advantage: the order of data and DDL statements within each table is preserved. Disadvantage: if most changes occur in a few tables, the corresponding partitions can become hotspots. |
| Ship Data to Separate Partitions Based on Hash Values of Primary Keys | DTS uses a table column as the partition key to calculate the hash value. By default, the partition key is the primary key. If a table does not have a primary key, the unique key is used as the partition key. DTS migrates each row of data to the partition of the destination topic that corresponds to the hash value. You can also specify one or more columns as partition keys to calculate the hash value. | Advantage: rows are distributed across partitions, which improves migration performance. Disadvantage: the order of messages for different rows of the same table is not preserved. |
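The two hash-based policies above can be sketched as follows. This is a minimal illustration of the routing idea, not DTS internals; the `PartitionRouter` class and its method names are assumptions made for this example.

```java
import java.util.List;

public class PartitionRouter {
    private final int numPartitions;

    public PartitionRouter(int numPartitions) {
        this.numPartitions = numPartitions;
    }

    // Hash of database and table names: all data and DDL statements of
    // one table land in the same partition, preserving per-table order.
    public int partitionForTable(String database, String table) {
        String key = database + "." + table;
        return Math.floorMod(key.hashCode(), numPartitions);
    }

    // Hash of the partition-key column values (primary key by default):
    // rows of a table are spread across partitions, but rows with the
    // same key values always map to the same partition.
    public int partitionForRow(List<Object> partitionKeyValues) {
        return Math.floorMod(partitionKeyValues.hashCode(), numPartitions);
    }
}
```

Because the mapping is deterministic, repeated changes to the same row (or the same table, depending on the policy) always arrive in the same partition, which is what preserves ordering at that granularity.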