This topic describes how to migrate incremental data from a PolarDB-X V2.0 instance to a Message Queue for Apache Kafka instance in real time by using Data Transmission Service (DTS). You can then use the extended message processing capabilities of Message Queue for Apache Kafka to process the migrated data.

Prerequisites

  • A source PolarDB-X V2.0 instance that is compatible with MySQL 5.7 is created.
  • In the destination Message Queue for Apache Kafka instance, a topic is created to receive the migrated data. For more information, see Step 1: Create a topic.
  • The versions of the source PolarDB-X V2.0 instance and the destination Message Queue for Apache Kafka instance are supported by DTS. For more information, see Overview of data migration scenarios.
  • The available storage space of the destination Message Queue for Apache Kafka instance is larger than the total size of the data in the source PolarDB-X V2.0 instance.

Billing

| Migration type | Task configuration fee | Internet traffic fee |
|----------------|------------------------|----------------------|
| Schema migration and full data migration | Free of charge. | Charged only when data is migrated from Alibaba Cloud over the Internet. For more information, see Pricing. |
| Incremental data migration | Charged. For more information, see Pricing. | Charged only when data is migrated from Alibaba Cloud over the Internet. For more information, see Pricing. |

Migration types

  • Schema migration

    DTS migrates the schemas of required objects from the source database to the destination database.

  • Full data migration

    DTS migrates historical data of required objects from the source database to the destination database.

  • Incremental data migration

    After full data migration is complete, DTS migrates incremental data from the source database to the destination database. This ensures service continuity during the migration.

SQL operations that can be migrated

| Operation type | SQL statements |
|----------------|----------------|
| DML | INSERT, UPDATE, and DELETE |

Permissions required for database accounts

| Database | Schema migration | Full data migration | Incremental data migration |
|----------|------------------|---------------------|----------------------------|
| PolarDB-X V2.0 instance | The SELECT permission | The SELECT permission | The SELECT permission on the objects to be migrated, the REPLICATION SLAVE permission, and the REPLICATION CLIENT permission. DTS automatically grants these permissions to the database account. |
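DTS grants the incremental migration permissions to the account automatically. If you manage database accounts yourself and need to grant the permissions manually, the following minimal sketch shows the equivalent GRANT statements issued from Python over a MySQL connection. The pymysql package, the dts_user account, and the mydb database are assumptions for illustration; replace the placeholders with your own values.

```python
import pymysql  # PolarDB-X V2.0 is MySQL-compatible, so a MySQL driver works

# Hypothetical endpoint and privileged account; replace with your own values.
conn = pymysql.connect(host="<PolarDB-X endpoint>", port=3306,
                       user="admin", password="<password>")
try:
    with conn.cursor() as cur:
        # SELECT on the objects to be migrated (schema and full data migration).
        cur.execute("GRANT SELECT ON mydb.* TO 'dts_user'@'%'")
        # Binlog access for incremental data migration; DTS normally grants
        # these automatically to the account that you specify.
        cur.execute(
            "GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'dts_user'@'%'"
        )
    conn.commit()
finally:
    conn.close()
```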
Procedure

  1. Go to the Data Migration page of the new DTS console.
    Note You can also log on to the Data Management (DMS) console. In the top navigation bar, click DTS.
  2. In the upper-left corner of the page, select the region in which the data migration instance resides.
  3. Click Create Task. On the page that appears, configure the source and destination databases.
    Warning After you select the source and destination instances, we recommend that you read the limits displayed at the top of the page. This helps you create and run the data migration task.
    • Task Name: DTS automatically generates a task name. We recommend that you specify a descriptive name that makes the task easy to identify. The task name does not need to be unique.

    Source Database
    • Database Type: Select PolarDB-X 2.0.
    • Access Method: Select Alibaba Cloud Instance.
    • Instance Region: Select the region in which the source PolarDB-X V2.0 instance resides.
    • Instance ID: Select the ID of the source PolarDB-X V2.0 instance.
    • Database Account: Enter the database account of the source PolarDB-X V2.0 instance. For information about the permissions required for the account, see Permissions required for database accounts.
    • Database Password: Enter the password of the database account.

    Destination Database
    • Database Type: Select Kafka.
    • Access Method: Select Express Connect, VPN Gateway, or Smart Access Gateway.
      Note You cannot select Message Queue for Apache Kafka as the instance type. Instead, connect to the Message Queue for Apache Kafka instance as a self-managed Kafka database.
    • Instance Region: Select the region in which the destination Message Queue for Apache Kafka instance resides.
    • Connected VPC: Select the ID of the virtual private cloud (VPC) to which the destination Message Queue for Apache Kafka instance belongs. To obtain the VPC ID, log on to the Message Queue for Apache Kafka console and view the Instance Details page of the instance.
    • Hostname or IP Address: Enter an IP address that is included in the Default Endpoint parameter of the Message Queue for Apache Kafka instance. To obtain the IP address, log on to the Message Queue for Apache Kafka console and view the Default Endpoint parameter on the Instance Details page of the instance.
    • Port Number: Enter the service port number of the Message Queue for Apache Kafka instance. The default port number is 9092.
    • Database Account: Enter the database account of the destination Message Queue for Apache Kafka instance.
      Note If the instance type of the Message Queue for Apache Kafka instance is VPC Instance, you do not need to specify the database account or database password.
    • Database Password: Enter the password of the database account.
    • Kafka Version: Select the version of the destination Message Queue for Apache Kafka instance.
    • Encryption: Select Non-encrypted or SCRAM-SHA-256 based on your business and security requirements. A test consumer sketch that uses these connection settings is shown after this list.
    • Topic: Select the topic that receives the migrated data from the drop-down list.
    • Topic That Stores DDL Information: Select a topic from the drop-down list to store the DDL information. If you do not specify this parameter, the DDL information is stored in the topic that is specified by the Topic parameter.
    • Use Kafka Schema Registry: Kafka Schema Registry provides a serving layer for your metadata and a RESTful interface for storing and retrieving Avro schemas.
      • No: Kafka Schema Registry is not used.
      • Yes: Kafka Schema Registry is used. In this case, you must enter the URL or IP address that is registered in Kafka Schema Registry for your Avro schemas.
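    Before you proceed, you can optionally verify the destination connection settings outside DTS. The following minimal sketch assumes the kafka-python package, a hypothetical endpoint 192.0.2.10:9092, and a hypothetical topic named dts-migration-topic; the SASL settings apply only if you select SCRAM-SHA-256 for Encryption.

    ```python
    from kafka import KafkaConsumer

    # Hypothetical endpoint IP and topic; use the values from the list above.
    consumer = KafkaConsumer(
        "dts-migration-topic",
        bootstrap_servers="192.0.2.10:9092",
        # The SASL settings below apply only if Encryption is SCRAM-SHA-256;
        # for Non-encrypted, omit them (security_protocol defaults to PLAINTEXT).
        security_protocol="SASL_PLAINTEXT",
        sasl_mechanism="SCRAM-SHA-256",
        sasl_plain_username="<database account>",
        sasl_plain_password="<database password>",
        auto_offset_reset="earliest",
    )
    for message in consumer:
        print(message.topic, message.partition, message.offset, len(message.value))
    ```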
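    If you set Use Kafka Schema Registry to Yes, you can confirm that the registry is reachable and inspect a registered Avro schema over its RESTful interface. The following sketch uses only the Python standard library; the registry URL and the subject name are hypothetical.

    ```python
    import json
    import urllib.request

    # Hypothetical registry URL and subject name; use the URL that you registered.
    url = "http://192.0.2.20:8081/subjects/dts-migration-topic-value/versions/latest"
    with urllib.request.urlopen(url) as resp:
        payload = json.load(resp)

    # The registry returns the Avro schema as a JSON string in the "schema" field.
    schema = json.loads(payload["schema"])
    print(schema.get("name"), [f["name"] for f in schema.get("fields", [])])
    ```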
  4. In the lower part of the page, click Test Connectivity and Proceed.
    Warning
    • If the source or destination database instance is an Alibaba Cloud database instance, such as an ApsaraDB RDS for MySQL or ApsaraDB for MongoDB instance, or a self-managed database hosted on Elastic Compute Service (ECS), DTS automatically adds the CIDR blocks of DTS servers to the whitelist of the database instance or to the ECS security group rules. For more information, see Add the CIDR blocks of DTS servers to the security settings of on-premises databases. If the source or destination database is a self-managed database in a data center or is provided by another cloud service provider, you must manually add the CIDR blocks of DTS servers to allow DTS to access the database.
    • Adding the CIDR blocks of DTS servers to the whitelist of the database instance or to the ECS security group rules, whether automatically or manually, may introduce security risks. Before you use DTS to migrate data, understand and acknowledge the potential risks and take preventive measures, including but not limited to the following: enhance the security of your account and password, limit the ports that are exposed, authenticate API calls, regularly check the whitelist or ECS security group rules and remove unauthorized CIDR blocks, and connect the database to DTS by using Express Connect, VPN Gateway, or Smart Access Gateway.
    • After the DTS task is completed or released, we recommend that you manually remove the added CIDR blocks from the whitelist of the database instance or from the ECS security group rules.
  5. Configure the objects to be migrated and the advanced settings.
    • Basic Settings
      • Task Stages
        • To perform only full data migration, select Schema Migration and Full Data Migration.
        • To ensure service continuity during data migration, select Schema Migration, Full Data Migration, and Incremental Data Migration.
        Note If Incremental Data Migration is not selected, we recommend that you do not write data to the source instance during data migration. This ensures data consistency between the source and destination instances.
      • Processing Mode of Conflicting Tables
        • Precheck and Report Errors: checks whether the destination database contains tables that have the same names as tables in the source database. If the source and destination databases do not contain tables that have identical names, the precheck is passed. Otherwise, an error is returned during the precheck and the data migration task cannot be started.
          Note You can use the object name mapping feature to rename the tables that are migrated to the destination database. You can use this feature if the source and destination databases contain tables that have identical names and the tables in the destination database cannot be deleted or renamed. For more information, see Map object names.
        • Ignore Errors and Proceed: skips the precheck for identical table names in the source and destination databases.
          Warning If you select Ignore Errors and Proceed, data inconsistency may occur and your business may be exposed to potential risks.
          • If the source and destination databases have the same schema, DTS does not migrate data records that have the same primary keys as data records in the destination database.
          • If the source and destination databases have different schemas, only specific columns are migrated or the data migration task fails.
      • Data Format in Kafka: Select the format in which data records are stored in the Message Queue for Apache Kafka instance. A sketch that parses a Canal JSON record is shown at the end of this step.
        • If you select DTS Avro, data is parsed based on the schema definition of DTS Avro. For more information, visit GitHub.
        • If you select Canal Json, data is stored in the Canal JSON format. For more information about the related parameters and examples, see the "Canal JSON" section in Data formats of a Kafka cluster.
      • Policy for Shipping Data to Kafka Partitions: Select a migration policy based on your business requirements. For more information, see Specify the policy for migrating data to Kafka partitions.
      • Select Objects: Select one or more objects from the Source Objects section and click the rightwards arrow icon to add the objects to the Selected Objects section.
      • Rename Databases and Tables
        • To rename an object that you want to migrate to the destination instance, right-click the object in the Selected Objects section. For more information, see Map the name of a single object.
        • To rename multiple objects at a time, click Batch Edit in the upper-right corner of the Selected Objects section. For more information, see Map multiple object names at a time.
        Note If you use the object name mapping feature to rename an object, other objects that depend on the object may fail to be migrated.
      • Filter data: You can specify WHERE conditions to filter data. For more information, see Use SQL conditions to filter data.
      • Select the SQL operations to be migrated: In the Selected Objects section, right-click an object. In the dialog box that appears, select the DML operations that you want to migrate. For more information, see SQL operations that can be migrated.
    • Advanced Settings
      • Set Alerts: Specify whether to set alerts for the data migration task. If the task fails or the migration latency exceeds the threshold, the alert contacts receive notifications.
        • Select No if you do not want to set alerts.
        • Select Yes to set alerts. In this case, you must also specify the alert threshold and alert contacts.
      • Capitalization of Object Names in Destination Instance: Specify the capitalization of database names, table names, and column names in the destination instance. By default, DTS default policy is selected. You can select another option to make sure that the capitalization of object names is consistent with that of the source or destination database. For more information, see Specify the capitalization of object names in the destination instance.
      • Retry Time for Failed Connections: Specify the retry time range for failed connections. Valid values: 10 to 720. Unit: minutes. Default value: 720. We recommend that you set the retry time range to more than 30 minutes. If DTS reconnects to the source and destination databases within the specified time range, DTS resumes the data migration task. Otherwise, the data migration task fails.
        Note
        • If an instance serves as the source or destination database of multiple data migration tasks, the value that is set later takes precedence.
        • When DTS retries a connection, you are charged for the DTS instance. We recommend that you specify the retry time range based on your business requirements and release the DTS instance as soon as possible after the source and destination instances are released.
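    If you set the Data Format in Kafka parameter to Canal Json, each Kafka message value is a JSON document that describes one change event. The following minimal sketch parses such a record; the field values are illustrative, and the authoritative layout is described in the "Canal JSON" section in Data formats of a Kafka cluster.

    ```python
    import json

    # An illustrative Canal JSON change event (values are made up; see the
    # "Canal JSON" section in Data formats of a Kafka cluster for the layout).
    raw = b'''{
      "database": "testdb", "table": "orders", "type": "UPDATE",
      "isDdl": false, "es": 1620000000000, "ts": 1620000000123,
      "pkNames": ["id"],
      "data": [{"id": "1", "amount": "20"}],
      "old":  [{"amount": "10"}]
    }'''

    event = json.loads(raw)
    if not event["isDdl"]:
        # "data" holds the after-image rows; for UPDATE, "old" holds the
        # changed columns of the before-image, aligned by index with "data".
        for i, row in enumerate(event["data"]):
            before = event["old"][i] if event.get("old") else None
            print(event["type"], f'{event["database"]}.{event["table"]}', before, row)
    ```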
  6. Click Next: Save Task Settings and Precheck in the lower part of the page.
    Note
    • Before you can start the data migration task, DTS performs a precheck. You can start the data migration task only after the task passes the precheck.
    • If the task fails to pass the precheck, you can click the Info icon next to each failed item to view details.
      • You can troubleshoot the issues based on the causes and run a precheck again.
      • If you do not need to troubleshoot the issues, you can ignore failed items and run a precheck again.
  7. Wait until the Success Rate becomes 100%. Then, click Next: Purchase Instance.
  8. On the Purchase Instance page, specify the Instance Class parameter for the data migration instance. The following table describes the parameter.

    | Section | Parameter | Description |
    |---------|-----------|-------------|
    | Parameters | Instance Class | DTS provides various migration specifications. The migration speed varies based on the specifications that you select. Select a specification based on your business requirements. For more information, see Specifications of data migration instances. |

  9. Read and select Data Transmission Service (Pay-as-you-go) Service Terms.
  10. Click Buy and Start to start the data migration task. You can view the progress of the task in the task list.