This topic describes how to migrate data from a PolarDB for MySQL cluster to a Kafka cluster by using Data Transmission Service (DTS). The migration extends your message processing capabilities.

Prerequisites

  • The destination self-managed Kafka cluster or Message Queue for Apache Kafka instance is created.
    Note If a Message Queue for Apache Kafka instance is used as the destination instance, make sure that the instance is configured as a self-managed Kafka cluster and a topic is created to receive the data to be migrated. For information about how to create a topic, see Step 1: Create a topic.
  • The available storage space of the destination instance is larger than the total size of the data in the source PolarDB for MySQL cluster.

Limits

Note DTS does not migrate foreign keys in the source database to the destination database. Therefore, cascade update and cascade delete operations in the source database are not migrated to the destination database.
Limits on the source database
  • The server to which the source database belongs must have sufficient outbound bandwidth. Otherwise, the data migration speed decreases.
  • The tables to be migrated must have PRIMARY KEY or UNIQUE constraints and all fields must be unique. Otherwise, the destination database may contain duplicate data records.
  • If you select tables as the objects to be migrated and you want to edit the tables (such as renaming tables or columns) in the destination database, you can migrate up to 1,000 tables in a single data migration task. If you run a task to migrate more than 1,000 tables, a request error occurs. In this case, we recommend that you configure multiple tasks to migrate the tables in batches or configure a task to migrate the entire database.
  • Data on the read-only nodes of the source cluster cannot be migrated.
  • If you need to migrate incremental data, the binary logging feature must be enabled and the loose_polar_log_bin parameter must be set to on. Otherwise, error messages are returned during the precheck and the data migration task cannot be started. For more information about how to enable the binary logging feature and set the loose_polar_log_bin parameter, see Enable binary logging and Modify parameters. A verification sketch follows at the end of this list.
    Note
    • If you enable the binary logging feature for a PolarDB for MySQL cluster, you are charged for the storage space that is occupied by binary logs.
    • For an incremental data migration task, the binary logs of the source database must be stored for more than 24 hours. For a full data and incremental data migration task, the binary logs of the source database must be stored for at least seven days. Otherwise, DTS may fail to obtain the binary logs and the task may fail. In exceptional circumstances, data inconsistency or loss may occur. After full data migration is complete, you can set the retention period to more than 24 hours. Make sure that you set the retention period of binary logs based on the preceding requirements. Otherwise, the Service Level Agreement (SLA) of DTS does not guarantee service reliability or performance.

  • Limits on operations to perform on the source database:
    • During schema migration and full data migration, do not perform DDL operations to change the schemas of databases or tables. Otherwise, the data migration task fails.
    • If you perform only full data migration, do not write data to the source database during data migration. Otherwise, data will be inconsistent between the source and destination databases. To ensure data consistency, we recommend that you select Schema Migration, Full Data Migration, and Incremental Data Migration as the migration types.
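  Before you configure the task, you can check from any MySQL client that binary logging is active on the source cluster. The following minimal Python sketch uses the open-source PyMySQL driver; the endpoint and credentials are placeholders, and it assumes that PolarDB for MySQL exposes the binlog status through the MySQL-compatible log_bin variable.

    # pip install pymysql
    import pymysql

    # Placeholder endpoint and credentials for the source PolarDB for MySQL cluster.
    conn = pymysql.connect(host="polardb-cluster-endpoint", port=3306,
                           user="dts_user", password="dts_password")
    try:
        with conn.cursor() as cur:
            # PolarDB for MySQL is MySQL-compatible, so the binlog status is
            # assumed to appear in the standard log_bin variable.
            cur.execute("SHOW VARIABLES LIKE 'log_bin'")
            name, value = cur.fetchone()
            print(name, "=", value)  # expect ON before you start incremental migration
    finally:
        conn.close()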
Other limits
  • We recommend that you do not use tools such as pt-online-schema-change to perform DDL operations on objects during data migration. Otherwise, data migration may fail.
  • DTS uses the ROUND(COLUMN, PRECISION) function to retrieve values from columns of the FLOAT or DOUBLE data type. If you do not specify a precision, DTS sets the precision for the FLOAT data type to 38 digits and the precision for the DOUBLE data type to 308 digits. You must check whether the precision settings meet your business requirements. A consumer-side illustration follows this list.
  • Before you migrate data, evaluate the impact of data migration on the performance of the source and destination databases. We recommend that you migrate data during off-peak hours. During full data migration, DTS uses the read and write resources of the source and destination databases. This may increase the loads of the database servers.
  • DTS attempts to resume data migration tasks that failed within the last seven days. Before you switch workloads to the destination database, stop or release the failed tasks. You can also execute the REVOKE statement to revoke the write permissions from the accounts that are used by DTS to access the destination database. Otherwise, the data in the source database will overwrite the data in the destination database after the task is resumed.
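  Because FLOAT and DOUBLE values pass through ROUND() during migration, downstream consumers should avoid exact-equality comparisons against source values. A minimal Python sketch of a tolerance-based comparison; the numbers are illustrative only:

    import math

    # A source DOUBLE value and the value read back from a migrated record.
    # The exact rounding depends on the column type and the precision DTS applies.
    source_value = 3.141592653589793
    migrated_value = 3.1415926535897936

    # Compare with a relative tolerance instead of exact equality.
    if math.isclose(source_value, migrated_value, rel_tol=1e-9):
        print("values match within tolerance")
    else:
        print("values diverge beyond tolerance")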
Usage notes
  • DTS executes the CREATE DATABASE IF NOT EXISTS `test` statement in the source database as scheduled to move forward the binary log file position.

  • During full data migration, concurrent INSERT operations cause fragmentation in the tables of the destination database. After full data migration is complete, the size of used tablespace of the destination database is larger than that of the source database.

Billing

Migration type | Instance configuration fee | Internet traffic fee
Schema migration and full data migration | Free of charge. | Charged only when data is migrated from Alibaba Cloud over the Internet. For more information, see Billing overview.
Incremental data migration | Charged. For more information, see Billing overview. | Charged only when data is migrated from Alibaba Cloud over the Internet. For more information, see Billing overview.

Migration types

  • Schema migration

    Data Transmission Service (DTS) migrates the schemas of objects from the source database to the destination database.

  • Full data migration

    DTS migrates the existing data of objects from the source database to the destination database.

  • Incremental data migration

    After full data migration is complete, DTS migrates incremental data from the source database to the destination database. Incremental data migration allows data to be migrated smoothly without interrupting services of self-managed applications during data migration.

SQL operations that can be incrementally migrated

Operation type | SQL statement
DML | INSERT, UPDATE, and DELETE
DDL | CREATE TABLE, ALTER TABLE, DROP TABLE, RENAME TABLE, and TRUNCATE TABLE

Permissions required for database accounts

Database | Required permission | References
PolarDB for MySQL cluster | Read permissions on the objects to be migrated | Create a database account
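To confirm before the precheck that the account has the required read permissions, you can inspect its grants. A short Python sketch with the same hypothetical PyMySQL connection details as above; read access (SELECT) on the objects to be migrated should appear in the output.

  # pip install pymysql
  import pymysql

  # Placeholder endpoint and the account that the DTS task will use.
  conn = pymysql.connect(host="polardb-cluster-endpoint", port=3306,
                         user="dts_user", password="dts_password")
  try:
      with conn.cursor() as cur:
          # List the privileges granted to the connected account.
          cur.execute("SHOW GRANTS FOR CURRENT_USER()")
          for (grant,) in cur.fetchall():
              print(grant)
  finally:
      conn.close()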

Procedure

  1. Go to the Data Migration Tasks page.
    1. Log on to the Data Management (DMS) console.
    2. In the top navigation bar, click DTS.
    3. In the left-side navigation pane, choose DTS (DTS) > Data Migration.
  2. From the drop-down list next to Data Migration Tasks, select the region in which the data migration instance resides.
    Note If you use the new DTS console, you must select the region in which the data migration instance resides in the upper-left corner.
  3. Click Create Task. On the page that appears, configure the source and destination databases.
    Task Name: The name of the task. DTS automatically generates a task name. We recommend that you specify an informative name to identify the task. You do not need to specify a unique task name.

    Source Database

    Select Instance: Select whether to use an existing instance.
    • If you select an existing instance, DTS automatically applies the parameter settings of the instance. You do not need to configure the corresponding parameters again.
    • If you do not use an existing instance, you must configure parameters for the source database.
    Database Type: The type of the source database. Select PolarDB for MySQL.
    Access Method: The access method of the source database. Select Alibaba Cloud Instance.
    Instance Region: The region in which the source PolarDB for MySQL cluster resides.
    Replicate Data Across Alibaba Cloud Accounts: Specifies whether to migrate data across Alibaba Cloud accounts. In this example, No is selected.
    PolarDB Cluster ID: The ID of the source PolarDB for MySQL cluster.
    Database Account: The database account of the source PolarDB for MySQL cluster. For information about the permissions that are required for the account, see the Permissions required for database accounts section of this topic.
    Database Password: The password of the database account.

    Destination Database

    Select Instance: Select whether to use an existing instance.
    • If you select an existing instance, DTS automatically applies the parameter settings of the instance. You do not need to configure the corresponding parameters again.
    • If you do not use an existing instance, you must configure parameters for the destination database.
    Database Type: The type of the destination database. Select Kafka.
    Access Method: The access method of the destination database. In this example, Self-managed Database on ECS is selected.
    Note
    • If your destination database is a self-managed database, you must deploy the network environment for the database. For more information, see Preparation overview.
    • DTS does not allow you to directly configure the access method of a Message Queue for Apache Kafka instance. If you use a Message Queue for Apache Kafka instance as the destination instance, select Express Connect, VPN Gateway, or Smart Access Gateway and configure the instance as a self-managed Kafka cluster.
    Instance Region: The region in which the destination Kafka cluster resides.
    ECS Instance ID: The ID of the Elastic Compute Service (ECS) instance on which the destination Kafka cluster is deployed.
    Port Number: The service port number of the destination Kafka cluster. Default value: 9092.
    Database Account: The username that is used to log on to the destination Kafka cluster. If no authentication is enabled for the Kafka cluster, you do not need to enter the username.
    Database Password: The password of the username. If no authentication is enabled for the Kafka cluster, you do not need to enter the password.
    Kafka Version: The version of the destination Kafka cluster.
    Note If the version of the self-managed Kafka cluster is 1.0 or later, you can select Later Than 1.0.
    Encryption: Specifies whether to encrypt the connection. Select Non-encrypted or SCRAM-SHA-256 based on your business and security requirements.
    Topic: Select a topic from the drop-down list. The migrated data is written to this topic.
    Topic That Stores DDL Information: Select a topic from the drop-down list. The topic is used to store the DDL information. If you do not set this parameter, the DDL information is stored in the topic that is specified by the Topic parameter.
    Use Kafka Schema Registry: Specifies whether to use Kafka Schema Registry, which provides a serving layer for your metadata and a RESTful API for storing and retrieving your Avro schemas. Valid values:
    • No: does not use Kafka Schema Registry.
    • Yes: uses Kafka Schema Registry. In this case, you must enter the URL or IP address that is registered in Kafka Schema Registry for your Avro schemas.
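    After you configure these parameters, you can optionally verify outside of DTS that the destination Kafka cluster accepts connections with the same values. A minimal Python sketch using the open-source kafka-python client; the broker address, credentials, and security settings are placeholders that should mirror your console configuration. Omit the SASL settings if no authentication is enabled.

      # pip install kafka-python
      from kafka import KafkaConsumer

      # Placeholder values; mirror the ECS address, Port Number, Database Account,
      # Database Password, and Encryption settings configured above.
      consumer = KafkaConsumer(
          bootstrap_servers="ecs-kafka-host:9092",
          security_protocol="SASL_PLAINTEXT",
          sasl_mechanism="SCRAM-SHA-256",
          sasl_plain_username="dts_user",
          sasl_plain_password="dts_password",
      )

      # If the settings are valid, this lists the cluster's topics, including
      # the topic created to receive the migrated data.
      print(consumer.topics())
      consumer.close()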
  4. In the lower part of the page, click Test Connectivity and Proceed.
    • If the source or destination database is an Alibaba Cloud database instance, such as an ApsaraDB RDS for MySQL or ApsaraDB for MongoDB instance, DTS automatically adds the CIDR blocks of DTS servers to the whitelist of the instance.
    • If the source or destination database is a self-managed database hosted on an Elastic Compute Service (ECS) instance, DTS automatically adds the CIDR blocks of DTS servers to the security group rules of the ECS instance, and you must make sure that the ECS instance can access the database.
    • If the source or destination database is a self-managed database that is deployed in a data center or provided by a third-party cloud service provider, you must manually add the CIDR blocks of DTS servers to the whitelist of the database to allow DTS to access the database.
    For more information about the CIDR blocks of DTS servers, see Add the CIDR blocks of DTS servers to the security settings of on-premises databases.
    Warning If the CIDR blocks of DTS servers are automatically or manually added to the whitelist of a database or an instance, or to the security group rules of an ECS instance, security risks may arise. Therefore, before you use DTS to migrate data, you must understand and acknowledge the potential risks and take preventive measures, including but not limited to the following measures: enhance the security of your username and password, limit the ports that are exposed, authenticate API calls, regularly check the whitelist or ECS security group rules and forbid unauthorized CIDR blocks, or connect the database to DTS by using Express Connect, VPN Gateway, or Smart Access Gateway.
  5. Configure objects to migrate and advanced settings.
    Migration Type
    • To perform only full data migration, select Schema Migration and Full Data Migration.
    • To ensure service continuity during data migration, select Schema Migration, Full Data Migration, and Incremental Data Migration.
    Note If Incremental Data Migration is not selected, we recommend that you do not write data to the source instance during data migration. This ensures data consistency between the source and destination instances.
    Processing Mode of Conflicting Tables
    • Precheck and Report Errors: checks whether the destination instance contains tables that have the same names as tables in the source database. If the source database and destination instance do not contain tables that have identical names, the precheck is passed. Otherwise, an error is returned during the precheck and the data migration task cannot be started.

      Note You can use the object name mapping feature to rename the tables that are migrated to the destination instance. If the source database and destination instance contain identical table names and the tables in the destination instance cannot be deleted or renamed, you can use this feature. For more information, see Map object names.
    • Clear Destination Table: clears data from destination tables. Proceed with caution.
    • Ignore Errors and Proceed: skips the precheck for identical table names in the source database and destination instance.
      Warning If you select Ignore Errors and Proceed, data inconsistency may occur and your business may be exposed to potential risks.
      • If the source and destination databases have the same schemas, and a data record has the same primary key value as an existing data record in the destination database:
        • During full data migration, DTS does not migrate the data record to the destination database. The existing data record in the destination database is retained.
        • During incremental data migration, DTS migrates the data record to the destination database. The existing data record in the destination database is overwritten.
      • If the source and destination databases have different schemas, data may fail to be initialized. In this case, only some columns are migrated, or the data migration task fails.
    Data Format in Kafka: The format in which data records are stored in the Message Queue for Apache Kafka instance. A consumer sketch for the Canal JSON format follows this step.
    • If you select DTS Avro, data is parsed based on the schema definition of DTS Avro. For more information, see GitHub.
    • If you select Canal Json, data is stored in the Canal JSON format. For more information about the related parameters and examples, see the "Canal JSON" section of the Data formats of a Kafka cluster topic.
    Policy for Shipping Data to Kafka Partitions: Select a policy for migrating data to Kafka partitions based on your business requirements. For more information, see Specify the policy for migrating data to Kafka partitions.
    Capitalization of Object Names in Destination Instance: The capitalization of database names, table names, and column names in the destination instance. By default, DTS default policy is selected. You can select another option to make sure that the capitalization of object names is consistent with the default capitalization of object names in the source or destination database. For more information, see Specify the capitalization of object names in the destination instance.

    Source Objects: Select one or more objects from the Source Objects section. Click the Rightwards arrow icon to add the objects to the Selected Objects section.

    Note You can select only databases as the objects to be migrated.
    Selected Objects
    • To rename an object that you want to migrate to the destination instance, right-click the object in the Selected Objects section. For more information, see Map the name of a single object.
    • To rename multiple objects at a time, click Batch Edit in the upper-right corner of the Selected Objects section. For more information, see Map multiple object names at a time.
    Note
    • If you use the object name mapping feature to rename an object, other objects that are dependent on the object may fail to be migrated.
    • To specify WHERE conditions to filter data, right-click an object in the Selected Objects section. In the dialog box that appears, specify the conditions. For more information, see Use SQL conditions to filter data.
    • To select the SQL operations performed on a specific database or table, right-click an object in the Selected Objects section. In the dialog box that appears, select the SQL operations that you want to migrate. For more information about the SQL operations that can be migrated, see SQL operations that can be incrementally migrated.
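    If you set the Data Format in Kafka parameter to Canal Json, downstream applications consume the migrated records as JSON documents. The following hedged Python sketch (kafka-python again, with placeholder broker, credentials, and topic name) shows one way a consumer might separate DML and DDL records. The field names (isDdl, type, database, table, data, sql) follow the commonly used Canal JSON layout; verify them against the "Canal JSON" section of the Data formats of a Kafka cluster topic.

      # pip install kafka-python
      import json
      from kafka import KafkaConsumer

      # Placeholder values; reuse the destination settings from the DTS task.
      consumer = KafkaConsumer(
          "dts-migration-topic",              # the topic selected in the Topic parameter
          bootstrap_servers="ecs-kafka-host:9092",
          security_protocol="SASL_PLAINTEXT",
          sasl_mechanism="SCRAM-SHA-256",
          sasl_plain_username="dts_user",
          sasl_plain_password="dts_password",
          auto_offset_reset="earliest",
      )

      for message in consumer:
          record = json.loads(message.value)
          if record.get("isDdl"):
              # DDL records carry the migrated statement, such as ALTER TABLE.
              print("DDL:", record.get("sql"))
          else:
              # DML records carry the operation type and the changed rows.
              print(record.get("type"), record.get("database"),
                    record.get("table"), record.get("data"))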
  6. Click Next: Advanced Settings.
    Set Alerts: Specifies whether to set alerts for the data migration task. If the task fails or the migration latency exceeds the threshold, the alert contacts will receive notifications.
    Retry Time for Failed Connections: The retry time range for failed connections. If the source or destination database fails to be connected after the data migration task is started, DTS immediately retries a connection within the specified time range. Valid values: 10 to 1440. Unit: minutes. Default value: 720. We recommend that you set the parameter to a value greater than 30. If DTS reconnects to the source and destination databases within the specified time range, DTS resumes the data migration task. Otherwise, the data migration task fails.
    Note
    • If you set different retry time ranges for multiple data migration tasks that have the same source or destination database, the shortest retry time range that is set takes precedence.
    • When DTS retries a connection, you are charged for the DTS instance. We recommend that you specify the retry time range based on your business requirements and release the DTS instance at your earliest opportunity after the source and destination instances are released.
    Configure ETL: Specifies whether to configure the extract, transform, and load (ETL) feature. For more information, see What is ETL?
    Whether to delete SQL operations on heartbeat tables of forward and reverse tasks: Specifies whether DTS writes SQL operations on heartbeat tables to the source database while the DTS instance is running.
    • Yes: DTS does not write SQL operations on heartbeat tables to the source database. In this case, the DTS instance may display latency.
    • No: DTS writes SQL operations on heartbeat tables to the source database. In this case, features such as physical backup and cloning of the source database may be affected.
  7. In the lower part of the page, click Next: Save Task Settings and Precheck.
    Note
    • Before you can start the data migration task, DTS performs a precheck. You can start the data migration task only after the task passes the precheck.
    • If the task fails to pass the precheck, click View Details next to each failed item. After you troubleshoot the issues based on the causes, run a precheck again.
    • If an alert is triggered for an item during the precheck:
      • If an alert item cannot be ignored, click View Details next to the failed item and troubleshoot the issues. Then, run a precheck again.
      • If an alert item can be ignored, click Confirm Alert Details. In the View Details dialog box, click Ignore. In the message that appears, click OK. Then, click Precheck Again to run a precheck again. If you ignore the alert item, data inconsistency may occur, and your business may be exposed to potential risks.
  8. Wait until the Success Rate value becomes 100%. Then, click Next: Purchase Instance.
  9. On the Purchase Instance page, specify the Instance Class parameter for the data migration instance. The following table describes the parameter.
    Instance Class: DTS provides several instance classes that vary in migration speed. You can select an instance class based on your business scenario. For more information, see Specifications of data migration instances.

  10. Read and select the check box to agree to Data Transmission Service (Pay-as-you-go) Service Terms.
  11. Click Buy and Start to start the data migration task. You can view the progress of the task in the task list.