This topic describes how to synchronize incremental data from a PolarDB-X V2.0 instance to a Message Queue for Apache Kafka instance in real time by using Data Transmission Service (DTS).
- The source PolarDB-X V2.0 instance is created and compatible with MySQL 5.7.
- The version of the destination Message Queue for Apache Kafka instance is supported by DTS. For more information, see Overview of data synchronization scenarios.
- The available storage space of the destination Message Queue for Apache Kafka instance is larger than the total size of the data in the source PolarDB-X V2.0 instance.
- In the destination Message Queue for Apache Kafka instance, a topic is created to receive the data to synchronize. For more information, see Step 1: Create a topic.
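Managed Message Queue for Apache Kafka instances are typically administered in the console, as described in the topic referenced above. For deployments that also allow topic creation over the open-source AdminClient API, the following minimal Java sketch shows the equivalent operation; the endpoint, topic name, partition count, and replication factor are placeholder assumptions that you must adjust to your instance.

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateTopicSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Placeholder: replace with an endpoint of your Kafka instance.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.0.10:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // Placeholder sizing: 12 partitions, replication factor 3.
            NewTopic topic = new NewTopic("dts-sync-topic", 12, (short) 3);
            admin.createTopics(Collections.singleton(topic)).all().get();
            System.out.println("Topic created: " + topic.name());
        }
    }
}
```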
SQL operations that can be synchronized

| Operation type | SQL statements |
| -------------- | -------------- |
| DML | INSERT, UPDATE, and DELETE |
- Go to the Data Synchronization page of the new DTS console.
- In the upper-left corner of the page, select the region in which the data synchronization instance resides.
- Click Create Task. On the page that appears, configure the source and destination databases.
  Warning: After you select the source and destination instances, we recommend that you read the limits that are displayed in the upper part of the page. This helps you create and run the data synchronization task successfully.
- N/A
  - Task Name: DTS automatically generates a task name. We recommend that you specify an informative name to identify the task. The task name does not need to be unique.
- Source Database
  - Database Type: Select PolarDB-X 2.0.
  - Access Method: Select Cloud Instance.
  - Instance Region: The region where the source PolarDB-X V2.0 instance resides.
  - Instance ID: The ID of the source PolarDB-X V2.0 instance.
  - Database Account: The database account of the source PolarDB-X V2.0 instance. The account must have the SELECT permission on the objects to synchronize and the REPLICATION SLAVE and REPLICATION CLIENT permissions. DTS automatically grants these permissions to the account.
  - Database Password: The password of the database account.
- Destination Database
  - Database Type: Select Kafka.
  - Access Method: Select Express Connect, VPN Gateway, or Smart Access Gateway.
    Note: Message Queue for Apache Kafka cannot be selected as the database type. To configure data synchronization, you can treat the Message Queue for Apache Kafka instance as a self-managed Kafka database.
  - Instance Region: The region where the destination Message Queue for Apache Kafka instance resides.
  - Connected VPC: The ID of the virtual private cloud (VPC) to which the destination Message Queue for Apache Kafka instance belongs. To obtain the VPC ID, log on to the Message Queue for Apache Kafka console and go to the Instance Details page of the instance. The VPC ID is displayed in the Basic Information section.
  - Hostname or IP Address: An IP address of the Message Queue for Apache Kafka instance. To obtain an IP address, log on to the Message Queue for Apache Kafka console and go to the Instance Details page of the instance. An IP address is displayed in the Endpoint Information section.
  - Port Number: The service port number of the Message Queue for Apache Kafka instance. Default value: 9092.
  - Database Account: The database account of the destination Message Queue for Apache Kafka instance.
    Note: If the instance type of the Message Queue for Apache Kafka instance is VPC Instance, you do not need to specify the Database Account or Database Password parameter.
  - Database Password: The password of the database account.
  - Kafka Version: The version of the destination Message Queue for Apache Kafka instance.
  - Encryption: Specifies whether to encrypt the connection. Select Non-encrypted or SCRAM-SHA-256 based on your business and security requirements.
  - Topic: The topic that receives the synchronized data. Select a topic from the drop-down list.
  - Topic That Stores DDL Information: The topic that stores the DDL information. Select a topic from the drop-down list. If you do not specify this parameter, the DDL information is stored in the topic that is specified by the Topic parameter.
  - Use Kafka Schema Registry: Specifies whether to use Kafka Schema Registry, which provides a serving layer for your metadata and a RESTful API for storing and retrieving your Avro schemas. Valid values:
    - No: does not use Kafka Schema Registry.
    - Yes: uses Kafka Schema Registry. In this case, you must enter the URL or IP address that is registered in Kafka Schema Registry for your Avro schemas. A consumer configuration sketch follows this list.
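If you set Use Kafka Schema Registry to Yes, a downstream consumer must resolve the Avro schemas from the same registry URL. The following Java sketch shows one common way to wire this up using the open-source Kafka client and the Confluent kafka-avro-serializer dependency; the endpoint, group ID, and registry URL are placeholder assumptions. The commented SASL lines apply only if you selected SCRAM-SHA-256 for the Encryption parameter.

```java
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class SchemaRegistryConsumerSketch {
    public static KafkaConsumer<String, Object> create() {
        Properties props = new Properties();
        // Placeholders: replace with your instance endpoint and group ID.
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.0.10:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "dts-avro-demo");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // Requires the io.confluent:kafka-avro-serializer dependency; the
        // deserializer fetches the writer schema from the registry.
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "io.confluent.kafka.serializers.KafkaAvroDeserializer");
        // The same URL that you enter for the Use Kafka Schema Registry parameter.
        props.put("schema.registry.url", "http://192.168.0.20:8081");
        // If you selected SCRAM-SHA-256 for the Encryption parameter, SASL
        // settings along these lines are also required (placeholder values):
        // props.put("security.protocol", "SASL_SSL");
        // props.put("sasl.mechanism", "SCRAM-SHA-256");
        // props.put("sasl.jaas.config",
        //         "org.apache.kafka.common.security.scram.ScramLoginModule required "
        //         + "username=\"demo-user\" password=\"demo-pass\";");
        return new KafkaConsumer<>(props);
    }
}
```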
- In the lower part of the page, click Test Connectivity and Proceed.
- Select objects for the task and configure advanced settings.
- Basic Settings
  - Task Stages: Incremental Data Synchronization is selected by default. You must also select Schema Synchronization and Full Data Synchronization. After the precheck is complete, DTS synchronizes the historical data of the selected objects from the source instance to the destination instance. The historical data is the basis for subsequent incremental synchronization.
  - Processing Mode of Conflicting Tables:
    - Precheck and Report Errors: checks whether the destination database contains tables that have the same names as tables in the source database. If the source and destination databases do not contain tables that have identical names, the precheck is passed. Otherwise, an error is returned during the precheck and the data synchronization task cannot be started.
      Note: If the source and destination databases contain tables that have identical names and the tables in the destination database cannot be deleted or renamed, you can use the object name mapping feature to rename the tables that are synchronized to the destination database. For more information, see Map object names.
    - Ignore Errors and Proceed: skips the precheck for identical table names in the source and destination databases.
      Warning: If you select Ignore Errors and Proceed, data consistency is not guaranteed and your business may be exposed to potential risks.
      - If the source and destination databases have the same schema, and a data record in the source database has the same primary key as an existing data record in the destination database:
        - During full data synchronization, DTS does not synchronize the data record to the destination database. The existing data record in the destination database is retained.
        - During incremental data synchronization, DTS synchronizes the data record to the destination database. The existing data record in the destination database is overwritten.
      - If the source and destination databases have different schemas, data may fail to be initialized. In this case, only specific columns are synchronized, or the data synchronization task fails.
  - Data Format in Kafka: The format in which data records are stored in the Message Queue for Apache Kafka instance.
    - If you select DTS Avro, data is parsed based on the schema definition of DTS Avro. For more information about the schema definition, visit GitHub.
    - If you select Canal Json, data is stored in the Canal JSON format. For more information about the related parameters and examples, see the "Canal JSON" section of the Data formats of a Kafka cluster topic. A consumer sketch that parses this format is provided after this list.
  - Policy for Shipping Data to Kafka Partitions: The policy that is used to ship the synchronized data to Kafka partitions. Select a policy based on your business requirements. For more information, see Specify the policy for migrating data to Kafka partitions.
  - Select Objects: Select one or more objects from the Source Objects section and click the icon to move the objects to the Selected Objects section.
    Note: You can select columns, tables, or databases as the objects to synchronize.
  - Rename Databases and Tables:
    - To rename an object that you want to synchronize to the destination instance, right-click the object in the Selected Objects section. For more information, see Map the name of a single object.
    - To rename multiple objects at a time, click Batch Edit in the upper-right corner of the Selected Objects section. For more information, see Map multiple object names at a time.
  - Filter Data: You can specify WHERE conditions to filter data. For more information, see Use SQL conditions to filter data.
  - Select the SQL Operations to Be Synchronized: In the Selected Objects section, right-click an object. In the dialog box that appears, select the DML operations that you want to synchronize. For more information, see SQL operations that can be synchronized.
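  To illustrate what a downstream application receives when Canal Json is selected for the Data Format in Kafka parameter, the following Java sketch polls the destination topic and reads the standard Canal JSON fields (type, database, table, data, and old). It uses the open-source Kafka client and Jackson; the endpoint, group ID, and topic name are placeholder assumptions.

  ```java
  import java.time.Duration;
  import java.util.Collections;
  import java.util.Properties;
  import com.fasterxml.jackson.databind.JsonNode;
  import com.fasterxml.jackson.databind.ObjectMapper;
  import org.apache.kafka.clients.consumer.ConsumerConfig;
  import org.apache.kafka.clients.consumer.ConsumerRecord;
  import org.apache.kafka.clients.consumer.ConsumerRecords;
  import org.apache.kafka.clients.consumer.KafkaConsumer;
  import org.apache.kafka.common.serialization.StringDeserializer;

  public class CanalJsonConsumerSketch {
      public static void main(String[] args) throws Exception {
          Properties props = new Properties();
          // Placeholders: replace with your instance endpoint and group ID.
          props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.0.10:9092");
          props.put(ConsumerConfig.GROUP_ID_CONFIG, "dts-canal-demo");
          props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
          props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
          props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

          ObjectMapper mapper = new ObjectMapper();
          try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
              // Placeholder: the topic specified by the Topic parameter.
              consumer.subscribe(Collections.singleton("dts-sync-topic"));
              while (true) {
                  ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                  for (ConsumerRecord<String, String> record : records) {
                      JsonNode msg = mapper.readTree(record.value());
                      // Canal JSON carries the DML type and the source table of each change.
                      String type = msg.path("type").asText();   // INSERT, UPDATE, or DELETE
                      String table = msg.path("database").asText() + "." + msg.path("table").asText();
                      // "data" holds the row images after the change; for UPDATE,
                      // "old" holds the previous values of the changed columns.
                      // record.partition() reflects the partition-shipping policy you selected.
                      System.out.printf("%s on %s (partition %d): %s%n",
                              type, table, record.partition(), msg.path("data"));
                  }
              }
          }
      }
  }
  ```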
- Advanced Settings
  - Set Alerts: Specifies whether to configure alerting for the data synchronization task. If the task fails or the synchronization latency exceeds the specified threshold, the alert contacts receive notifications.
    - Select No if you do not want to configure alerting.
    - Select Yes to configure alerting. In this case, you must also specify the alert threshold and alert contacts.
  - Capitalization of Object Names in Destination Instance: The capitalization of database names, table names, and column names in the destination instance. By default, DTS default policy is selected. You can select another option to make sure that the capitalization of object names is consistent with that of the source or destination database. For more information, see Specify the capitalization of object names in the destination instance.
  - Retry Time for Failed Connection: The retry time range for failed connections. Valid values: 10 to 1440. Unit: minutes. Default value: 120. We recommend that you set this parameter to a value greater than 30 minutes. If DTS reconnects to the source and destination databases within the specified time range, DTS resumes the data synchronization task. Otherwise, the task fails.
    Note:
    - If multiple DTS instances share the same source or destination database, the smallest value takes effect. For example, if the retry time is set to 30 minutes for Instance A and 60 minutes for Instance B, DTS retries failed connections at an interval of 30 minutes.
    - When DTS retries a connection, you are charged for the DTS instance. We recommend that you specify the retry time range based on your business requirements. You can also release the DTS instance at the earliest opportunity after the source and destination instances are released.
- Click Next: Save Task Settings and Precheck in the lower part of the page.
  Note:
  - Before you can start the data synchronization task, DTS performs a precheck. You can start the task only after it passes the precheck.
  - If the task fails to pass the precheck, click the icon next to each failed item to view the details. Troubleshoot the issues based on the causes and run the precheck again. If you do not need to troubleshoot the issues, you can ignore the failed items and run the precheck again.
- Wait until the Success Rate value becomes 100%. Then, click Next: Purchase Instance.
- On the Purchase Instance page, specify the Billing Method and Instance Class parameters for the data synchronization instance.
  - Billing Method:
    - Subscription: You pay for the instance when you create it. The subscription billing method is more cost-effective than the pay-as-you-go billing method for long-term use, and longer subscription periods provide larger discounts.
    - Pay-as-you-go: A pay-as-you-go instance is billed on an hourly basis. The pay-as-you-go billing method is suitable for short-term use. If you no longer need a pay-as-you-go instance, you can release it to reduce costs.
  - Instance Class: DTS provides instance classes that vary in synchronization speed. Select an instance class based on your business scenario. For more information, see Specifications of data synchronization instances.
  - Subscription Length: The subscription duration and the number of instances that you want to create. The subscription duration can be 1 to 9 months, or 1 to 3 years.
    Note: This parameter is available only if you select the subscription billing method.
- Read and select the check box for Data Transmission Service (Pay-as-you-go) Service Terms.
- Click Buy and Start to start the data synchronization task. You can view the progress of the task in the task list.