This topic describes how to synchronize incremental data from a PolarDB for MySQL cluster to a DataHub project in real time by using Data Transmission Service (DTS). After you synchronize data, you can use big data services such as Realtime Compute for Apache Flink to analyze the data in real time.

Prerequisites

  • A source PolarDB for MySQL cluster is created.
  • A DataHub project is created to receive the synchronized data.

Limits

Note DTS does not synchronize foreign keys in the source database to the destination database. Therefore, cascade update and delete operations on the source database are not synchronized to the destination database.
Limits on the source database
  • The tables to be synchronized must have PRIMARY KEY or UNIQUE constraints, and all fields must be unique. Otherwise, the destination may contain duplicate data records.
  • If you select tables as the objects to be synchronized and you need to edit the tables, such as renaming tables or columns, in the destination database, you can synchronize up to 1,000 tables in a single data synchronization task. If you run a task to synchronize more than 1,000 tables, a request error occurs. In this case, we recommend that you configure multiple tasks to synchronize the tables or configure a task to synchronize the entire database.
  • If you need to synchronize incremental data, the binary logging feature must be enabled and the loose_polar_log_bin parameter must be set to on. Otherwise, error messages are returned during the precheck and the data synchronization task cannot be started. For more information about how to enable the binary logging feature and set the loose_polar_log_bin parameter, see Enable binary logging and Modify parameters. A quick verification sketch is provided after these limits.
    Note
    • If you enable the binary logging feature for a PolarDB for MySQL cluster, you are charged for the storage space that is occupied by binary logs.
    • For an incremental data synchronization task, the binary logs of the source database must be retained for at least 24 hours. Otherwise, DTS may fail to obtain the binary logs and the task may fail; in extreme cases, data inconsistency or loss may occur. Make sure that you set the retention period of binary logs based on the preceding requirements. Otherwise, the service reliability and performance stated in the SLA of DTS cannot be guaranteed.

Other limits
  • Initial full data synchronization is not supported. DTS does not synchronize the historical data of objects from the source PolarDB for MySQL cluster to the destination DataHub project.
  • Only tables and databases can be selected as objects to be synchronized.
  • Data cannot be synchronized from the read-only nodes of the source PolarDB for MySQL cluster.
  • We recommend that you do not use tools such as pt-online-schema-change to perform DDL operations on source tables during data synchronization. Otherwise, data synchronization may fail.
  • If you use only DTS to write data to the destination database, you can use Data Management (DMS) to perform online DDL operations on source tables during data synchronization. For more information, see Perform lock-free operations.
  • During data synchronization, we recommend that you use only DTS to write data to the destination database. This prevents data inconsistency between the source and destination databases. If you use tools other than DTS to write data to the destination database, data loss may occur in the destination database when you use DMS to perform online DDL operations.
Usage notes
  • DTS executes the CREATE DATABASE IF NOT EXISTS `test` statement in the source database as scheduled to move forward the binary log file position.
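
To verify these requirements before you configure the task, you can run a quick check against the source cluster. The following Python sketch assumes the pymysql package; the endpoint, account, password, and database name are placeholders. It checks whether binary logging is enabled and lists tables that lack PRIMARY KEY or UNIQUE constraints.

    # Pre-configuration check (sketch). The endpoint, account, password, and
    # database name are placeholders; replace them with your own values.
    import pymysql

    conn = pymysql.connect(
        host="pc-xxxxxxxx.mysql.polardb.rds.aliyuncs.com",  # hypothetical endpoint
        user="dts_user",
        password="****",
    )
    with conn.cursor() as cur:
        # Binary logging must be enabled (loose_polar_log_bin set to on).
        cur.execute("SHOW VARIABLES LIKE 'log_bin'")
        name, value = cur.fetchone()
        print(f"{name} = {value}")  # expect ON

        # Tables without PRIMARY KEY or UNIQUE constraints may produce
        # duplicate records in the destination.
        cur.execute(
            """
            SELECT t.table_name
            FROM information_schema.tables t
            LEFT JOIN information_schema.table_constraints c
              ON c.table_schema = t.table_schema
             AND c.table_name = t.table_name
             AND c.constraint_type IN ('PRIMARY KEY', 'UNIQUE')
            WHERE t.table_schema = %s
              AND t.table_type = 'BASE TABLE'
              AND c.constraint_name IS NULL
            """,
            ("mydb",),  # hypothetical database name
        )
        for (table_name,) in cur.fetchall():
            print("no PRIMARY KEY or UNIQUE constraint:", table_name)
    conn.close()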

Billing

  • Schema synchronization and full data synchronization: Free of charge.
  • Incremental data synchronization: Charged. For more information, see Billing overview.

Supported synchronization topologies

  • One-way one-to-one synchronization
  • One-way one-to-many synchronization
  • One-way many-to-one synchronization
  • One-way cascade synchronization
For more information about the synchronization topologies that are supported by DTS, see Synchronization topologies.

SQL operations that can be synchronized

INSERT, UPDATE, and DELETE

Permissions required for database accounts

The database account of the source PolarDB for MySQL cluster must have at least the read permissions on the objects to be synchronized.
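
For example, you can create a dedicated read-only account for the synchronization task. The following Python sketch assumes the pymysql package; the endpoint, account names, passwords, and database name are placeholders, and SELECT on the objects to be synchronized is granted as the minimum read permission.

    # Sketch: create a dedicated, read-only account for DTS on the source
    # cluster. Endpoint, account names, passwords, and database name are
    # placeholders.
    import pymysql

    conn = pymysql.connect(
        host="pc-xxxxxxxx.mysql.polardb.rds.aliyuncs.com",  # hypothetical endpoint
        user="admin",      # an account that is allowed to manage users
        password="****",
    )
    with conn.cursor() as cur:
        cur.execute(
            "CREATE USER IF NOT EXISTS 'dts_sync'@'%' IDENTIFIED BY 'Str0ngPassw0rd!'"
        )
        # Grant read-only access to the objects to be synchronized.
        cur.execute("GRANT SELECT ON `mydb`.* TO 'dts_sync'@'%'")
    conn.commit()
    conn.close()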

Procedure

Note This procedure is described based on the new DTS console. In the event of discrepancies in operations between the DTS console and the DTS module in the Data Management (DMS) console, the DMS console takes precedence.
  1. Go to the Data Synchronization Tasks page.
    1. Log on to the Data Management (DMS) console.
    2. In the top navigation bar, click DTS.
    3. In the left-side navigation pane, choose DTS (DTS) > Data Synchronization.
  2. From the drop-down list to the right of Data Synchronization Tasks, select the region in which the data synchronization instance resides.
    Note If you use the new DTS console, you must select the region in which the data synchronization instance resides in the top navigation bar.
  3. Click Create Task. On the page that appears, configure the source and destination databases.
    Task Name
    DTS automatically generates a task name. We recommend that you specify an informative name to identify the task. You do not need to use a unique task name.

    Source Database
    Select Instance: Select whether to use an existing instance.
    • If you select an existing instance, DTS automatically applies the parameter settings of the instance. You do not need to configure the corresponding parameters again.
    • If you do not use an existing instance, you must configure the parameters for the source database.
    Database Type: The type of the source database. Select PolarDB for MySQL.
    Access Method: The access method of the source database. Select Alibaba Cloud Instance.
    Instance Region: The region in which the source PolarDB for MySQL cluster resides.
    Replicate Data Across Alibaba Cloud Accounts: Specifies whether to synchronize data across Alibaba Cloud accounts. In this example, No is selected.
    PolarDB Cluster ID: The ID of the source PolarDB for MySQL cluster.
    Database Account: The database account of the source PolarDB for MySQL cluster. For more information about the permissions that are required for the account, see the Permissions required for database accounts section of this topic.
    Database Password: The password of the database account.

    Destination Database
    Select Instance: Select whether to use an existing instance.
    • If you select an existing instance, DTS automatically applies the parameter settings of the instance. You do not need to configure the corresponding parameters again.
    • If you do not use an existing instance, you must configure the parameters for the destination database.
    Database Type: The type of the destination database. Select DataHub.
    Access Method: The access method of the destination database. Select Alibaba Cloud Instance.
    Instance Region: The region in which the destination DataHub project resides.
    Project: The destination DataHub project to which data is synchronized.
  4. In the lower part of the page, click Test Connectivity and Proceed.
    Warning
    • If the source or destination database is an Alibaba Cloud database instance, such as an ApsaraDB RDS for MySQL or ApsaraDB for MongoDB instance, DTS automatically adds the CIDR blocks of DTS servers to the whitelist of the instance. For more information, see Add the CIDR blocks of DTS servers to the security settings of on-premises databases.
    • If the source or destination database is a self-managed database hosted on an Elastic Compute Service (ECS) instance, DTS automatically adds the CIDR blocks of DTS servers to the security group rules of the ECS instance. You must also manually add the CIDR blocks of DTS servers to the whitelist of the self-managed database on the ECS instance to allow DTS to access the database.
    • If the source or destination database is a self-managed database that is deployed in a data center or provided by a third-party cloud service provider, you must manually add the CIDR blocks of DTS servers to the whitelist of the database to allow DTS to access the database.
    • If the CIDR blocks of DTS servers are automatically or manually added to the whitelist or ECS security group rules, security risks may arise. Therefore, before you use DTS to synchronize data, you must understand and acknowledge the potential risks and take preventive measures, including but not limited to the following measures: enhancing the security of your username and password, limiting the ports that are exposed, authenticating API calls, regularly checking the whitelist or ECS security group rules and forbidding unauthorized CIDR blocks, or connecting the database to DTS by using Express Connect, VPN Gateway, or Smart Access Gateway.
    • After the DTS task is complete or released, we recommend that you manually remove the CIDR blocks of DTS servers from the whitelist or ECS security group rules. You must remove the IP address whitelist group whose name contains dts from the whitelist of the Alibaba Cloud database instance or the security group rules of the ECS instance. For more information about the CIDR blocks that you must remove from the whitelist of the self-managed databases that are deployed in data centers or databases that are hosted on third-party cloud services, see Add the CIDR blocks of DTS servers to the security settings of on-premises databases.
  5. Select objects for the task and configure advanced settings.
    Task Stages
    By default, Incremental Data Synchronization is selected. You can also select Schema Synchronization. Full Data Synchronization cannot be selected.
    Note During initial schema synchronization, DTS synchronizes the schemas of the selected objects, such as tables, from the source database to the destination DataHub project.
    Processing Mode of Conflicting Tables
    • Precheck and Report Errors: checks whether the destination database contains tables that have the same names as tables in the source database. If the source and destination databases do not contain tables that have identical table names, the precheck is passed. Otherwise, an error is returned during the precheck, and the data synchronization task cannot be started.

      Note You can use the object name mapping feature to rename the tables that are synchronized to the destination database. You can use this feature if the source and destination databases contain identical table names and the tables in the destination database cannot be deleted or renamed. For more information, see Map object names.
    • Ignore Errors and Proceed: skips the precheck for identical table names in the source and destination databases.
      Warning If you select Ignore Errors and Proceed, data inconsistency may occur, and your business may be exposed to potential risks.
      • If the source and destination databases have the same schemas, and a data record has the same primary key value as an existing data record in the destination database:
        • During full data synchronization, DTS does not synchronize the data record to the destination database. The existing data record in the destination database is retained.
        • During incremental data synchronization, DTS synchronizes the data record to the destination database. The existing data record in the destination database is overwritten.
      • If the source and destination databases have different schemas, data may fail to be initialized. In this case, only some columns are synchronized or the data synchronization task fails.
    Naming Rules of Additional Columns
    When DTS synchronizes data to DataHub, DTS adds additional columns to the destination topic. If the names of additional columns are the same as the names of existing columns in the destination topic, the data synchronization task fails. You can select Yes or No to specify whether to enable the new naming rules for additional columns based on your business requirements.
    Warning Before you set this parameter to Yes or No, check whether additional columns and existing columns in the destination topic have name conflicts. For more information, see Modify the naming rules for additional columns.
    Capitalization of Object Names in Destination Instance

    The capitalization of database names, table names, and column names in the destination instance. By default, DTS default policy is selected. You can select another option to make sure that the capitalization of object names is consistent with that in the source or destination database. For more information, see Specify the capitalization of object names in the destination instance.

    Source ObjectsSelect one or more objects from the Source Objects section and click the Rightwards arrow icon to add the objects to the Selected Objects section.
    Note You can select tables or databases as the objects to be synchronized.
    Selected Objects
    • To rename an object that you want to synchronize to the destination instance, right-click the object in the Selected Objects section. For more information, see Map the name of a single object.
    • To rename multiple objects at a time, click Batch Edit in the upper-right corner of the Selected Objects section. For more information, see Map multiple object names at a time.
  6. Click Next: Advanced Settings to configure advanced settings.
    Set Alerts
    Specifies whether to configure alerting for the data synchronization task. If the task fails or the synchronization latency exceeds the specified threshold, alert contacts receive notifications. Valid values:
    • No: does not configure alerting.
    • Yes: configures alerting. In this case, you must also configure the alert threshold and notification settings.
    Retry Time for Failed Connection
    The retry time range for failed connections. If the source or destination database fails to be connected after the data synchronization task is started, DTS immediately retries a connection within the time range. Valid values: 10 to 1440. Unit: minutes. Default value: 720. We recommend that you set the parameter to a value greater than 30. If DTS reconnects to the source and destination databases within the specified time range, DTS resumes the data synchronization task. Otherwise, the data synchronization task fails.
    Note
    • If you set different retry time ranges for multiple DTS tasks that have the same source or destination database, the shortest retry time range that is set takes precedence.
    • When DTS retries a connection, you are charged for the DTS instance. We recommend that you specify the retry time range based on your business requirements. You can also release the DTS instance at the earliest opportunity after the source and destination instances are released.
    Configure ETLSpecifies whether to enable the extract, transform, and load (ETL) feature. Select Yes or No based on your business requirements. If you select Yes, you must enter domain-specific language (DSL) statements in the code editor. For more information, see Configure ETL in a data migration or data synchronization task.
    Whether to delete SQL operations on heartbeat tables of forward and reverse tasks
    Specifies whether DTS writes SQL operations on heartbeat tables to the source database while the DTS instance is running.
    • Yes: DTS does not write SQL operations on heartbeat tables. In this case, the DTS instance may display a synchronization latency.
    • No: DTS writes SQL operations on heartbeat tables. In this case, features of the source database, such as physical backup and cloning, may be affected.
  7. Optional: In the Selected Objects section, move the pointer over the name of a topic to be synchronized and right-click it. In the dialog box that appears, you can modify the name of the table or database and set the shard key that is used for partitioning based on your business requirements.
  8. In the lower part of the page, click Next: Save Task Settings and Precheck.
    Note
    • Before you can start the data synchronization task, DTS performs a precheck. You can start the data synchronization task only after the task passes the precheck.
    • If the task fails to pass the precheck, click View Details next to each failed item. After you analyze the causes based on the check results, troubleshoot the issues. Then, run a precheck again.
    • If an alert is generated for an item during the precheck, perform the following operations based on the scenario:
      • In scenarios where you cannot ignore the alert item, click View Details next to the failed item. After you analyze the causes based on the check results, troubleshoot the issues. Then, run a precheck again.
      • In scenarios where you can ignore the alert item, click Confirm Alert Details next to the failed item. In the View Details dialog box, click Ignore. In the message that appears, click OK. Then, click Precheck Again to run a precheck again. If you ignore the alert item, data inconsistency may occur, and your business may be exposed to potential risks.
  9. Wait until the success rate becomes 100%. Then, click Next: Purchase Instance.
  10. On the Purchase Instance page, configure the Billing Method and Instance Class parameters for the data synchronization instance. The following table describes the parameters.
    New Instance Class
    Billing Method
    • Subscription: You pay for the instance when you create it. The subscription billing method is more cost-effective than the pay-as-you-go billing method for long-term use.
    • Pay-as-you-go: A pay-as-you-go instance is charged on an hourly basis. The pay-as-you-go billing method is suitable for short-term use. If you no longer need a pay-as-you-go instance, you can release it to reduce costs.
    Instance Class
    DTS provides instance classes that vary in synchronization speed. You can select an instance class based on your business scenario. For more information, see Specifications of data synchronization instances.
    Subscription Duration
    If you select the subscription billing method, specify the subscription duration and the number of instances that you want to create. The subscription duration can be 1 to 9 months or 1 to 3 years.
    Note This parameter is displayed only if you select the subscription billing method.
  11. Read and select the check box for Data Transmission Service (Pay-as-you-go) Service Terms.
  12. Click Buy and Start to start the data synchronization task. You can view the progress of the task in the task list.

Schema of a DataHub topic

When DTS synchronizes incremental data to a DataHub topic, DTS adds additional columns to store metadata. The following figure shows the schema of a DataHub topic.
Note In this example, id, name, and address are data fields. Because the previous naming rules for additional columns are used, DTS adds the dts_ prefix to the original data fields that are synchronized from the source database to the destination database. If you use the new naming rules for additional columns, DTS does not add a prefix to the original data fields.
Schema of a DataHub topic

The following table describes the additional columns in the DataHub topic.

Previous additional column name | New additional column name | Type | Description
dts_record_id | new_dts_sync_dts_record_id | String | The ID of the incremental log entry.
  Note
  • By default, the ID auto-increments for each new log entry. In disaster recovery scenarios, a rollback may occur, the ID may not auto-increment, and some IDs may be duplicated.
  • If an UPDATE operation is performed, DTS generates two incremental log entries to record the pre-update and post-update values. The values of the dts_record_id field in the two incremental log entries are the same.
dts_operation_flag | new_dts_sync_dts_operation_flag | String | The operation type. Valid values: INSERT, UPDATE, and DELETE.
dts_instance_id | new_dts_sync_dts_instance_id | String | The server ID of the database.
dts_db_name | new_dts_sync_dts_db_name | String | The name of the database.
dts_table_name | new_dts_sync_dts_table_name | String | The name of the table.
dts_utc_timestamp | new_dts_sync_dts_utc_timestamp | String | The operation timestamp, in UTC. It is also the timestamp of the binary log file.
dts_before_flag | new_dts_sync_dts_before_flag | String | Indicates whether the column values are pre-update values. Valid values: Y and N.
dts_after_flag | new_dts_sync_dts_after_flag | String | Indicates whether the column values are post-update values. Valid values: Y and N.
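
To illustrate how a downstream consumer might interpret these columns, the following Python sketch routes records by operation type. It assumes that each record read from the topic (for example, a TupleRecord fetched with the DataHub SDK) has already been converted into a dict that maps field names to values, and that the previous naming rules for additional columns are used; the table name and field values are hypothetical.

    # Sketch: route DataHub records by DTS operation type. Assumes each
    # record is a dict of field name -> value and that the previous naming
    # rules (dts_ prefix on data fields) are used.
    def handle_record(record: dict) -> None:
        flag = record["dts_operation_flag"]
        table = record["dts_table_name"]
        if flag == "INSERT":
            print(f"inserted into {table}: id={record.get('dts_id')}")
        elif flag == "DELETE":
            print(f"deleted from {table}: id={record.get('dts_id')}")
        elif flag == "UPDATE":
            # Each UPDATE produces two entries; dts_before_flag and
            # dts_after_flag distinguish the pre-update and post-update images.
            image = "pre-update" if record["dts_before_flag"] == "Y" else "post-update"
            print(f"{image} image for {table}: id={record.get('dts_id')}")

    # A hypothetical record that mirrors the schema described above.
    handle_record({
        "dts_record_id": "1", "dts_operation_flag": "INSERT",
        "dts_instance_id": "1234567", "dts_db_name": "mydb",
        "dts_table_name": "customer", "dts_utc_timestamp": "1609430400",
        "dts_before_flag": "N", "dts_after_flag": "Y", "dts_id": "1001",
    })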

Additional information about the dts_before_flag and dts_after_flag fields

The values of the dts_before_flag and dts_after_flag fields in an incremental log entry vary with different operation types:

  • INSERT

    For an INSERT operation, the column values are the newly inserted record values (post-update values). The value of the dts_before_flag field is N, and the value of the dts_after_flag field is Y.

    INSERT operation
  • UPDATE

    DTS generates two incremental log entries for an UPDATE operation. The two incremental log entries have the same values for the dts_record_id, dts_operation_flag, and dts_utc_timestamp fields.

    The first log entry records the pre-update values. Therefore, the value of the dts_before_flag field is Y, and the value of the dts_after_flag field is N. The second log entry records the post-update values. Therefore, the value of the dts_before_flag field is N, and the value of the dts_after_flag field is Y.

    UPDATE operation
  • DELETE

    For a DELETE operation, the column values are the deleted record values (pre-update values). The value of the dts_before_flag field is Y, and the value of the dts_after_flag field is N.

    DELETE operation
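
Because the two log entries of an UPDATE operation share the same dts_record_id value, a consumer can pair them to obtain a complete before/after image. The following Python sketch makes the same dict-of-fields assumption as the earlier example and buffers entries until both images of an UPDATE have arrived.

    # Sketch: pair the two log entries of an UPDATE operation into a
    # (before, after) image, keyed by dts_record_id. Same dict-of-fields
    # assumption as the earlier example.
    from typing import Dict, Optional, Tuple

    pending: Dict[str, dict] = {}  # entries waiting for their counterpart

    def pair_update(record: dict) -> Optional[Tuple[dict, dict]]:
        """Return (before, after) once both entries of an UPDATE arrive."""
        if record["dts_operation_flag"] != "UPDATE":
            return None
        rid = record["dts_record_id"]
        other = pending.pop(rid, None)
        if other is None:
            pending[rid] = record  # first image; wait for the second
            return None
        # dts_before_flag identifies the pre-update image.
        before = record if record["dts_before_flag"] == "Y" else other
        after = other if before is record else record
        return before, after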

What to do next

After you configure the data synchronization task, you can use Alibaba Cloud Realtime Compute for Apache Flink to analyze the data that is synchronized to the DataHub project. For more information, see What is Alibaba Cloud Realtime Compute for Apache Flink?