
Data Transmission Service:Synchronize data from an ApsaraDB RDS for MySQL instance to a DataHub project

Last Updated: Oct 17, 2023

DataHub is a real-time data distribution platform that is designed to process streaming data. You can publish and subscribe to streaming data in DataHub and distribute the data to other platforms. DataHub allows you to analyze streaming data and build applications based on streaming data. This topic describes how to synchronize data from an ApsaraDB RDS for MySQL instance to a DataHub project by using Data Transmission Service (DTS). After you synchronize data, you can use big data services such as Realtime Compute for Apache Flink to analyze the data in real time.

Prerequisites

Limits

Note DTS does not synchronize foreign keys in the source database to the destination database. Therefore, the cascade and delete operations of the source database are not synchronized to the destination database.

Limits on the source database

  • The tables to be synchronized must have PRIMARY KEY or UNIQUE constraints, and all fields must be unique. Otherwise, the destination database may contain duplicate data records.

  • If you select tables as the objects to be synchronized and you need to edit the tables in the destination database, such as renaming tables or columns, you can synchronize up to 1,000 tables in a single data synchronization task. If you run a task to synchronize more than 1,000 tables, a request error occurs. In this case, we recommend that you configure multiple tasks to synchronize the tables in batches or configure a task to synchronize the entire database.

  • The following requirements for binary logs must be met:

    • By default, the binary logging feature is enabled. The binlog_row_image parameter must be set to full. Otherwise, error messages are returned during precheck and the data synchronization task cannot be started. For more information, see Modify instance parameters.

      Important
      • If the source database is a self-managed MySQL database, you must enable the binary logging feature and set the binlog_format parameter to row and the binlog_row_image parameter to full.

      • If the source database is a self-managed MySQL database deployed in a dual-primary cluster, you must set the log_slave_updates parameter to ON. This ensures that DTS can obtain all binary logs. For more information, see Create an account for a self-managed MySQL database and configure binary logging.

    • If you perform only incremental data synchronization, the binary logs of the source database must be retained for at least 24 hours. If you perform both schema synchronization and incremental data synchronization, the binary logs of the source database must be retained for at least seven days. Otherwise, DTS may fail to obtain the binary logs, which can cause the task to fail or, in severe cases, lead to data inconsistency or data loss. After schema synchronization is complete, you can set the retention period to more than 24 hours. Make sure that you set the retention period of binary logs based on the preceding requirements. Otherwise, the service reliability and performance stated in the SLA of DTS cannot be guaranteed. For more information about the binary log files of an ApsaraDB RDS for MySQL instance, see Manage binary log files.
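
  You can verify the binary log settings described above before you configure the data synchronization task. The following is a minimal sketch rather than part of the DTS console workflow; it assumes the PyMySQL package, and the endpoint, account, and password shown are placeholders that you must replace with your own values.

    # Check the binary log settings that DTS requires on the source MySQL instance.
    import pymysql

    REQUIRED = {
        "log_bin": "ON",             # binary logging must be enabled
        "binlog_format": "ROW",      # required if the source is a self-managed MySQL database
        "binlog_row_image": "FULL",  # required for all MySQL sources
    }

    # Placeholder endpoint and account; replace with your own values.
    conn = pymysql.connect(host="rm-xxxxxxxx.mysql.rds.aliyuncs.com",
                           port=3306, user="dts_user", password="****")
    try:
        with conn.cursor() as cur:
            for name, expected in REQUIRED.items():
                cur.execute("SHOW GLOBAL VARIABLES LIKE %s", (name,))
                row = cur.fetchone()
                actual = row[1].upper() if row else "NOT SET"
                status = "OK" if actual == expected else "expected " + expected
                print(f"{name} = {actual} ({status})")
    finally:
        conn.close()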

Other limits

  • Initial full data synchronization is not supported. DTS does not synchronize the historical data of the required objects from the source ApsaraDB RDS for MySQL instance to the destination DataHub project.

  • If a table to be synchronized in the source database has a column named record_id, we recommend that you use the object name mapping feature to rename the column in the destination database. Otherwise, an error message is returned. For more information, see Map object names.

  • During data synchronization, we recommend that you use only DTS to write data to the destination database. This prevents data inconsistency between the source and destination databases. For example, if you use tools other than DTS to write data to the destination database, data loss may occur in the destination database when you use DMS to perform online DDL operations.

Special cases

If the source database is a self-managed MySQL database, take note of the following items:

  • If you perform a primary/secondary switchover on the source database when the data synchronization task is running, the task fails.

  • DTS calculates synchronization latency based on the timestamp of the latest synchronized data in the destination database and the current timestamp in the source database. If no DML operation is performed on the source database for an extended period of time, the synchronization latency may be inaccurate. If the latency of the data synchronization task is excessively high, you can perform a DML operation on the source database to update the latency.

    Note

    If you select an entire database as the object to be synchronized, you can create a heartbeat table. The heartbeat table is updated or receives data every second (see the sketch after this list).

  • DTS executes the CREATE DATABASE IF NOT EXISTS 'test' statement in the source database as scheduled to move forward the binary log file position.
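
The heartbeat table mentioned in the preceding note can be maintained by a small scheduled job. The following is a minimal sketch, assuming the PyMySQL package, placeholder connection settings, and a hypothetical table named dts_heartbeat created in the synchronized source database (named source_db here); adjust the names to match your environment.

    # Write to a heartbeat table once per second so that the source database keeps
    # producing binary log events and the reported synchronization latency stays accurate.
    import time
    import pymysql

    conn = pymysql.connect(host="rm-xxxxxxxx.mysql.rds.aliyuncs.com",
                           user="dts_user", password="****",
                           database="source_db", autocommit=True)
    with conn.cursor() as cur:
        cur.execute("""
            CREATE TABLE IF NOT EXISTS dts_heartbeat (
                id INT PRIMARY KEY,
                ts DATETIME(3) NOT NULL
            )
        """)
        while True:
            # REPLACE upserts the single heartbeat row; each write generates an
            # incremental log entry that DTS can use to advance its position.
            cur.execute("REPLACE INTO dts_heartbeat (id, ts) VALUES (1, NOW(3))")
            time.sleep(1)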

Supported synchronization topologies

  • One-way one-to-one synchronization

  • One-way one-to-many synchronization

  • One-way many-to-one synchronization

For more information about the synchronization topologies that are supported by DTS, see Synchronization topologies.

SQL operations that can be synchronized

  • DML: INSERT, UPDATE, and DELETE

  • DDL: ADD COLUMN

Procedure

  1. Go to the Data Synchronization Tasks page.
    1. Log on to the Data Management (DMS) console.
    2. In the top navigation bar, click DTS.
    3. In the left-side navigation pane, choose DTS (DTS) > Data Synchronization.
  2. From the drop-down list to the right of Data Synchronization Tasks, select the region in which the data synchronization instance resides.
    Note If you use the new DTS console, you must select the region in which the data synchronization instance resides in the top navigation bar.
  3. Click Create Task. On the page that appears, configure the source and destination databases.

    Warning

    After you configure the source and destination databases, we recommend that you read the limits that are displayed in the upper part of the page. Otherwise, the task may fail or data inconsistency may occur.

    • Task Name: The name of the task. DTS automatically assigns a name to the task. We recommend that you specify a descriptive name that makes it easy to identify the task. You do not need to specify a unique task name.

    Source Database

    • Select an existing DMS database instance: The database instance that you want to use. You can choose whether to select an existing instance based on your business requirements.

      • If you select an existing instance, DTS automatically populates the parameters for the database.

      • If you do not select an existing instance, you must manually configure parameters for the database.

    • Database Type: The type of the source database. Select MySQL.

    • Access Method: The access method of the source database. Select Alibaba Cloud Instance.

    • Instance Region: The region in which the source ApsaraDB RDS for MySQL instance resides.

    • Replicate Data Across Alibaba Cloud Accounts: Specifies whether to synchronize data across Alibaba Cloud accounts. In this example, No is selected.

    • RDS Instance ID: The ID of the source ApsaraDB RDS for MySQL instance.

    • Database Account: The database account of the source ApsaraDB RDS for MySQL instance. The account must have read permissions on the objects to be synchronized.

    • Database Password: The password of the database account.

    • Encryption: Specifies whether to encrypt the connection to the database. Select Non-encrypted or SSL-encrypted based on your business requirements. If you select SSL-encrypted, you must enable SSL encryption for the ApsaraDB RDS for MySQL instance before you configure the data synchronization task. For more information, see Configure the SSL encryption feature.

    Destination Database

    • Select an existing DMS database instance: The database instance that you want to use. You can choose whether to select an existing instance based on your business requirements.

      • If you select an existing instance, DTS automatically populates the parameters for the database.

      • If you do not select an existing instance, you must manually configure parameters for the database.

    • Database Type: The type of the destination database. Select DataHub.

    • Access Method: The access method of the destination database. Select Alibaba Cloud Instance.

    • Instance Region: The region in which the DataHub project resides.

    • Project: The destination DataHub project.

  4. In the lower part of the page, click Test Connectivity and Proceed.

    • If the source or destination database is an Alibaba Cloud database instance, such as an ApsaraDB RDS for MySQL or ApsaraDB for MongoDB instance, DTS automatically adds the CIDR blocks of DTS servers to the whitelist of the instance.

    • If the source or destination database is a self-managed database hosted on an Elastic Compute Service (ECS) instance, DTS automatically adds the CIDR blocks of DTS servers to the security group rules of the ECS instance, and you must ensure that the ECS instance can access the database.

    • If the source or destination database is a self-managed database that is deployed in a data center or provided by a third-party cloud service provider, you must manually add the CIDR blocks of DTS servers to the whitelist of the database to allow DTS to access the database. For more information, see Add the CIDR blocks of DTS servers to the security settings of on-premises databases.

    Warning

    If the CIDR blocks of DTS servers are automatically or manually added to the whitelist of the database or instance, or to the ECS security group rules, security risks may arise. Therefore, before you use DTS to synchronize data, you must understand and acknowledge the potential risks and take preventive measures, including but not limited to the following measures: enhancing the security of your username and password, limiting the ports that are exposed, authenticating API calls, regularly checking the whitelist or ECS security group rules and forbidding unauthorized CIDR blocks, or connecting the database to DTS by using Express Connect, VPN Gateway, or Smart Access Gateway.

  5. Configure the objects to be migrated and advanced settings.


    Synchronization Type

    By default, Incremental Data Synchronization is selected. You can also select Schema Synchronization. Full Data Synchronization is not supported.

    Note

    During initial schema synchronization, DTS synchronizes the schemas of the selected objects, such as tables, from the source database to the destination DataHub project.

    Processing Mode of Conflicting Tables

    • Precheck and Report Errors: checks whether the destination database contains tables that have the same names as tables in the source database. If the source and destination databases do not contain tables that have identical table names, the precheck is passed. Otherwise, an error is returned during the precheck and the data synchronization task cannot be started.

      Note

      You can use the object name mapping feature to rename the tables that are synchronized to the destination database. You can use this feature if the source and destination databases contain identical table names and the tables in the destination database cannot be deleted or renamed. For more information, see Map object names.

    • Ignore Errors and Proceed: skips the precheck for identical table names in the source and destination databases.

      Warning

      If you select Ignore Errors and Proceed, data inconsistency may occur, and your business may be exposed to potential risks.

      • If the source and destination databases have the same schemas, and a data record has the same primary key value as an existing data record in the destination database:

        • During full data synchronization, DTS does not synchronize the data record to the destination database. The existing data record in the destination database is retained.

        • During incremental data synchronization, DTS synchronizes the data record to the destination database. The existing data record in the destination database is overwritten.

      • If the source and destination databases have different schemas, data may fail to be initialized, only some columns are synchronized, or the data synchronization task fails. Operate with caution.


    Naming Rules of Additional Columns

    During data synchronization to a DataHub project, DTS adds additional columns to the destination table. If the names of additional columns are the same as the names of existing columns in the destination table, data synchronization fails. Select New Rule or Previous Rule based on your business requirements.

    Warning

    Before you specify this parameter, check whether additional columns and existing columns in the destination table have name conflicts (a simple check is sketched after this parameter list). For more information, see Naming rules for additional columns.

    Capitalization of Object Names in Destination Instance

    The capitalization of database names, table names, and column names in the destination instance. By default, DTS default policy is selected. You can select other options to ensure that the capitalization of object names is consistent with that in the source or destination database. For more information, see Specify the capitalization of object names in the destination instance.

    Source Objects

    Select one or more objects from the Source Objects section and click the Rightwards arrow icon to add the objects to the Selected Objects section.

    Note

    You can select only tables as the objects to be synchronized.

    Selected Objects

    • To rename an object that you want to synchronize to the destination instance, right-click the object in the Selected Objects section. For more information, see Map the name of a single object.

    • To rename multiple objects at a time, click Batch Edit in the upper-right corner of the Selected Objects section. For more information, see Map multiple object names at a time.

    Note

    To specify WHERE conditions to filter data, right-click an object in the Selected Objects section. In the dialog box that appears, specify the conditions. For more information, see Use SQL conditions to filter data.
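
    As mentioned in the warning for the Naming Rules of Additional Columns parameter, you can check for name conflicts between the columns of your source tables and the additional columns before you start the task. The following is a minimal, self-contained sketch; the additional column names are taken from the Schema of a DataHub topic section below, and the sample table columns are hypothetical.

      # Check whether source table columns collide with the additional columns
      # that DTS appends to the DataHub topic.
      PREVIOUS_RULE = {
          "dts_record_id", "dts_operation_flag", "dts_instance_id",
          "dts_db_name", "dts_table_name", "dts_utc_timestamp",
          "dts_before_flag", "dts_after_flag",
      }
      NEW_RULE = {"new_dts_sync_" + name for name in PREVIOUS_RULE}

      def conflicting_columns(table_columns, use_new_rule=True):
          """Return the source columns that conflict with the DTS additional columns."""
          rule = NEW_RULE if use_new_rule else PREVIOUS_RULE
          return sorted({column.lower() for column in table_columns} & rule)

      # Hypothetical source table: dts_record_id conflicts under the previous rule.
      print(conflicting_columns(["id", "name", "dts_record_id"], use_new_rule=False))
      # ['dts_record_id'] -> rename the column in the destination or use the new rule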

  6. Optional: In the Selected Objects section, move the pointer over the object to be synchronized and right-click the object. In the Edit dialog box, set the shard key. The shard key is used for partitioning.

  7. After you select the objects to be synchronized, click Next: Advanced Settings.


    Set Alerts

    Specifies whether to configure alerting for the data synchronization task. If the task fails or the synchronization latency exceeds the specified threshold, alert contacts will receive notifications. Valid values: Yes and No.

    Specify the retry time range for failed connections

    The retry time range for failed connections. If the source or destination database fails to be connected after the data synchronization task is started, DTS immediately retries a connection within the time range. Valid values: 10 to 1440. Unit: minutes. Default value: 720. We recommend that you set the parameter to a value greater than 30. If DTS reconnects to the source and destination databases within the specified time range, DTS resumes the data synchronization task. Otherwise, the data synchronization task fails.

    Note
    • If you set different retry time ranges for multiple DTS tasks that have the same source or destination database, the shortest retry time range that is set takes precedence.

    • When DTS retries a connection, you are charged for the DTS instance. We recommend that you specify the retry time range based on your business requirements. You can also release the DTS instance at your earliest opportunity after the source and destination instances are released.

    Configure ETL

    Specifies whether to configure the extract, transform, and load (ETL) feature. For more information, see What is ETL?. Valid values: Yes and No.
  8. Click Next: Save Task Settings and Precheck in the lower part of the page.

    You can move the pointer over Next: Save Task Settings and Precheck and click Preview OpenAPI parameters to view the parameter settings of the API operation that is called to configure the instance.

    Note
    • Before you can start the data synchronization task, DTS performs a precheck. You can start the data synchronization task only after the task passes the precheck.

    • If the task fails to pass the precheck, click View Details next to each failed item. After you troubleshoot the issues based on the causes, run a precheck again.

    • If an alert is triggered for an item during the precheck:

      • If an alert item cannot be ignored, click View Details next to the failed item and troubleshoot the issues. Then, run a precheck again.

      • If an alert item can be ignored, click Confirm Alert Details. In the View Details dialog box, click Ignore. In the message that appears, click OK. Then, click Precheck Again to run a precheck again. If you ignore the alert item, data inconsistency may occur, and your business may be exposed to potential risks.

  9. Wait until the success rate becomes 100%. Then, click Next: Purchase Instance.

  10. On the Purchase Instance page, configure the Billing Method and Instance Class parameters for the data synchronization instance. The following table describes the parameters.


    New Instance Class

    Billing Method

    • Subscription: You pay for the instance when you create an instance. The subscription billing method is more cost-effective than the pay-as-you-go billing method for long-term use.

    • Pay-as-you-go: A pay-as-you-go instance is charged on an hourly basis. The pay-as-you-go billing method is suitable for short-term use. If you no longer require a pay-as-you-go instance, you can release the pay-as-you-go instance to reduce costs.

    Resource Group

    The resource group on which the instance is run. Default value: default resource group. For more information, see What is Resource Management?.

    Instance Class

    DTS provides synchronization specifications that deliver different levels of performance. The synchronization speed varies based on the specification that you select. Select a specification based on your business scenario. For more information, see Specifications of data synchronization instances.

    Subscription Duration

    If you select the subscription billing method, set the subscription duration and the number of instances that you want to create. The subscription duration can be one to nine months, one year, two years, three years, or five years.

    Note

    This parameter is available only if you select the Subscription billing method.

  11. Read and select the check box to agree to the Data Transmission Service (Pay-as-you-go) Service Terms.

  12. Click Buy and Start to start the data synchronization task. You can view the progress of the task in the task list.

Schema of a DataHub topic

When DTS synchronizes incremental data to a DataHub topic, DTS adds additional columns to store metadata. The following figure shows the schema of a DataHub topic.

Note

In this example, id, name, and address are data fields. DTS adds the dts_ prefix to data fields because the naming rules of the previous version for additional columns are used.

(Figure: Schema of a DataHub topic)

The following list describes the additional columns in the DataHub topic. For each column, the name that is used under the previous naming rules is listed first, followed by the name that is used under the new naming rules. All additional columns are of the String type.

  • dts_record_id (new naming rules: new_dts_sync_dts_record_id)

    The unique ID of the incremental log entry.

    Note
    • By default, the ID auto-increments for each new log entry. In disaster recovery scenarios, rollback may occur, and the ID may not auto-increment. Therefore, some IDs may be duplicated.

    • If an UPDATE operation is performed, DTS generates two incremental log entries to record the pre-update and post-update values. The values of the dts_record_id field in the two incremental log entries are the same.

  • dts_operation_flag (new naming rules: new_dts_sync_dts_operation_flag)

    The operation type. Valid values:

    • I: an INSERT operation

    • D: a DELETE operation

    • U: an UPDATE operation

  • dts_instance_id (new naming rules: new_dts_sync_dts_instance_id)

    The server ID of the database.

  • dts_db_name (new naming rules: new_dts_sync_dts_db_name)

    The name of the database.

  • dts_table_name (new naming rules: new_dts_sync_dts_table_name)

    The table name.

  • dts_utc_timestamp (new naming rules: new_dts_sync_dts_utc_timestamp)

    The operation timestamp in UTC. It is also the timestamp of the binary log file.

  • dts_before_flag (new naming rules: new_dts_sync_dts_before_flag)

    Indicates whether the column values are pre-update values. Valid values: Y and N.

  • dts_after_flag (new naming rules: new_dts_sync_dts_after_flag)

    Indicates whether the column values are post-update values. Valid values: Y and N.

Additional information about the dts_before_flag and dts_after_flag fields

The values of the dts_before_flag and dts_after_flag fields in an incremental log entry vary with different operation types:

  • INSERT

    For an INSERT operation, the column values are the newly inserted record values (post-update values). The value of the dts_before_flag field is N, and the value of the dts_after_flag field is Y.

    (Figure: INSERT operation)
  • UPDATE

    DTS generates two incremental log entries for an UPDATE operation. The two incremental log entries have the same values for the dts_record_id, dts_operation_flag, and dts_utc_timestamp fields.

    The first log entry records the pre-update values. Therefore, the value of the dts_before_flag field is Y, and the value of the dts_after_flag field is N. The second log entry records the post-update values. Therefore, the value of the dts_before_flag field is N, and the value of the dts_after_flag field is Y.

    (Figure: UPDATE operation)
  • DELETE

    For a DELETE operation, the column values are the deleted record values (pre-update values). The value of the dts_before_flag field is Y, and the value of the dts_after_flag field is N.

    (Figure: DELETE operation)
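
A consumer that reads the topic can combine the dts_operation_flag, dts_before_flag, and dts_record_id columns to rebuild before and after images of each change. The following is a minimal sketch, assuming that records have already been read from DataHub into Python dictionaries keyed by the column names above (previous naming rules); the function name and record format are illustrative only.

    # Group incremental records into (operation, before_image, after_image) tuples.
    def to_change_events(records):
        pending_updates = {}  # dts_record_id -> pre-update record of an UPDATE
        for record in records:
            operation = record["dts_operation_flag"]
            if operation == "I":
                # INSERT: only post-update values exist (dts_after_flag is Y).
                yield ("INSERT", None, record)
            elif operation == "D":
                # DELETE: only pre-update values exist (dts_before_flag is Y).
                yield ("DELETE", record, None)
            elif operation == "U":
                # An UPDATE produces two log entries that share the same dts_record_id:
                # the first carries the pre-update values, the second the post-update values.
                key = record["dts_record_id"]
                if record["dts_before_flag"] == "Y":
                    pending_updates[key] = record
                else:
                    yield ("UPDATE", pending_updates.pop(key, None), record)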