ApsaraDB for OceanBase: Synchronize data from OceanBase Database to a DataHub instance

Last Updated: Apr 03, 2024

This topic describes how to synchronize data from a MySQL or Oracle tenant of OceanBase Database to a DataHub instance.

Prerequisites

  • The data transmission service has the privilege to access cloud resources. For more information, see Grant privileges to roles for data transmission.

  • You have created a dedicated database user for data synchronization in the source OceanBase database and granted corresponding privileges to the user. For more information, see Create a database user.

Limitations

  • The data transmission service supports full synchronization of only tables with unique keys.

  • DDL synchronization is supported only for BLOB topics.

  • During data synchronization, the data transmission service allows you to drop a table and then create a new one. In other words, you can execute DROP TABLE and then CREATE TABLE. However, it does not allow you to create a new table by renaming an existing table. In other words, you cannot execute RENAME TABLE a TO a_tmp. See the example after this list.

  • The data transmission service supports synchronization of data of the UTF8 and GBK character sets.

  • The name of a table to be synchronized, as well as the names of columns in the table, must not contain Chinese characters.

  • The data transmission service supports the synchronization of only objects whose database name, table name, and column name are ASCII-encoded without special characters. The special characters are line breaks, spaces, and the following characters: . | " ' ` ( ) = ; / &

  • The data transmission service does not support a standby OceanBase database as the source.
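
For example, the following statements illustrate the supported and unsupported patterns described above. This is a minimal sketch; the table name and columns are hypothetical.

```sql
-- Supported: drop the table, then re-create it.
DROP TABLE orders;
CREATE TABLE orders (
  id     BIGINT PRIMARY KEY,
  amount DECIMAL(10, 2)
);

-- Not supported: creating a new table by renaming an existing one.
RENAME TABLE orders TO orders_tmp;
```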

DataHub has the following limitations:

DataHub limits the size of a message based on the cloud environment, usually to 1 MB. DataHub sends messages in batches, with each batch sized no more than 4 MB.

Considerations

  • In a data synchronization project where the source is an OceanBase database and DDL synchronization is enabled, if a RENAME operation is performed on a table in the source, we recommend that you restart the project to avoid data loss during incremental synchronization.

  • Take note of the following items when an updated row contains a LOB column:

    • If the LOB column is updated, do not use the value stored in the LOB column before the UPDATE or DELETE operation.

      The following data types are stored in LOB columns: JSON, GIS, XML, user-defined type (UDT), and TEXT such as LONGTEXT and MEDIUMTEXT.

    • If the LOB column is not updated, the value stored in the LOB column before and after the UPDATE or DELETE operation is NULL.

  • When you synchronize incremental data from an OceanBase database to a DataHub instance, the MySQL schema is synchronized to the DataHub schema and then initialized. The following table lists the data types supported by DataHub. These data types apply only to tuple topics.

    | Type | Description | Value range |
    | --- | --- | --- |
    | BIGINT | An 8-byte signed integer. | -9223372036854775807 to 9223372036854775807 |
    | DOUBLE | An 8-byte double-precision floating-point number. | -1.0 × 10^308 to 1.0 × 10^308 |
    | BOOLEAN | A Boolean value. | True or False, true or false, or 0 or 1 |
    | TIMESTAMP | A timestamp value. | Accurate to microseconds. |
    | STRING | A string that supports only UTF-8 encoding. | A single STRING column supports a maximum of 2 MB of data. |
    | INTEGER | A 4-byte integer. | -2147483648 to 2147483647 |
    | FLOAT | A 4-byte single-precision floating-point number. | -3.40292347 × 10^38 to 3.40292347 × 10^38 |
    | DECIMAL | A digital value. | -10^38 + 1 to 10^38 - 1 |

Supported source and destination instance types

The following table lists the supported instance types for the source and destination. OceanBase Database has two types of tenants: MySQL and Oracle. The source can be an OceanBase cluster instance or a self-managed database in a virtual private cloud (VPC).

| Source | Destination |
| --- | --- |
| OceanBase Database | DataHub instance (DataHub instance on Alibaba Cloud) |
| OceanBase Database | DataHub instance (self-managed DataHub instance in a VPC) |
| OceanBase Database | DataHub instance (DataHub instance in the public network) |

Supported DDL operations

Important

DDL synchronization is supported only for BLOB topics.

  • ALTER TABLE

    • ADD COLUMN

    • MODIFY COLUMN

    • DROP COLUMN

  • CREATE INDEX

  • DROP INDEX

  • TRUNCATE TABLE

    Note

    In delayed deletion, the same transaction contains two identical TRUNCATE TABLE DDL statements. In this case, idempotence is implemented for downstream consumption.

Data type mappings

A project that synchronizes data to a DataHub instance supports only the following data types: INTEGER, BIGINT, TIMESTAMP, FLOAT, DOUBLE, DECIMAL, STRING, and BOOLEAN.

  • If you create a topic of another type when you set topic mapping, data synchronization will fail.

  • The following tables describe the default mapping rules, which are the most appropriate. If you change a mapping, an error may occur.

Data type mappings between MySQL tenants of OceanBase Database and DataHub instances

| MySQL tenant of OceanBase Database | Default mapped-to data type in DataHub |
| --- | --- |
| BIT | STRING (Base64-encoded) |
| CHAR | STRING |
| BINARY | STRING (Base64-encoded) |
| VARBINARY | STRING (Base64-encoded) |
| INT | BIGINT |
| TINYINT | BIGINT |
| SMALLINT | BIGINT |
| MEDIUMINT | BIGINT |
| BIGINT | DECIMAL (This data type is used because the maximum unsigned value exceeds the maximum LONG value in Java.) |
| FLOAT | DECIMAL |
| DOUBLE | DECIMAL |
| DECIMAL | DECIMAL |
| DATE | STRING |
| TIME | STRING |
| YEAR | BIGINT |
| DATETIME | STRING |
| TIMESTAMP | TIMESTAMP (accurate to milliseconds) |
| VARCHAR | STRING |
| TINYBLOB | STRING (Base64-encoded) |
| TINYTEXT | STRING |
| BLOB | STRING (Base64-encoded) |
| TEXT | STRING |
| MEDIUMBLOB | STRING (Base64-encoded) |
| MEDIUMTEXT | STRING |
| LONGBLOB | STRING (Base64-encoded) |
| LONGTEXT | STRING |
| ENUM | STRING |
| SET | STRING |

Data type mappings between Oracle tenants of OceanBase Database and DataHub instances

| Oracle tenant of OceanBase Database | Default mapped-to data type in DataHub |
| --- | --- |
| CHAR | STRING |
| NCHAR | STRING |
| VARCHAR2 | STRING |
| NVARCHAR2 | STRING |
| CLOB | STRING |
| NCLOB | STRING |
| BLOB | STRING (Base64-encoded) |
| NUMBER | DECIMAL |
| BINARY_FLOAT | DECIMAL |
| BINARY_DOUBLE | DECIMAL |
| DATE | STRING |
| TIMESTAMP | STRING |
| TIMESTAMP WITH TIME ZONE | STRING |
| TIMESTAMP WITH LOCAL TIME ZONE | STRING |
| INTERVAL YEAR TO MONTH | STRING |
| INTERVAL DAY TO SECOND | STRING |
| LONG | STRING (Base64-encoded) |
| RAW | STRING (Base64-encoded) |
| LONG RAW | STRING (Base64-encoded) |
| ROWID | STRING |
| UROWID | STRING |
| FLOAT | DECIMAL |

Supplemental properties

If you manually create a topic, add the following properties to the DataHub schema before you start a data synchronization project. If the data transmission service automatically creates a topic and synchronizes the schema, the data transmission service automatically adds the following properties.

Important

The following table applies only to tuple topics.

| Parameter | Type | Description |
| --- | --- | --- |
| oms_timestamp | STRING | The time when the change was made. |
| oms_table_name | STRING | The new table name of the source table. |
| oms_database_name | STRING | The new database name of the source database. |
| oms_sequence | STRING | The timestamp at which data is synchronized to the process memory. The value of this field consists of time and five incremental digits. A clock rollback will result in data inconsistency. |
| oms_record_type | STRING | The change type. Valid values: UPDATE, INSERT, and DELETE. |
| oms_is_before | STRING | Specifies whether the data is the original data when the change type is UPDATE. Y indicates that the data is the original data. |
| oms_is_after | STRING | Specifies whether the data is the modified data when the change type is UPDATE. Y indicates that the data is the modified data. |
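
For example, based on the field definitions above, an UPDATE to a source row is typically delivered as a pair of records: one with oms_is_before set to Y that carries the original values, and one with oms_is_after set to Y that carries the modified values.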

Procedure

  1. Log on to the ApsaraDB for OceanBase console and purchase a data synchronization project.

    For more information, see Purchase a data synchronization project.

  2. Choose Data Transmission > Data Synchronization. On the page that appears, click Configure for the data synchronization project.

    If you want to reference the configurations of an existing project, click Reference Configuration. For more information, see Reference and clear data synchronization project configurations.

  3. On the Select Source and Destination page, configure the parameters.

    | Parameter | Description |
    | --- | --- |
    | Synchronization Project Name | We recommend that you set it to a combination of digits and letters. It must not contain any spaces and cannot exceed 64 characters in length. |
    | Tag (Optional) | Click the field and select a target tag from the drop-down list. You can also click Manage Tags to create, modify, and delete tags. For more information, see Manage data synchronization projects by using tags. |
    | Source | If you have created an OceanBase data source, select it from the drop-down list. Otherwise, click New Data Source in the drop-down list to create one in the dialog box on the right side. For more information about the parameters, see Create an OceanBase data source. Important: The source must not be an OceanBase Database tenant instance. |
    | Destination | If you have created a DataHub data source, select it from the drop-down list. Otherwise, click New Data Source in the drop-down list to create one in the dialog box on the right side. For more information about the parameters, see Create a DataHub data source. |

  4. Click Next. On the Select Synchronization Type page, specify the synchronization type for the current data synchronization project.

    Valid values: Schema Synchronization, Full Synchronization, and Incremental Synchronization. Schema synchronization creates a topic. Options for Incremental Synchronization are DML Synchronization and DDL Synchronization.

    • Options for DML Synchronization are Insert, Delete, and Update, which are all selected by default. For more information, see DML filtering.

    • If you select DDL Synchronization, you can select only BLOB topics on the Select Synchronization Objects page. For more information, see Synchronize DDL operations.

  5. Click Next. On the Select Synchronization Objects page, select the topic type and objects to be synchronized in the current data synchronization project.

    Available topic types are Tuple and BLOB. Tuple topics do not support DDL synchronization. Tuple topics contain records that are similar to data records in databases. Each record contains multiple columns. You can only write a block of binary data as a record to a BLOB topic. The data is Base64-encoded for transmission. For more information, visit the documentation center of DataHub.

    Select the type of topics to be synchronized and perform the following steps:

    1. In the left-side pane, select the objects to be synchronized.

    2. Click >.

    3. Select a mapping method.

      • If you want to synchronize a single Tuple or BLOB table, select a mapping method as needed in the Map Object to Topic dialog box and click OK.

        If you have not selected Schema Synchronization as the synchronization type, you can select only Existing Topics. If you have selected Schema Synchronization as the synchronization type, you can select only one mapping method to create or select topics.

        For example, if you have selected Schema Synchronization, when you use both the Create Topic and Select Topic mapping methods or rename the topic, a precheck error will be returned due to option conflicts.

        | Parameter | Description |
        | --- | --- |
        | Create Topic | Enter the name of the new topic in the text box. The topic name can contain letters, digits, and underscores (_) and must start with a letter. It must not exceed 128 characters in length. |
        | Select Topic | The data transmission service allows you to query DataHub topics. You can click Select Topic, and then find and select the topics to be synchronized from the Existing Topics drop-down list. |
        | Batch Generate Topics | The format for generating topics in batches is Topic_${Database Name}_${Table Name}. For example, a table named t1 in a database named db1 generates a topic named Topic_db1_t1. |

        If you select Create Topic or Batch Generate Topics, you can query the newly created topics in the DataHub instance after schema synchronization is completed. By default, each DataHub topic has two partitions and the data expiration period is 7 days, which cannot be modified.

      • If you want to synchronize multiple Tuple tables, click OK in the dialog box that appears.

        If you have selected a Tuple topic and multiple tables without selecting Schema Synchronization, you must select a topic and click OK in the Map Object to Topic dialog box.

        In this case, multiple tables are displayed under the topic in the right pane, but only one table can be synchronized. Then, click Next. A prompt appears, indicating that only one-to-one mapping is supported between Tuple topics and tables.

    The data transmission service allows you to import objects by using text. It also allows you to rename objects, set row filters, and remove a single object or all objects. Objects in the destination database are listed in the structure of Topic > Database > Table.

    You can perform the following operations:

    • Import Objects

      1. In the list on the right, click Import Objects in the upper-right corner.

      2. In the dialog box that appears, click OK.

        Important

        This operation will overwrite previous selections. Proceed with caution.

      3. In the Import Synchronization Objects dialog box, import the objects to be synchronized. You can import CSV files to rename databases or tables and set row filtering conditions. For more information, see Download and import the settings of synchronization objects.

      4. Click Validate.

      5. After the validation is passed, click OK.

    • Change Topic

      When the topic type is set to BLOB, you can change topics for objects in the destination database. For more information, see Change topics.

    • Settings

      You can use a WHERE clause to filter data by row, select sharding columns, and select the columns to be synchronized.

      1. In the list on the right, move the pointer over the table object that you want to set.

      2. Click Settings.

      3. In the Settings dialog box, you can perform the following operations:

        • In the Row Filters section, specify a standard SQL WHERE clause to filter data by row. For more information, see Use SQL conditions to filter data. A sketch of a filter condition is provided after this list.

        • Select the sharding columns that you want to use from the Sharding Columns drop-down list. You can select multiple fields as sharding columns. This parameter is optional.

          Unless otherwise specified, select the primary keys as sharding columns. If the primary keys are not load-balanced, select load-balanced fields with unique identifiers as sharding columns to avoid potential performance issues. Sharding columns can be used for the following purposes:

          • Load balancing: Threads used for sending messages can be recognized based on the sharding columns if the destination table supports concurrent writes.

          • Orderliness: The data transmission service ensures that messages are received in order if the values of the sharding columns are the same. The orderliness specifies the sequence of executing DML statements for a column.

        • In the Select Columns section, select the columns to be synchronized. For more information, see Column filtering.

      4. Click OK.

    • Remove/Remove All

      The data transmission service allows you to remove a single object or all synchronization objects that are added to the right-side list during data mapping.

      • Remove a single synchronization object

        In the list on the right, move the pointer over the object that you want to remove, and click Remove to remove the synchronization object.

      • Remove all synchronization objects

        In the list on the right, click Remove All in the upper-right corner. In the dialog box that appears, click OK to remove all synchronization objects.
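
    A minimal sketch of a Row Filters condition is shown below. The column names and values are hypothetical; use columns that exist in your source table.

    ```sql
    -- Condition portion of a standard SQL WHERE clause used for row filtering.
    -- Only rows that match the condition are synchronized.
    order_status = 'PAID' AND gmt_create >= '2024-01-01 00:00:00'
    ```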

  6. Click Next. On the Synchronization Options page, configure the parameters.

    Configure the following parameters:

    • Incremental Synchronization Start Timestamp

      • If you have selected Full Synchronization as the synchronization type, the default value of this parameter is the start time of incremental synchronization and cannot be modified.

      • If you have not selected Full Synchronization as the synchronization type, set this parameter to a certain point of time, which is the current system time by default. For more information, see Set an incremental synchronization timestamp.

    • Serialization Method

      The message format for synchronizing data to the destination DataHub instance. Valid values: Default, Canal, DataWorks (version 2.0 supported), SharePlex, DefaultExtendColumnType, Debezium, DebeziumFlatten, and DebeziumSmt. For more information, see Data formats used in serialization methods.

      Important

      • This parameter is available only if you have set the topic type to BLOB on the Select Synchronization Objects page.

      • Only MySQL tenants of OceanBase Database support Debezium, DebeziumFlatten, and DebeziumSmt.

    • Partitioning Rules

      The rule for synchronizing data from the source database to a DataHub topic. Valid values: Hash and Table. We recommend that you select Table to ensure DDL and DML consumption consistency when downstream applications consume messages.

      • Hash indicates that the data transmission service uses a hash algorithm to select the shard of a DataHub topic based on the value of the primary key or sharding column.

      • Table indicates that the data transmission service delivers all data in a table to the same partition and uses the table name as the hash key.

      Note

      If you have selected DDL Synchronization on the Select Synchronization Type page, only the Table partitioning rule takes effect.

    • Business System Identification (Optional)

      Identifies the source business system of the data. The business system identifier consists of 1 to 20 characters.

  7. Click Precheck.

    During the precheck, the data transmission service checks the column name and column type, and checks whether the values are null. The data transmission service does not check the value length or default value. You can perform the following operations if an error is returned during the precheck:

    • Identify and troubleshoot the problem and then perform the precheck again.

    • Click Skip in the Actions column of the failed precheck item. A dialog box appears, prompting you about the impact. If you want to skip this operation, click OK.

  8. After the precheck is passed, click Start Project.

    If you do not need to start the project now, click Save. You can manually start the project on the Synchronization Projects page or by performing batch operations later. For more information about the batch operations, see Perform batch operations on data synchronization projects.

    The data transmission service allows you to modify the synchronization objects when a synchronization project is running. For more information, see View and modify synchronization objects. After the data synchronization project is started, it will be executed based on the selected synchronization types. For more information, see View synchronization details.

If the data synchronization project encounters a running exception due to a network failure or slow startup of processes, you can click Recover on the Synchronization Projects page or on the Details page of the synchronization project.
