
Dataphin:Configuring the openGauss input component

Last Updated:May 28, 2025

The openGauss input component reads data from an openGauss data source. To synchronize data from an openGauss data source to another data source, you must first configure the openGauss input component that reads the source data, and then configure the component that writes to the target data source. This topic describes how to configure the openGauss input component.

Prerequisites

Procedure

  1. In the top navigation bar of the Dataphin homepage, choose Development > Data Integration.

  2. In the top navigation bar of the Integration page, select a project (in Dev-Prod mode, you must also select an environment).

  3. In the left-side navigation pane, click Batch Pipeline. In the Batch Pipeline list, click the offline pipeline that you want to develop to open its configuration page.

  4. Click Component Library in the upper-right corner of the page to open the Component Library panel.

  5. In the left-side navigation pane of the Component Library panel, select Inputs. Find the openGauss component in the input component list on the right, and drag it to the canvas.

  6. Click the icon on the openGauss input component card to open the openGauss Input Configuration dialog box.

  7. In the openGauss Input Configuration dialog box, configure the parameters.

    Parameter

    Description

    Step Name

    The name of the openGauss input component. Dataphin automatically generates a step name, which you can modify based on your business scenario. The name must meet the following requirements:

    • It can contain only Chinese characters, letters, underscores (_), and digits.

    • It cannot exceed 64 characters in length.

    Datasource

    The data source drop-down list displays all openGauss data sources in the current Dataphin instance, including those for which you do not have synchronization read permission. Click the copy icon to copy the name of the current data source.

    Schema

    Cross-schema table reading is supported. Select the schema where the source table is located.

    Source Table Quantity

    Select the source table quantity. Valid values are Single Table and Multiple Tables:

    • Single Table: This option is applicable to scenarios where business data from one table is synchronized to one target table.

    • Multiple Tables: This option is applicable to scenarios where business data from multiple tables is synchronized to the same target table. When data from multiple tables is written to the same target table, the tables are combined with a union.

    Table

    Select the source table:

    • If you select Single Table for Source Table Quantity, you can enter a table name keyword to search for the table. Click the copy icon to copy the name of the selected table.

    • If you select Multiple Tables for Source Table Quantity, perform the following operations to add tables.

      1. In the input box, enter a table expression to filter tables with the same structure.

        The system supports enumeration, regular expression-like format, and a combination of both. For example, table_[001-100];table_102.

      2. Click Exact Match. In the Confirm Match Details dialog box, view the list of matched tables.

      3. Click OK.
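As an illustration, the expansion of a table expression such as table_[001-100];table_102 can be sketched in a few lines of Python. The exact matching rules are internal to Dataphin; this snippet only assumes that a semicolon separates entries and that [start-end] denotes a zero-padded numeric range:

```python
import re

def expand_table_expression(expr):
    """Approximate expansion of a Dataphin-style table expression.

    Entries are separated by ';'. A '[start-end]' segment expands to a
    zero-padded numeric range; plain entries are kept as-is.
    """
    tables = []
    for entry in expr.split(";"):
        m = re.fullmatch(r"(.*)\[(\d+)-(\d+)\](.*)", entry)
        if m:
            prefix, start, end, suffix = m.groups()
            width = len(start)  # preserve zero padding, e.g. 001
            for n in range(int(start), int(end) + 1):
                tables.append(f"{prefix}{str(n).zfill(width)}{suffix}")
        else:
            tables.append(entry)
    return tables

tables = expand_table_expression("table_[001-100];table_102")
print(len(tables))  # 101 matched tables: table_001 ... table_100, table_102
```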

    Shard Key

    You can use a column of the integer type in the source data table as the shard key. We recommend that you use the primary key or a column with an index as the shard key. When reading data, the system shards the data based on the configured shard key field to implement concurrent reading, which can improve data synchronization efficiency.
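The concurrency gain comes from splitting the shard key's value range into intervals that separate readers can scan in parallel. Dataphin's actual split strategy is internal; the sketch below, with the hypothetical table src and shard key id, only illustrates the idea:

```python
def split_shard_ranges(min_id, max_id, concurrency):
    """Split [min_id, max_id] into roughly equal intervals, one per reader.

    Illustrative only: Dataphin's real splitting logic is internal.
    """
    span = max_id - min_id + 1
    step = -(-span // concurrency)  # ceiling division
    ranges = []
    lo = min_id
    while lo <= max_id:
        hi = min(lo + step - 1, max_id)
        ranges.append((lo, hi))
        lo = hi + 1
    return ranges

# Each interval then maps to one concurrent extraction query, e.g.:
for lo, hi in split_shard_ranges(1, 1000, 4):
    print(f"SELECT * FROM src WHERE id BETWEEN {lo} AND {hi}")
```

An indexed shard key matters here because each of these range scans can use the index instead of a full table scan.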

    Batch Read Count

    The number of records to read at a time. When reading data from the source database, you can configure a specific batch read count (such as 1,024 records) instead of reading records one by one. This reduces the number of interactions with the data source, improves I/O efficiency, and reduces network latency.
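Batched reading can be sketched with a standard DB-API cursor. An in-memory SQLite table stands in for the source database below; against a real openGauss source the same fetchmany pattern applies:

```python
import sqlite3

def read_in_batches(cursor, query, batch_size=1024):
    """Yield rows in fixed-size batches instead of one by one,
    reducing round trips to the data source."""
    cursor.execute(query)
    while True:
        batch = cursor.fetchmany(batch_size)
        if not batch:
            break
        yield batch

# Demo with an in-memory SQLite table standing in for the source.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE src (id INTEGER)")
cur.executemany("INSERT INTO src VALUES (?)", [(i,) for i in range(3000)])

batches = list(read_in_batches(cur, "SELECT id FROM src", batch_size=1024))
print([len(b) for b in batches])  # [1024, 1024, 952]
```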

    Input Filter

    Enter the filter condition for the input data, for example, ds=${bizdate}. Input Filter is applicable to the following two scenarios:

    • Synchronizing only a fixed portion of the data.

    • Filtering with scheduling parameters, for example, synchronizing only the data of the current business date.
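Conceptually, the filter is applied as a WHERE clause on the extraction query, with scheduling parameters such as ${bizdate} resolved first. Dataphin performs this internally; the snippet below, with the hypothetical table orders, is only a sketch of the substitution:

```python
import re

def apply_input_filter(table, filter_expr, params):
    """Resolve ${param} placeholders and build the extraction query.

    Illustrative sketch of how an input filter narrows the read.
    """
    resolved = re.sub(r"\$\{(\w+)\}",
                      lambda m: params[m.group(1)], filter_expr)
    return f"SELECT * FROM {table} WHERE {resolved}"

sql = apply_input_filter("orders", "ds=${bizdate}", {"bizdate": "20250528"})
print(sql)  # SELECT * FROM orders WHERE ds=20250528
```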

    Output Fields

    The Output Fields section displays all fields of the selected table (for multiple tables, the fields of the matched tables). You can perform the following operations:

    • Field Management: If you do not need to output certain fields to downstream components, you can delete these fields:

      • Single field deletion scenario: If you need to delete a small number of fields, you can click the delete icon in the Operation column to delete the unnecessary fields.

      • Batch field deletion scenario: If you need to delete many fields, you can click Field Management. In the Field Management dialog box, select multiple fields, click the image left arrow icon to move the selected input fields to the unselected input fields, and then click OK to complete the batch deletion of fields.

    • Batch Add: Click Batch Add to configure fields in JSON, TEXT, or DDL format.

      Note

      After you complete the batch addition and click OK, the batch-added fields overwrite the previously configured field information.

      • Configure fields in JSON format, for example:

        // Example:
          [{
             "index": 1,
             "name": "id",
             "type": "int(10)",
             "mapType": "Long",
             "comment": "comment1"
           },
           {
             "index": 2,
             "name": "user_name",
             "type": "varchar(255)",
             "mapType": "String",
             "comment": "comment2"
         }]
        Note

        index indicates the column number of the source column, name indicates the field name after import, and type indicates the field type after import. For example, "index":3,"name":"user_id","type":"String" indicates that the fourth column in the file is imported with the field name user_id and the field type String.

      • Configure fields in TEXT format, for example:

        // Example:
        1,id,int(10),Long,comment1
        2,user_name,varchar(255),String,comment2
        • The row delimiter separates field definitions. The default is a line feed (\n); line feed (\n), semicolon (;), and period (.) are supported.

        • The column delimiter separates the attributes of a field, such as the field name and field type. The default is a comma (,). The field type can be omitted.

      • Configure fields in DDL format, for example:

        CREATE TABLE tablename (
        	user_id serial,
        	username VARCHAR(50),
        	password VARCHAR(50),
        	email VARCHAR(255),
        	created_on TIMESTAMP
        );
    • Create Output Field: Click +Create Output Field, fill in Column, Type, and Description, and select Mapping Type as prompted. After you complete the configuration of the current row, click the save icon to save it.
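The TEXT format above, one field per row with comma-separated attributes, maps directly onto the keys of the JSON format. The parser below is a sketch of that mapping, assuming the default delimiters (line-feed rows, comma columns):

```python
def parse_text_fields(text, row_delim="\n", col_delim=","):
    """Parse TEXT-format field definitions into dicts that mirror the
    JSON format's keys (index, name, type, mapType, comment)."""
    keys = ("index", "name", "type", "mapType", "comment")
    fields = []
    for row in filter(None, (r.strip() for r in text.split(row_delim))):
        values = row.split(col_delim)
        fields.append(dict(zip(keys, values)))
    return fields

fields = parse_text_fields("1,id,int(10),Long,comment1\n"
                           "2,user_name,varchar(255),String,comment2")
print(fields[0]["name"], fields[1]["mapType"])  # id String
```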

  8. Click OK to complete the property configuration of the openGauss input component.