
Dataphin: Configure Amazon RDS for PostgreSQL input component

Last Updated: May 28, 2025

The Amazon RDS for PostgreSQL input component reads data from Amazon RDS for PostgreSQL data sources. When you synchronize data from an Amazon RDS for PostgreSQL data source to another data source, you must first configure the Amazon RDS for PostgreSQL input component that reads the source data, and then configure the destination. This topic describes how to configure the Amazon RDS for PostgreSQL input component.

Prerequisites

An Amazon RDS for PostgreSQL data source has been created in Dataphin.

Procedure

  1. In the top navigation bar of the Dataphin homepage, choose Development > Data Integration.

  2. In the top navigation bar of the Integration page, select a project (in Dev-Prod mode, you must also select an environment).

  3. In the left-side navigation pane, click Batch Pipeline. In the Batch Pipeline list, click the offline pipeline that you want to develop to open its configuration page.

  4. Click Component Library in the upper-right corner of the page to open the Component Library panel.

  5. In the left-side navigation pane of the Component Library panel, select Inputs. Find the Amazon RDS for PostgreSQL component in the input component list on the right, and drag it to the canvas.

  6. Click the configuration icon on the Amazon RDS for PostgreSQL input component card to open the Amazon RDS for PostgreSQL Input Configuration dialog box.

  7. In the Amazon RDS for PostgreSQL Input Configuration dialog box, configure the parameters.

    Step Name

    The name of the Amazon RDS for PostgreSQL input component. Dataphin automatically generates a step name, which you can modify based on your business scenario. The name must meet the following requirements:

    • It can contain only Chinese characters, letters, underscores (_), and digits.

    • It can be up to 64 characters in length.

    Datasource

    The data source drop-down list displays all Amazon RDS for PostgreSQL data sources, including those for which you have read permission and those for which you do not. Click the copy icon to copy the name of the current data source.

    Schema (optional)

    Select the schema where the source table is located. You can select tables across schemas. If you do not specify a schema, the system uses the schema configured in the data source by default.
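
    In PostgreSQL, a schema qualifies a table name. A minimal sketch, using a hypothetical schema sales and table orders:

      -- Hypothetical example: the same table name may exist in different schemas.
      SELECT * FROM sales.orders;   -- table "orders" in schema "sales"
      SELECT * FROM public.orders;  -- table "orders" in the default "public" schema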

    Source Table Quantity

    Select the source table quantity. The options include Single Table and Multiple Tables:

    • Single Table: This option is suitable for scenarios where business data from one table is synchronized to one destination table.

    • Multiple Tables: This option is suitable for scenarios where business data from multiple tables is synchronized to the same destination table. When data from multiple tables is written to the same data table, the union algorithm is used, as illustrated in the sketch after this list.

      For more information about union, see INTERSECT, UNION, and EXCEPT.
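
    A minimal sketch of the union behavior, assuming two hypothetical, structurally identical source tables orders_2023 and orders_2024 (UNION ALL, shown here, keeps all rows; whether duplicates are removed depends on the union variant the engine applies):

      -- Hypothetical illustration: rows from multiple tables with the same
      -- structure are combined before being written to one destination table.
      SELECT id, user_name FROM orders_2023
      UNION ALL
      SELECT id, user_name FROM orders_2024;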

    Table

    Select the source table:

    • If you selected Single Table for Source Table Quantity, you can enter a keyword to search for tables, or enter the exact table name and click Exact Match. After you select a table, the system automatically checks the table status. Click the copy icon to copy the name of the selected table.

    • If you selected Multiple Tables for Source Table Quantity, perform the following steps to add tables.

      1. In the input box, enter a table expression to filter tables with the same structure.

        The system supports the enumeration format, a regular expression-like format, and a combination of both, for example, table_[001-100];table_102. More examples follow these steps.

      2. Click Exact Match. In the Confirm Matching Details dialog box, view the list of matched tables.

      3. Click OK.
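
      For illustration, a few hypothetical table expressions (the actual matches depend on your table names):

        // Examples (hypothetical table names):
        table_[001-100]              // range enumeration: table_001 through table_100
        table_a;table_b              // explicit enumeration
        table_[001-100];table_102    // a combination of both formats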

    Shard Key (optional)

    The system shards data based on the configured shard key field. You can use this parameter together with the concurrency configuration to implement concurrent reading. You can use a column in the source table as the shard key. We recommend that you use the primary key or an indexed column as the shard key to ensure transmission performance. A sketch of sharded reads follows the note below.

    Important

    When you select a field of a date or time type as the shard key, the system identifies its maximum and minimum values and performs forced sharding based on the total time range and the configured concurrency. An even distribution of data across shards is not guaranteed.
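
    A minimal sketch of how a shard key might translate into concurrent range reads, assuming a hypothetical table orders with an indexed primary key id, id values from 0 to 99999, and a concurrency of 2 (the SQL that Dataphin actually issues may differ):

      -- Hypothetical range split on shard key "id" across two concurrent readers.
      -- Reader 1:
      SELECT id, user_name FROM orders WHERE id >= 0 AND id < 50000;
      -- Reader 2:
      SELECT id, user_name FROM orders WHERE id >= 50000 AND id <= 99999;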

    Batch Read Count (optional)

    The number of records to read at a time. When reading data from the source database, you can configure a specific batch read count (such as 1,024 records) instead of reading records one by one. This reduces the number of interactions with the data source, improves I/O efficiency, and reduces network latency.
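
    A minimal sketch of batched reads using a PostgreSQL server-side cursor (illustrative only; this is not necessarily how Dataphin implements batch reading internally, and the table and columns are hypothetical):

      -- Read rows in batches of 1,024 instead of one by one.
      BEGIN;
      DECLARE src_cur CURSOR FOR SELECT id, user_name FROM orders;
      FETCH 1024 FROM src_cur;  -- first batch
      FETCH 1024 FROM src_cur;  -- next batch
      CLOSE src_cur;
      COMMIT;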

    Input Filter (optional)

    Enter a filter condition for the input fields, for example, ds=${bizdate}. The input filter is applicable to the following two scenarios (illustrated after this list):

    • Filtering a fixed portion of data.

    • Parameter filtering.
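
    For illustration, hypothetical filter conditions for both scenarios (the column names are assumptions):

      // Examples:
      create_time >= '2024-01-01'    // fixed filter: selects a static portion of the data
      ds = ${bizdate}                // parameter filter: ${bizdate} is resolved at run time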

    Output Fields

    The output fields area displays all fields from the selected table and those that match the filter conditions. You can perform the following operations:

    • Field Management: If you do not need to output certain fields to downstream components, you can delete these fields:

      • Delete a single field: If you need to delete a small number of fields, click the delete icon in the Operation column to delete the unnecessary fields.

      • Delete multiple fields in a batch: If you need to delete many fields, click Field Management. In the Field Management dialog box, select the fields, click the left arrow icon to move the selected input fields to the unselected list, and then click OK to complete the batch deletion.

    • Batch Add: Click Batch Add to configure fields in JSON format, TEXT format, or DDL format in batch.

      Note

      After you complete the batch addition and click OK, the system overwrites the previously configured field information.

      • Configure fields in JSON format, for example:

        // Example:
        [
          {
            "index": 0,
            "name": "id",
            "type": "int(10)",
            "mapType": "Long",
            "comment": "comment1"
          },
          {
            "index": 1,
            "name": "user_name",
            "type": "varchar(255)",
            "mapType": "String",
            "comment": "comment2"
          }
        ]

        Note

        index indicates the column number of the specified object, name indicates the field name after import, and type indicates the field type after import. For example, "index":3,"name":"user_id","type":"String" indicates that the fourth column in the file is imported with the field name user_id and the field type String.

      • Configure fields in TEXT format, for example:

        // Example:
        0,id,int(10),Long,comment1
        1,user_name,varchar(255),String,comment2

        • The row delimiter separates the entries for different fields. The default is a line feed (\n). Supported delimiters include the line feed (\n), semicolon (;), and period (.).

        • The column delimiter separates the attributes of a field, such as the field name and the field type. The default and only supported delimiter is a comma (,). The field type can be omitted.

      • Configure fields in DDL format, for example:

        CREATE TABLE tablename (
        	user_id serial,
        	username VARCHAR(50),
        	password VARCHAR(50),
        	email VARCHAR(255),
        	created_on TIMESTAMP
        );
    • Create a new output field: Click + Create Output Field. Follow the on-page prompts to fill in Column, Type, and Comment, and select a Mapping Type. After you complete the configuration of the current row, click the save icon to save it.

  8. Click OK to complete the property configuration of the Amazon RDS for PostgreSQL input component.