Dataphin: Configure Amazon RDS for PostgreSQL output component

Last Updated: Jul 07, 2025

The Amazon RDS for PostgreSQL output component writes data to an Amazon RDS for PostgreSQL data source. When you synchronize data from another data source to Amazon RDS for PostgreSQL, you configure the Amazon RDS for PostgreSQL output component as the target after you configure the source information. This topic describes how to configure the Amazon RDS for PostgreSQL output component.

Prerequisites

Procedure

  1. In the top navigation bar of the Dataphin homepage, choose Develop > Data Integration.

  2. In the top navigation bar of the integration page, select the project (in Dev-Prod mode, you must also select an environment).

  3. In the left-side navigation pane, click Batch Pipeline. In the Batch Pipeline list, click the offline pipeline that you want to develop to open its configuration page.

  4. Click Component Library in the upper-right corner of the page to open the Component Library panel.

  5. In the left-side navigation pane of the Component Library panel, select Outputs. Find the Amazon RDS for PostgreSQL component in the output component list on the right, and drag it to the canvas.

  6. Drag the connection point of the target input component to connect it to the Amazon RDS for PostgreSQL output component.

  7. On the Amazon RDS for PostgreSQL output component card, click the configuration icon to open the Amazon RDS for PostgreSQL Output Configuration dialog box.

  8. In the Amazon RDS for PostgreSQL Output Configuration dialog box, configure the parameters.

    Basic Settings

    Step Name

    The name of the Amazon RDS for PostgreSQL output component. Dataphin automatically generates a step name, which you can modify based on your business scenario. The name must meet the following requirements:

    • It can contain only Chinese characters, letters, underscores (_), and digits.

    • It can be up to 64 characters in length.

    Datasource

    The data source drop-down list displays all Amazon RDS for PostgreSQL data sources, including those for which you have synchronization write permission and those for which you do not. Click the copy icon to copy the name of the current data source.

    Schema (optional)

    Select the schema in which the target table resides; this allows you to select tables across schemas. If you do not specify a schema, the schema configured in the data source is used by default.

    Table

    Select the target table for output data. You can enter a keyword to search for tables, or enter the exact table name and click Exact Match. After you select a table, the system automatically checks the table status. Click the copy icon to copy the name of the selected table.

    If the target table for data synchronization does not exist in the Amazon RDS for PostgreSQL data source, you can use the one-click table creation feature to quickly generate the target table. Perform the following steps:

    1. Click One-click Table Creation. Dataphin automatically generates the code for creating the target table, including the target table name (the source table name by default), field types (initially converted from the Dataphin field types), and other information; a sketch of such a script follows the note below.

    2. You can modify the SQL script for creating the target table based on your business requirements, and then click Create.

      After the target table is created, Dataphin automatically sets the newly created table as the target table for output data. One-click table creation creates target tables for data synchronization in both the development and production environments, and Dataphin selects the production environment by default. If a table with the same name and structure already exists in the production environment, you do not need to select table creation for the production environment.

      Note
      • If a table with the same name exists in the development or production environment, Dataphin will report an error when you click Create.

      • If no matching table is found, you can also enter a table name manually and integrate based on it.
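
    For illustration, assuming a hypothetical source table with string fields user_id and user_name, the generated table creation script might resemble the following sketch (the table name, column names, and types are illustrative, not the exact output of Dataphin):

        -- Hypothetical generated DDL; adjust names and types to your schema
        CREATE TABLE user_info (
            user_id   VARCHAR(64) PRIMARY KEY,  -- converted from a Dataphin STRING field (assumed)
            user_name VARCHAR(64)               -- converted from a Dataphin STRING field (assumed)
        );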

    Loading Policy

    Select the strategy for writing data to the target table. Loading Policy includes the following options (see the SQL sketch after this list):

    • Append Data (Insert Into): When a primary key or constraint conflict occurs, a dirty data error is reported.

    • Update On Primary Key Conflict (On Conflict Do Update Set): When a primary key or constraint conflict occurs, the mapped fields of the existing record are updated.
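
    A minimal sketch of the two policies in PostgreSQL, assuming a hypothetical target table user_info with primary key user_id (the actual statements are generated by Dataphin):

        -- Append Data: a second row with user_id 'u001' raises a
        -- unique-violation error, which surfaces as dirty data
        INSERT INTO user_info (user_id, user_name)
        VALUES ('u001', 'Alice');

        -- Update On Primary Key Conflict: updates the mapped fields of
        -- the existing record instead of failing
        INSERT INTO user_info (user_id, user_name)
        VALUES ('u001', 'Alicia')
        ON CONFLICT (user_id)
        DO UPDATE SET user_name = EXCLUDED.user_name;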

    Synchronous Write

    The primary key update syntax is not an atomic operation. If the data to be written contains duplicate primary keys, you must enable synchronous write; otherwise, parallel write is used. Synchronous write performance is lower than that of parallel write. See the illustration after the note below.

    Note

    This option is available only when the loading policy is set to Update On Primary Key Conflict.
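
    As an illustration of the duplicate-key pitfall, PostgreSQL rejects even a single upsert statement that attempts to update the same row twice (a sketch, reusing the hypothetical user_info table from above):

        -- Fails with: ON CONFLICT DO UPDATE command cannot affect row a second time
        INSERT INTO user_info (user_id, user_name)
        VALUES ('u001', 'Alice'), ('u001', 'Bob')
        ON CONFLICT (user_id)
        DO UPDATE SET user_name = EXCLUDED.user_name;

    Parallel writers can collide on the same key in a similar way, which is why synchronous write serializes the batches.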

    Batch Write Data Volume (optional)

    The maximum amount of data to be written in a single batch. You can also set Batch Write Count; the system writes a batch when either limit is reached. The default is 32 MB.

    Batch Write Count (optional)

    The default is 2048 records. During data synchronization, a batch write strategy is used, controlled by the Batch Write Count and Batch Write Data Volume parameters.

    • When the accumulated data reaches either limit (the batch write data volume or the batch write count), the system considers the batch full and immediately writes it to the target in one operation.

    • We recommend setting the batch write data volume to 32 MB. Adjust the batch write count based on the actual size of a single record; it is usually set to a relatively large value to take full advantage of batch writing. For example, if a single record is about 1 KB and you set the batch write data volume to 16 MB, set the batch write count to more than 16 MB divided by 1 KB (that is, more than 16384 records), for example 20000 records. With this configuration, the system triggers batch writes based on the data volume limit, executing a write operation whenever the accumulated data reaches 16 MB.

    Prepare Statement (optional)

    The SQL script to be executed on the database before data import.

    For example, to keep the service continuously available, a prepare statement first creates a target table Target_A before the current step writes data; the step then writes to Target_A. After the step finishes writing, an end statement renames the continuously serving table Service_B to Temp_C, renames Target_A to Service_B, and finally deletes Temp_C. A sketch of this pattern follows the End Statement description below.

    End Statement (optional)

    The SQL script to be executed on the database after data import.
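
    A minimal SQL sketch of the table-swap pattern described in the Prepare Statement example, using the table names from that example (the exact DDL depends on your schema; CREATE TABLE ... LIKE is one way to clone the serving table's structure):

        -- Prepare statement: create the staging target table with the
        -- same structure as the serving table
        CREATE TABLE Target_A (LIKE Service_B INCLUDING ALL);

        -- End statement: swap the freshly loaded table into service
        -- (in PostgreSQL, these statements can run inside one transaction)
        ALTER TABLE Service_B RENAME TO Temp_C;
        ALTER TABLE Target_A RENAME TO Service_B;
        DROP TABLE Temp_C;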

    Field Mapping

    Input Fields

    Displays the input fields based on the upstream output.

    Output Fields

    Displays the output fields. You can perform the following operations:

    • Field management: Click Field Management to select output fields.

      • Click the arrow icon to move fields from Selected Input Fields to Unselected Input Fields.

      • Click the arrow icon to move fields from Unselected Input Fields to Selected Input Fields.

    • Batch add: Click Batch Add to configure in JSON, TEXT, or DDL format.

      • Batch configuration in JSON format, for example:

        // Example:
        [{
          "name": "user_id",
          "type": "String"
         },
         {
          "name": "user_name",
          "type": "String"
         }]
        Note

        name represents the imported field name, and type represents the field type after import. For example, "name":"user_id","type":"String" means importing a field named user_id and setting its field type to String.

      • Batch configuration in TEXT format, for example:

        // Example:
        user_id,String
        user_name,String
        • The row delimiter is used to separate the information of each field, with the default being a line feed (\n). Supported delimiters include line feed (\n), semicolon (;), and period (.).

        • The column delimiter is used to separate field names from field types, with the default being a comma (,).

      • Batch configuration in DDL format, for example:

        CREATE TABLE tablename (
            id INT PRIMARY KEY,
            name VARCHAR(50),
            age INT
        );
    • Create new output field: Click +Create New Output Field, enter the Column name, and select the Type as prompted. After completing the configuration for the current row, click the save icon to save it.

    Mapping

    Based on the upstream input and the fields of the target table, you can manually select field mappings. Quick Mapping includes Same Row Mapping and Same Name Mapping.

    • Same name mapping: Maps fields with the same name.

    • Same row mapping: Maps fields in the same row. Use this when the field names in the source and target tables differ but the data in corresponding rows needs to be mapped.

  9. Click OK to complete the property configuration of the Amazon RDS for PostgreSQL output component.