
Dataphin:Configure AnalyticDB for PostgreSQL output component

Last Updated: Feb 12, 2026

The AnalyticDB for PostgreSQL output component writes data to an AnalyticDB for PostgreSQL data source. When you synchronize data from another data source to AnalyticDB for PostgreSQL, you first configure the source data information and then configure the AnalyticDB for PostgreSQL output component as the destination. This topic describes how to configure the AnalyticDB for PostgreSQL output component.

Prerequisites

  • An AnalyticDB for PostgreSQL data source is created. For more information, see Create an AnalyticDB for PostgreSQL data source.

  • The account used to configure the AnalyticDB for PostgreSQL output component must have write permission for the data source. If you do not have the permission, you must request it. For more information, see Request data source permissions.

Procedure

  1. In the top navigation bar of the Dataphin homepage, choose Develop > Data Integration.

  2. In the top navigation bar of the Integration page, select a project (in Dev-Prod mode, you must also select an environment).

  3. In the navigation pane on the left, click Batch Pipeline. In the Batch Pipeline list, click the offline pipeline that you want to develop to open its configuration page.

  4. Click Component Library in the upper-right corner of the page to open the Component Library panel.

  5. In the navigation pane on the left of the Component Library panel, select Outputs. Find the AnalyticDB for PostgreSQL component in the output component list on the right, and drag it to the canvas.

  6. Drag a connection from the target input, transform, or flow component to the AnalyticDB for PostgreSQL output component.

  7. Click the configuration icon on the AnalyticDB for PostgreSQL output component to open the AnalyticDB for PostgreSQL Output Configuration dialog box.

  8. In the AnalyticDB for PostgreSQL Output Configuration dialog box, configure the following parameters.

    Parameter

    Description

    Basic Information

    Step Name

    The name of the AnalyticDB for PostgreSQL output component. Dataphin automatically generates a step name, which you can modify based on your business scenario. The name must meet the following requirements:

    • It can contain only Chinese characters, letters, underscores (_), and digits.

    • It can be up to 64 characters in length.
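    As a quick illustration, the naming rules above can be checked with a regular expression. The function and pattern below are an illustrative sketch, not part of Dataphin, and the Chinese-character range is an approximation:

    ```python
    import re

    # Allowed: Chinese characters (approximated by the CJK Unified Ideographs
    # range), letters, digits, and underscores; length 1 to 64 characters.
    STEP_NAME_PATTERN = re.compile(r"^[\u4e00-\u9fffA-Za-z0-9_]{1,64}$")

    def is_valid_step_name(name: str) -> bool:
        """Return True if the step name satisfies the documented naming rules."""
        return bool(STEP_NAME_PATTERN.match(name))
    ```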

    Datasource

    The drop-down list displays all AnalyticDB for PostgreSQL data sources, including those for which you have write permission and those for which you do not. Click the copy icon to copy the current data source name.

    Time Zone

    Date and time values are processed according to this time zone. The default is the time zone configured in the selected data source and cannot be modified.

    Note

    For tasks created before V5.1.2, you can select Data Source Default Configuration or Channel Configuration Time Zone. The default is Channel Configuration Time Zone.

    • Data Source Default Configuration: The default time zone of the selected data source.

    • Channel Configuration Time Zone: The time zone configured in Properties > Channel Configuration of the current integration task.

    Schema (optional)

    Select the schema that contains the destination table. Cross-schema table selection is supported. If no schema is specified, the schema configured in the data source is used.

    Table

    Select the destination table for the output data. You can enter a keyword to search, or enter an exact table name and click Exact Match. After you select a table, the system automatically checks the table status. Click the copy icon to copy the name of the currently selected table.

    If there is no target table for data synchronization in the AnalyticDB for PostgreSQL data source, you can quickly generate a target table using the one-click table creation feature. The detailed steps are as follows:

    1. Click Create Table With One Click. Dataphin automatically generates the table creation code for the target table, including the target table name (which defaults to the source table name), the field types (initially converted from the Dataphin field types), and other information.

    2. You can modify the SQL script for creating the target table according to your business needs, and then click Create.

      After the target table is created, Dataphin automatically sets it as the destination table for the output data. One-click table creation creates target tables for data synchronization in both the development and production environments. By default, Dataphin creates the table in the production environment. If a table with the same name and structure already exists in the production environment, you do not need to select table creation for the production environment.

      Note

      If a table with the same name exists in the development or production environment, Dataphin will report an error when you click Create.

    Loading Policy

    Supports insert and copy policies.

    • insert policy: Executes the AnalyticDB for PostgreSQL insert into...values... statement to write data. If a primary key or unique index conflict occurs, the conflicting row fails to be written and becomes dirty data. The insert policy is the recommended option.

    • copy policy: AnalyticDB for PostgreSQL provides the copy command to copy data between tables and files, including standard input and standard output. Data Integration uses copy from to load data into tables. If a conflict occurs, the action is determined by the Conflict Resolution Policy, which you must also configure: the options are Error on Conflict and Overwrite on Conflict. Use this policy only if you encounter performance issues with the insert policy.

      Important

      The conflict resolution policy takes effect only in copy mode and only when the AnalyticDB for PostgreSQL kernel version is later than 4.3. If the kernel version is 4.3 or earlier, or unknown, select this policy carefully to avoid task failure.
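    To make the dirty-data behavior of the insert policy concrete, here is a minimal pure-Python simulation; the function and data structures are hypothetical and for illustration only:

    ```python
    # Illustration only: simulates how the insert policy treats primary-key
    # conflicts. Conflicting rows are not written; they become dirty data.
    # This is not a Dataphin API, just a sketch of the documented behavior.
    def insert_policy_write(target: dict, rows: list, key: str) -> list:
        dirty = []
        for row in rows:
            pk = row[key]
            if pk in target:          # primary key / unique index conflict
                dirty.append(row)     # the record becomes dirty data
            else:
                target[pk] = row      # insert into ... values ... succeeds
        return dirty
    ```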

    Batch Write Data Size (optional)

    The maximum amount of data to write in one batch. You can also set Batch Write Count. The system writes a batch when either of the two configured limits is reached. The default is 32 MB.

    Batch Write Count (optional)

    The default is 2048 records. During data synchronization, a batch write strategy is used, controlled by the Batch Write Count and Batch Write Data Size parameters.

    • When the accumulated data reaches either limit (the batch write data size or the batch write count), the system considers the batch full and immediately writes it to the destination.

    • We recommend setting the batch write data size to 32 MB. Adjust the batch write count according to the actual size of a single record; it is usually set to a large value to fully exploit batch writing. For example, if a single record is about 1 KB and the batch write data size is set to 16 MB, set the batch write count to more than 16 MB / 1 KB = 16,384 records, for example 20,000 records. With this configuration, the size limit triggers the flush: a write is executed each time the accumulated data reaches 16 MB.
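    The arithmetic in the example above can be sketched as follows (values are taken from the example; the names are illustrative):

    ```python
    # Sketch of the batch-flush rule: a batch is written as soon as EITHER the
    # byte-size limit or the record-count limit is reached. Values follow the
    # example in the text (1 KB records, 16 MB size limit, 20,000-record count).
    SIZE_LIMIT = 16 * 1024 * 1024   # batch write data size, in bytes
    COUNT_LIMIT = 20_000            # batch write count, in records
    RECORD_SIZE = 1024              # approximate size of one record, in bytes

    # Number of records accumulated when the size limit fires:
    records_at_size_limit = SIZE_LIMIT // RECORD_SIZE

    # Because 16384 < 20000, the size limit triggers the flush first.
    trigger = "size" if records_at_size_limit < COUNT_LIMIT else "count"
    ```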

    Preparation Statement (optional)

    SQL script to be executed on the database before data import.

    For example, to keep a serving table continuously available: before the current step writes data, create a target table Target_A; the step then writes to Target_A; after the step completes writing, rename the continuously serving table Service_B to Temp_C, rename Target_A to Service_B, and finally drop Temp_C.
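    The rotation pattern above can be sketched end to end. This illustration uses Python's standard-library sqlite3 module as a stand-in database; in practice, the corresponding AnalyticDB for PostgreSQL statements would go into the Preparation Statement and Completion Statement fields:

    ```python
    # Illustration of the table-rotation pattern, using sqlite3 as a stand-in.
    # Table names follow the example in the text (Target_A, Service_B, Temp_C).
    import sqlite3

    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()

    # Existing serving table with old data.
    cur.execute("CREATE TABLE Service_B (id INTEGER, val TEXT)")
    cur.execute("INSERT INTO Service_B VALUES (1, 'old')")

    # Preparation statement: create the new target table before the write.
    cur.execute("CREATE TABLE Target_A (id INTEGER, val TEXT)")

    # The output component writes the synchronized data into Target_A here.
    cur.execute("INSERT INTO Target_A VALUES (1, 'new')")

    # Completion statement: swap the tables, then drop the old one.
    cur.execute("ALTER TABLE Service_B RENAME TO Temp_C")
    cur.execute("ALTER TABLE Target_A RENAME TO Service_B")
    cur.execute("DROP TABLE Temp_C")

    # Service_B now serves the freshly written data without downtime.
    row = cur.execute("SELECT val FROM Service_B").fetchone()
    ```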

    Completion Statement (optional)

    SQL script to be executed on the database after data import.

    Field Mapping

    Input Fields

    Displays the input fields based on the output of the upstream component.

    Output Fields

    Displays the output fields. You can perform the following operations:

    • Field management: Click Field Management to select output fields.


      • Click the remove icon to move fields from Selected Input Fields to Unselected Input Fields.

      • Click the add icon to move fields from Unselected Input Fields to Selected Input Fields.

    • Batch add: Click Batch Add to configure in JSON, TEXT, or DDL format.

      • Batch configuration in JSON format, for example:

        // Example:
        [{
          "name": "user_id",
          "type": "String"
         },
         {
          "name": "user_name",
          "type": "String"
         }]
        Note

        `name` specifies the name of the field to import, and `type` specifies the data type of the field after it is imported. For example, "name":"user_id","type":"String" imports the field named `user_id` and sets its data type to String.

      • Batch configuration in TEXT format, for example:

        // Example:
        user_id,String
        user_name,String
        • The row delimiter separates the entries for each field. The default is a line feed (\n); line feed (\n), semicolon (;), and period (.) are supported.

        • The column delimiter is used to separate the field name and field type. The default is a comma (,).

      • Batch configuration in DDL format, for example:

        CREATE TABLE tablename (
            id INT PRIMARY KEY,
            name VARCHAR(50),
            age INT
        );
    • Create new output field: Click +Create New Output Field, enter the Column and select the Type as prompted. After completing the configuration for the current row, click the save icon to save it.

    Mapping

    Based on the upstream input and the fields of the target table, you can manually select field mappings. Mapping includes Same Row Mapping and Same Name Mapping.

    • Same Name Mapping: Maps fields with the same name.

    • Same Row Mapping: Maps fields that appear in the same row, regardless of name. Use this when the field names in the source and target tables differ but the fields in corresponding rows should be mapped to each other.
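    The difference between the two strategies can be sketched with two hypothetical field lists:

    ```python
    # Sketch of the two mapping strategies; the field lists are hypothetical.
    src = ["uid", "uname", "age"]        # upstream input fields
    dst = ["user_id", "uname", "age"]    # target table fields

    # Same Name Mapping: pair fields whose names match exactly.
    same_name = [(s, s) for s in src if s in dst]

    # Same Row Mapping: pair fields by position, regardless of name.
    same_row = list(zip(src, dst))
    ```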

  9. Click OK to complete the configuration of the AnalyticDB for PostgreSQL output component.