
Dataphin: Configure Teradata output component

Last Updated: May 28, 2025

The Teradata output component writes data to a Teradata data source. When you synchronize data from another data source to Teradata, you configure the source information first and then configure the Teradata output component as the destination. This topic describes how to configure the Teradata output component.

Prerequisites

  • You have created a Teradata data source. For more information, see Create a Teradata data source.

  • The account used to configure the Teradata output component properties has write-through permission for the data source. If you do not have this permission, you need to request it. For more information, see Request data source permissions.

Limits

Writing data to views in a Teradata database is not supported.

Procedure

  1. In the top navigation bar of the Dataphin homepage, choose Develop > Data Integration.

  2. In the top navigation bar of the integration page, select the Project. (In Dev-Prod mode, you must also select the Environment.)

  3. In the left-side navigation pane, click Batch Pipeline. In the Batch Pipeline list, click the offline pipeline that you want to develop to open its configuration page.

  4. Click Component Library in the upper-right corner of the page to open the Component Library panel.

  5. In the left-side navigation pane of the Component Library panel, select Outputs. Find the Teradata component in the output component list on the right, and drag it to the canvas.

  6. Drag the connection anchor of the upstream input, transform, or flow component to the Teradata output component to connect them.

  7. Click the configuration icon on the Teradata output component card to open the Teradata Output Configuration dialog box.

  8. In the Teradata Output Configuration dialog box, configure the parameters.

    Basic Settings

    Step Name

    The name of the Teradata output component. Dataphin automatically generates a step name, which you can modify based on your business scenario. The name must meet the following requirements:

    • It can contain only Chinese characters, letters, underscores (_), and digits.

    • It cannot exceed 64 characters in length.

    Datasource

    The data source drop-down list displays all Teradata data sources, including those for which you have write-through permission and those for which you do not. Click the copy icon to copy the name of the current data source.

    • For data sources for which you do not have write-through permission, you can click Request next to the data source to request write-through permission. For more information, see Request data source permissions.

    • If you do not have a Teradata data source, click Create Data Source to create one. For more information, see Create a Teradata data source.

    Table

    Select the target table for the output data. You can enter a keyword to search for tables, or enter the exact table name and click Exact Match. After you select a table, the system automatically checks its status. Click the copy icon to copy the name of the selected table.

    Loading Policy

    The policy for loading data into the target table. Only the Append Data policy is supported; the overwrite policy is not. Under the Append Data policy, a dirty data error is reported when a primary key or constraint violation occurs.

    Batch Write Size (optional)

    The maximum amount of data to write in one batch. You can also set Batch Write Records; the system writes a batch as soon as either limit is reached. The default value is 32 MB.

    Batch Write Records (optional)

    The default value is 2048 records. During synchronization, data is written in batches, controlled by the Batch Write Records and Batch Write Size parameters.

    • When the accumulated data reaches either limit (the batch write size or the batch write records), the system treats the batch as full and immediately writes it to the destination in a single operation.

    • We recommend keeping the batch write size at its default and setting the batch write records limit based on the actual size of a single record, typically to a large value so that the limit does not cut batches short and batch writing is fully utilized. For example, if a single record is about 1 KB and the batch write size is 16 MB, set the batch write records to more than 16 MB ÷ 1 KB = 16,384 records, for example 20,000. With this configuration, batch writing is triggered by the batch write size: a write is performed each time the accumulated data reaches 16 MB.

    Prepare Statement (optional)

    The SQL script to be executed on the database before data import.

    For example, to keep the service continuously available, the current step first creates a staging table Target_A and writes the data to it. After the write completes, the step renames Service_B (the table currently serving queries in the database) to Temp_C, renames Target_A to Service_B, and finally drops Temp_C.
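The swap described above can be sketched as Teradata SQL, split between the Prepare and Post statements. This is a minimal sketch: the table names Target_A, Service_B, and Temp_C come from the example, and the staging table is assumed to share the definition of Service_B:

```sql
-- Prepare Statement: create an empty staging table with the same
-- definition as the serving table; the pipeline then writes into it.
CREATE TABLE Target_A AS Service_B WITH NO DATA;

-- Post Statement: swap the staging table in and drop the old data.
RENAME TABLE Service_B TO Temp_C;
RENAME TABLE Target_A TO Service_B;
DROP TABLE Temp_C;
```

Because the renames are fast metadata operations, queries against Service_B see the old data until the swap and the new data immediately after it.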

    Post Statement (optional)

    The SQL script to be executed on the database after the data import completes.
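For example, a post statement can refresh optimizer statistics on the freshly loaded table. A minimal sketch, assuming a hypothetical target table Service_B with an indexed column id:

```sql
-- Recollect statistics after the load so the optimizer
-- plans queries against up-to-date row counts.
COLLECT STATISTICS ON Service_B COLUMN (id);
```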

    Field Mapping

    Input Fields

    Displays the input fields based on the output of the upstream component.

    Output Fields

    Displays the output fields. You can perform the following operations:

    • Field Management: Click Field Management to select the output fields. In the dialog box, use the move icons to move fields between Selected Output Fields and Unselected Output Fields.

    • Batch Add: Click Batch Add to configure in JSON, TEXT, or DDL format.

      • Batch configuration in JSON format, for example:

        // Example:
        [{
          "name": "user_id",
          "type": "String"
         },
         {
          "name": "user_name",
          "type": "String"
         }]
        Note

        name indicates the name of the imported field, and type indicates the field type after import. For example, "name":"user_id","type":"String" indicates that the field named user_id is imported and its field type is set to String.

      • Batch configuration in TEXT format, for example:

        // Example:
        user_id,String
        user_name,String
        • The row delimiter is used to separate the information of each field. The default is a line feed (\n). Line feed (\n), semicolon (;), and period (.) are supported.

        • The column delimiter is used to separate the field name and field type. The default is a comma (,).

      • Batch configuration in DDL format, for example:

        CREATE TABLE tablename (
            id INT PRIMARY KEY,
            name VARCHAR(50),
            age INT
        );
    • Create Output Field: Click +Create Output Field, enter the Column name, and select the Type as prompted. After you complete the configuration for the row, click the save icon to save it.

    Mapping

    Based on the upstream input and the fields of the target table, you can manually select field mappings. Mapping includes Same Row Mapping and Same Name Mapping.

    • Same Name Mapping: Maps fields with the same name.

    • Same Row Mapping: Maps fields in the same row when the field names in the source and target tables are different but the data in the corresponding rows needs to be mapped.

  9. Click OK to complete the property configuration of the Teradata output component.