
Dataphin: Configure Microsoft SQL Server output component

Last Updated: Jul 07, 2025

The Microsoft SQL Server output component writes data to a Microsoft SQL Server data source. In scenarios where data from other data sources is synchronized to a Microsoft SQL Server data source, you need to configure the target data source for the Microsoft SQL Server output component after configuring the source data source information. This topic describes how to configure a Microsoft SQL Server output component.

Prerequisites

Procedure

  1. In the top navigation bar of the Dataphin homepage, choose Develop > Data Integration.

  2. In the top navigation bar of the integration page, select a project. (In Dev-Prod mode, you also need to select an environment.)

  3. In the left-side navigation pane, click Batch Pipeline, and then click the offline pipeline that you want to develop in the Batch Pipeline list to open the configuration page of the offline pipeline.

  4. Click Component Library in the upper-right corner of the page to open the Component Library panel.

  5. In the left-side navigation pane of the Component Library panel, select Outputs, find the Microsoft SQL Server component in the output component list on the right, and drag the component to the canvas.

  6. Drag a connection from the output anchor of the target input component to the current Microsoft SQL Server output component.

  7. Click the configuration icon in the Microsoft SQL Server output component card to open the Microsoft SQL Server Output Configuration dialog box.

  8. In the Microsoft SQL Server Output Configuration dialog box, configure the parameters.

    The parameters are described as follows.

    Basic Settings

    Step Name

    The name of the Microsoft SQL Server output component. Dataphin automatically generates a step name, which you can modify based on your business scenario. The name must meet the following requirements:

    • It can contain only Chinese characters, letters, underscores (_), and digits.

    • It can be up to 64 characters in length.

    Datasource

    The data source drop-down list displays all Microsoft SQL Server data sources, including both those for which you have write permission and those for which you do not. Click the copy icon to copy the current data source name.

    Time Zone

    Time-type data is processed according to the selected time zone. The default is the time zone configured in the selected data source and cannot be modified.

    Note

    For tasks created before V5.1.2, you can select Data Source Default Configuration or Channel Configuration Time Zone. The default selection is Channel Configuration Time Zone.

    • Data Source Default Configuration: The default time zone of the selected data source.

    • Channel Configuration Time Zone: The time zone configured in Properties > Channel Configuration for the current integration task.

    Schema (Optional)

    Supports cross-schema table selection. Select the schema where the table is located. If not specified, the schema configured in the data source is used by default.

    Table

    Select the target table for output data. You can enter a keyword to search for tables, or enter the exact table name and click Exact Match. After you select a table, the system automatically checks the table status. Click the copy icon to copy the name of the currently selected table.

    Loading Policy

    Select the policy for writing data to the target table. Loading Policy includes:

    • Append Data (INSERT INTO): If a primary key or constraint conflict occurs, the conflicting record is reported as dirty data.

    • Update On Primary Key Conflict (MERGE INTO): If a primary key or constraint conflict occurs, the mapped fields of the existing record are updated.
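    The two policies roughly correspond to the following SQL statements. This is an illustrative sketch only (the table dbo.users and its columns are hypothetical, not part of Dataphin's generated SQL):

    ```sql
    -- Hypothetical target table: dbo.users(id INT PRIMARY KEY, name VARCHAR(50)).

    -- Append Data: a plain insert. A duplicate id raises a primary key
    -- violation, which the component reports as dirty data.
    INSERT INTO dbo.users (id, name) VALUES (1, 'alice');

    -- Update On Primary Key Conflict: a merge. A duplicate id updates the
    -- mapped fields of the existing row instead of failing.
    MERGE INTO dbo.users AS t
    USING (SELECT 1 AS id, 'alice' AS name) AS s
        ON t.id = s.id
    WHEN MATCHED THEN
        UPDATE SET t.name = s.name
    WHEN NOT MATCHED THEN
        INSERT (id, name) VALUES (s.id, s.name);
    ```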

    Synchronous Write

    The primary key update syntax is not an atomic operation. If the data to be written contains duplicate primary keys, enable synchronous write; otherwise, data is written in parallel. Synchronous write performance is lower than parallel write.

    Note

    This option is only available when the loading policy is set to Update on Primary Key Conflict.

    Batch Write Data Size

    The maximum amount of data to write in a single batch. You can also set Batch Write Records; the system writes a batch when whichever limit is reached first. The default is 32 MB.

    Batch Write Records

    The default is 2048 records. When synchronizing data, a batch write strategy is used, with parameters including Batch Write Records and Batch Write Data Size.

    • When the accumulated data reaches either of the set limits (i.e., the batch write data size or record count limit), the system considers a batch of data to be full and immediately writes this batch of data to the target at once.

    • We recommend setting the batch write data size to 32 MB. Adjust the batch write record limit according to the actual size of a single record, and usually set it to a relatively large value so that batch writing can take full effect. For example, if a single record is about 1 KB and the batch write data size is set to 16 MB, set the batch write record count to more than 16 MB ÷ 1 KB = 16,384 records, for example 20,000. With this configuration, the data size limit is reached first, so the system executes a write operation whenever the accumulated data reaches 16 MB.

    Prepare Statement

    Optional. SQL script to be executed on the database before data import.

    For example, to keep the service continuously available, the prepare statement creates a target table Target_A before the current step writes data, and the step writes to Target_A. After the step finishes writing, the end statement renames the continuously serving table Service_B to Temp_C, renames Target_A to Service_B, and finally drops Temp_C.
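    A Prepare Statement for the table-swap flow above might look like the following sketch. The table name Target_A and its columns are illustrative; match the DDL to your actual target table:

    ```sql
    -- Prepare Statement: create the staging target table before data is written.
    IF OBJECT_ID('dbo.Target_A', 'U') IS NULL
        CREATE TABLE dbo.Target_A (
            id INT PRIMARY KEY,
            name VARCHAR(50)
        );
    ```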

    End Statement

    Optional. SQL script to be executed on the database after data import.
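    Continuing the table-swap example described under Prepare Statement, the End Statement could perform the renames and cleanup. This is a sketch; the table names are illustrative:

    ```sql
    -- End Statement: swap the freshly loaded table into service.
    EXEC sp_rename 'dbo.Service_B', 'Temp_C';   -- retire the serving table
    EXEC sp_rename 'dbo.Target_A', 'Service_B'; -- promote the new table
    DROP TABLE dbo.Temp_C;                      -- remove the retired table
    ```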

    Field Mapping

    Input Fields

    Displays the input fields based on the output of the upstream component.

    Output Fields

    Displays the output fields. You can perform the following operations:

    • Field Management: Click Field Management to select output fields.

      • Use the arrow icons to move fields between Selected Input Fields and Unselected Input Fields.

    • Batch Add: Click Batch Add to support batch configuration in JSON, TEXT, and DDL formats.

      • Batch configuration in JSON format, for example:

        [{"name":"id","type":"String"},
        {"name":"aaasa","type":"String"},
        {"name":"creator","type":"String"},
        {"name":"modifier","type":"String"},
        {"name":"creator_nickname","type":"String"},
        {"name":"modifier_nickname","type":"String"},
        {"name":"create_time","type":"Date"},
        {"name":"modify_time","type":"Date"},
        {"name":"qbi_system_upload_id","type":"Long"}]
        Note

        name represents the imported field name, and type represents the field type after import. For example, "name":"user_id","type":"String" means importing a field named user_id and setting its field type to String.

      • Batch configuration in TEXT format, for example:

        id,String
        aaasa,String
        creator,String
        modifier,String
        creator_nickname,String
        modifier_nickname,String
        create_time,Date
        modify_time,Date
        qbi_system_upload_id,Long
        • The row delimiter is used to separate the information of each field. The default is a line feed (\n), and it supports line feed (\n), semicolon (;), and period (.).

        • The column delimiter is used to separate the field name and field type. The default is a comma (,).

      • Batch configuration in DDL format, for example:

        CREATE TABLE tablename (
            id INT PRIMARY KEY,
            name VARCHAR(50),
            age INT
        );
    • Create New Output Field: Click +Create New Output Field, fill in Column and select Type as prompted. After completing the configuration for the current row, click the save icon to save it.

    Quick Mapping

    Based on the upstream input and the target table fields, you can manually select field mappings. Quick Mapping includes Same Row Mapping and Same Name Mapping.

    • Same Name Mapping: Maps fields with the same name.

    • Same Row Mapping: Use this when the field names in the source and target tables differ but the data in corresponding rows should be mapped. Only fields in the same row are mapped.

  9. Click OK to complete the property configuration of the Microsoft SQL Server output component.