By configuring the Doris output component, you can write data read from external databases into Doris, or copy and push data from storage systems connected to the big data platform to Doris for data integration and reprocessing. This topic describes how to configure the Doris output component.
Prerequisites
A Doris data source is added. For more information, see Create a Doris data source.
The account used to configure the Doris output component properties must have the write-through permission for the data source. If you do not have the permission, you need to request the data source permission. For more information, see Request data source permissions.
Procedure
In the top navigation bar of the Dataphin homepage, choose Development > Data Integration.
In the top navigation bar of the integration page, select Project (In Dev-Prod mode, you need to select an environment).
In the left navigation pane, click Batch Pipeline. In the Batch Pipeline list, click the offline pipeline that you want to develop to open its configuration page.
Click Component Library in the upper-right corner of the page to open the Component Library panel.
In the left navigation pane of the Component Library panel, select Outputs. Find the Doris component in the output component list on the right, and drag it to the canvas.
Click and drag the connection point of the target input, transformation, or flow component to connect it to the Doris output component.
Click the configuration icon on the Doris output component card to open the Doris Output Configuration dialog box.
In the Doris Output Configuration dialog box, configure the parameters according to the following table.
Parameter
Description
Basic Settings
Step Name
The name of the Doris output component. Dataphin automatically generates a step name, which you can modify based on your business scenario. The name must meet the following requirements:
It can contain only Chinese characters, letters, underscores (_), and digits.
The name can be up to 64 characters in length.
Datasource
The data source drop-down list displays all Doris-type data sources, including those for which you have write-through permission and those for which you do not. Click the copy icon to copy the name of the current data source.
For data sources for which you do not have write-through permission, you can click Request next to the data source to request it. For more information, see Request data source permissions.
If no Doris-type data source is available, click Create Data Source to create one. For more information, see Create a Doris data source.
Table
Select the destination table for the output data. You can enter a keyword to search for tables, or enter the exact table name and click Exact Match. After you select a table, the system automatically checks the table status. Click the copy icon to copy the name of the selected table.
If the Doris data source does not contain a target table for data synchronization, you can use the one-click table creation feature to quickly generate one. To create a table with one click, perform the following steps:
Click Create Table With One Click. Dataphin automatically matches the code to create the target table, including the target table name (default is the source table name), field types (initially converted based on Dataphin fields), and other information.
You can modify the SQL script for creating the target table as needed, and then click Create.
After the target table is created, Dataphin automatically sets it as the destination table for the output data. The one-click table creation feature creates target tables for data synchronization in the development and production environments. Dataphin selects production environment table creation by default; if a table with the same name and structure already exists in the production environment, you do not need to select it.
Note: If a table with the same name already exists in the development or production environment, Dataphin reports an error when you click Create.
If no matching table is found, you can still run the integration with a manually entered table name.
Data Format
You can select CSV or JSON.
If you select CSV, you also need to configure CSV Import Column Delimiter and CSV Import Row Delimiter.
CSV Import Column Delimiter (optional)
When you use Stream Load to import CSV data, you can configure the column delimiter. The default delimiter is _@dp@_; if you use the default value, do not specify it here. If your data contains _@dp@_, you must specify a custom delimiter.
CSV Import Row Delimiter (optional)
When you use Stream Load to import CSV data, you can configure the row delimiter. The default delimiter is _#dp#_; if you use the default value, do not specify it here. If your data contains _#dp#_, you must specify a custom delimiter.
Batch Write Data Size (optional)
The maximum amount of data to write in a single batch. You can also set Batch Write Count; the system writes a batch when either limit is reached. The default is 32 MB.
Batch Write Count (optional)
The default is 2,048 records. During data synchronization, the system uses a batch write strategy controlled by Batch Write Count and Batch Write Data Size.
When the accumulated data reaches either limit (the batch write data size or the record count), the system considers the batch full and immediately writes it to the destination in a single operation.
We recommend setting the batch write data size to 32 MB. Adjust the batch write count based on the actual size of a single record to take full advantage of batch writing. For example, if a single record is approximately 1 KB, you can set the batch write data size to 16 MB and set the batch write count to a value greater than 16 MB divided by 1 KB (that is, greater than 16,384 records), such as 20,000. With this configuration, the system triggers batch writes based on the data size limit, executing a write operation whenever the accumulated data reaches 16 MB.
Prepare Statement (optional)
The SQL script to be executed on the database before data import.
For example, to keep a service continuously available, the prepare statement can create a target table Target_A before the current step writes data; the step then writes to Target_A. After the write completes, the end statement renames the currently serving table Service_B to Temp_C, renames Target_A to Service_B, and finally drops Temp_C.
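The prepare half of the table-swap example above might be sketched as follows. This is a minimal illustration, assuming the staging table Target_A should share the schema of the serving table Service_B; table names are taken from the example and are not prescribed by Dataphin.

```sql
-- Prepare Statement: runs before the import.
-- Create a fresh staging table with the same schema as the serving table.
CREATE TABLE IF NOT EXISTS Target_A LIKE Service_B;
```

The import step then writes its data into Target_A while Service_B continues to serve queries.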
End Statement (optional)
The SQL script to be executed on the database after the data import completes.
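Continuing the table-swap example from the Prepare Statement description, the end statement could perform the renames after the import finishes. A minimal sketch, using the illustrative table names from that example:

```sql
-- End Statement: runs after the import.
-- Swap the freshly loaded table into service, then drop the old data.
ALTER TABLE Service_B RENAME Temp_C;
ALTER TABLE Target_A RENAME Service_B;
DROP TABLE IF EXISTS Temp_C;
```

Because each rename is a metadata-only operation, the serving table is unavailable only for the brief moment between the two renames.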
Field Mapping
Input Fields
Displays the input fields based on the output of the upstream component.
Output Fields
Displays the output fields. You can perform the following operations:
Field Management: Click Field Management to select output fields.
Click the remove icon to move fields from Selected Input Fields to Unselected Input Fields.
Click the add icon to move fields from Unselected Input Fields to Selected Input Fields.
Batch Add: Click Batch Add to support JSON, TEXT format, and DDL format batch configuration.
Batch configuration in JSON format, for example:
[
  { "name": "user_id", "type": "String" },
  { "name": "user_name", "type": "String" }
]
Note: `name` specifies the name of the imported field, and `type` specifies the data type of the field after it is imported. For example, "name":"user_id","type":"String" imports the field named `user_id` and sets its data type to String.
Batch configuration in TEXT format, for example:
user_id,String
user_name,String
The row delimiter separates the information for each field. The default is a line feed (\n); line feed (\n), semicolon (;), and period (.) are supported.
The column delimiter separates the field name from the field type. The default is a comma (,).
Batch configuration in DDL format, for example:
CREATE TABLE tablename ( id INT PRIMARY KEY, name VARCHAR(50), age INT );
Create Output Field: Click +Create Output Field, enter the Column name, and select the Type as prompted. After you complete the configuration for the current row, click the save icon to save it.
Mapping
Based on the upstream input and target table fields, you can manually select field mappings. Quick Mapping includes Same Row Mapping and Same Name Mapping.
Same Name Mapping: Maps fields with the same name.
Same Row Mapping: Maps only fields in the same row. Use this when the field names in the source and target tables differ but the data in corresponding rows should be mapped.
Click OK to complete the property configuration of the Doris Output Component.