The Oracle output component writes data to an Oracle data source. In scenarios where data from other data sources is synchronized to an Oracle data source, after configuring the source data information, you need to configure the target data source for the Oracle output component. This topic describes how to configure an Oracle output component.
Prerequisites
An Oracle data source is added. For more information, see Create an Oracle data source.
The account used to configure the Oracle output component properties must have write permission for the data source. If you do not have the permission, you need to apply for the data source permission. For more information, see Apply for, renew, and return data source permissions.
Procedure
In the top navigation bar of the Dataphin homepage, choose Develop > Data Integration.
In the top navigation bar of the Integration page, select the project. (In Dev-Prod mode, you also need to select the environment.)
In the left navigation pane, click Batch Pipeline, and then click the offline pipeline that you want to develop in the Batch Pipeline list to open its configuration page.
Click Component Library in the upper-right corner of the page to open the Component Library panel.
In the left navigation pane of the Component Library panel, select Outputs, find the Oracle component in the output component list on the right, and drag it to the canvas.
Click and drag the connection icon of the desired input, transform, or flow component to connect it to the current Oracle output component.
Click the configuration icon on the Oracle output component to open the Oracle Output Configuration dialog box.
In the Oracle Output Configuration dialog box, configure the parameters.
Parameter
Description
Basic Settings
Step Name
The name of the Oracle output component. Dataphin automatically generates a step name, which you can modify based on your business scenario. The name must meet the following requirements:
It can contain only Chinese characters, letters, underscores (_), and digits.
It can be up to 64 characters in length.
Datasource
The data source drop-down list displays all Oracle data sources, including those for which you have write permission and those for which you do not. Click the copy icon to copy the current data source name.
For data sources for which you do not have write permission, you can click Apply next to the data source to apply for write permission. For more information, see Apply for data source permissions.
If you do not have an Oracle data source, click Create Data Source to create one. For more information, see Create an Oracle data source.
Time Zone
Time-type data is processed according to the current time zone. The default is the time zone configured in the selected data source and cannot be modified.
Note: For tasks created before V5.1.2, you can select Data Source Default Configuration or Channel Configuration Time Zone. The default is Channel Configuration Time Zone.
Data Source Default Configuration: The default time zone of the selected data source.
Channel Configuration Time Zone: The time zone configured in Properties > Channel Configuration for the current integration task.
Schema (Optional)
Supports cross-schema table selection. Select the schema where the table is located. If not specified, the default is the schema configured in the data source.
Table
Select the target table for the output data. You can search by entering table-name keywords, or enter the exact table name and click Exact Match. After you select a table, the system automatically checks the table status. Click the copy icon to copy the name of the currently selected table.
If the Oracle data source does not contain a target table for data synchronization, you can use the one-click table creation feature to quickly generate one. The detailed steps are as follows:
Click One-Click Table Creation. Dataphin automatically matches the code to create the target table, including the target table name (default is the source table name), field types (initially converted based on Dataphin fields), and other information.
You can modify the SQL script for creating the target table according to your business needs, and then click Create.
Important: If you do not modify the table name after CREATE TABLE, the system generates an uppercase table name by default. If you enclose the table name after CREATE TABLE in double quotation marks (""), a case-sensitive table name is generated.
After the target table is successfully created, Dataphin automatically sets it as the target table for the output data. One-click table creation creates target tables for data synchronization in both the development and production environments, and Dataphin selects production-environment table creation by default. If a table with the same name and structure already exists in the production environment, you do not need to select production-environment table creation.
Note: If a table with the same name already exists in the development or production environment, Dataphin reports an error that the table already exists when you click Create.
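As a sketch of the naming behavior described above (`my_table` is a hypothetical table name):

```sql
-- Unquoted: Oracle folds the identifier to uppercase, so the table is stored as MY_TABLE
CREATE TABLE my_table (id NUMBER PRIMARY KEY, name VARCHAR2(50));

-- Double-quoted: the identifier is stored case-sensitively as "My_Table"
CREATE TABLE "My_Table" (id NUMBER PRIMARY KEY, name VARCHAR2(50));
```

A case-sensitive name must then be double-quoted in every later statement that references it.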
Loading Policy
Select the policy for writing data to the target table.
Append Data (INSERT INTO): When a primary key or constraint violation occurs, a dirty-data error is reported.
Update On Primary Key Conflict (MERGE INTO): When a primary key or constraint violation occurs, the mapped fields of the existing record are updated.
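The two policies correspond roughly to the following Oracle statements (the table `target_table` and its columns are hypothetical, for illustration only):

```sql
-- Append Data: plain insert; a duplicate key raises ORA-00001 and is treated as dirty data
INSERT INTO target_table (id, name) VALUES (:id, :name);

-- Update On Primary Key Conflict: upsert via MERGE on the primary key
MERGE INTO target_table t
USING (SELECT :id AS id, :name AS name FROM dual) s
ON (t.id = s.id)
WHEN MATCHED THEN UPDATE SET t.name = s.name
WHEN NOT MATCHED THEN INSERT (id, name) VALUES (s.id, s.name);
```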
Synchronous Write
Oracle's primary key update (MERGE INTO) syntax is not atomic. If the data to be written contains duplicate primary keys, enable synchronous write; otherwise, parallel write is used. Synchronous write performs worse than parallel write.
Note: This option is available only when Loading Policy is set to Update On Primary Key Conflict.
Batch Write Data Size (Optional)
The amount of data to write in one batch. You can also set Batch Write Records; the system writes a batch when either limit is reached first. The default is 32 MB.
Batch Write Records (Optional)
The default is 2,048 records. During data synchronization, a batch-write strategy is used, controlled by the Batch Write Records and Batch Write Data Size parameters.
When the accumulated data reaches either limit (the batch data size or the record count), the system considers the batch full and immediately writes it to the target in a single operation.
It is recommended to set Batch Write Data Size to 32 MB. Adjust Batch Write Records according to the actual size of a single record; a larger value generally makes better use of batch writing. For example, if a single record is about 1 KB and Batch Write Data Size is set to 16 MB, set Batch Write Records to more than 16 MB ÷ 1 KB = 16,384 records, for example 20,000. With this configuration, the data-size limit triggers the batch write: a write is executed whenever the accumulated data reaches 16 MB.
Prepare Statement (Optional)
The SQL script to be executed on the database before data import.
For example, to keep the service continuously available: before the current step writes data, create a target table Target_A and write to it; after the current step finishes writing, rename the continuously serving table Service_B to Temp_C, rename Target_A to Service_B, and finally drop Temp_C.
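The swap pattern described above can be sketched as follows (Target_A, Service_B, and Temp_C are the example names from the description; the CREATE TABLE would go in the Prepare Statement and the renames and drop in the End Statement):

```sql
-- Prepare Statement: create an empty staging table with the same structure as the serving table
CREATE TABLE Target_A AS SELECT * FROM Service_B WHERE 1 = 0;

-- End Statement: after the load completes, swap the freshly loaded table into service
ALTER TABLE Service_B RENAME TO Temp_C;
ALTER TABLE Target_A RENAME TO Service_B;
DROP TABLE Temp_C;
```

Because the renames happen only after the load succeeds, readers query the old Service_B until the new data is fully written.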
End Statement (Optional)
The SQL script to be executed on the database after data import.
Advanced Configuration
Login Timeout
When the login time exceeds the set duration, the system automatically disconnects and reports an error. The unit is seconds (s), with a default of 600s.
Query Timeout
When the query time exceeds the set duration, the system automatically disconnects and reports an error. The unit is seconds (s), with a default of 1800s.
Field Mapping
Input Fields
Displays the input fields based on the output of the upstream component.
Output Fields
Displays the output fields. You can perform the following operations:
Field Management: Click Field Management to select output fields.
Click the move icons to move fields from Selected Input Fields to Unselected Input Fields, or from Unselected Input Fields to Selected Input Fields.
Batch Add: Click Batch Add to configure in JSON, TEXT, or DDL format.
Batch configuration in JSON format, for example:
[{"name": "user_id", "type": "String"}, {"name": "user_name", "type": "String"}]
Note: `name` is the name of the imported field, and `type` is the field's data type. For example, "name":"user_id","type":"String" imports the field named `user_id` and sets its data type to `String`.
Batch configuration in TEXT format, for example:
user_id,String
user_name,String
The row delimiter separates the entries for each field; the default is a line feed (\n). Line feed (\n), semicolon (;), and period (.) are supported.
The column delimiter separates the field name from the field type; the default is a comma (,).
Batch configuration in DDL format, for example:
CREATE TABLE tablename ( id INT PRIMARY KEY, name VARCHAR(50), age INT );
Create New Output Field: Click +Create New Output Field, fill in the Column and select the Type as prompted. After completing the configuration for the current row, click the save icon to save it.
Mapping
Based on the upstream input and the target table fields, you can manually select field mappings. Quick Mapping includes Same Row Mapping and Same Name Mapping.
Same Name Mapping: Maps fields with the same name.
Same Row Mapping: Maps only fields in the same row. Use this when the field names in the source and target tables differ but the data in corresponding rows needs to be mapped.
Click OK to complete the Oracle Output Component configuration.