The MySQL output component writes data to a MySQL data source. In scenarios where data from other data sources is synchronized to a MySQL data source, after configuring the source data source information, you need to configure the target data source for the MySQL output component. This topic describes how to configure a MySQL output component.
Prerequisites
A MySQL data source is added. For more information, see Create a MySQL data source.
The account used to configure the MySQL output component properties must have the write-through permission for the data source. If you do not have the permission, you need to request the data source permission. For more information, see Request, renew, and return data source permissions.
Procedure
In the top navigation bar of the Dataphin homepage, choose Develop > Data Integration.
In the top navigation bar of the integration page, select Project (In Dev-Prod mode, you need to select Environment).
In the left navigation bar, click Batch Pipeline. In the Batch Pipeline list, click the offline pipeline that you want to develop to open its configuration page.
Click Component Library in the upper-right corner of the page to open the Component Library panel.
In the left navigation bar of the Component Library panel, select Outputs. Find the MySQL component in the output component list on the right and drag it to the canvas.
Click and drag the icon of the target input, transform, or flow component to connect it to the current MySQL output component.
Click the icon on the MySQL output component to open the MySQL Output Configuration dialog box.
In the MySQL Output Configuration dialog box, configure the parameters.
Parameter
Description
Basic Settings
Step Name
The name of the MySQL output component. Dataphin automatically generates a step name, which you can modify based on your business scenario. The name must meet the following requirements:
It can contain only Chinese characters, letters, underscores (_), and digits.
It cannot exceed 64 characters in length.
Datasource
The data source dropdown list displays all MySQL data sources, including those for which you have the write-through permission and those for which you do not. Click the icon to copy the current data source name.
For data sources for which you do not have the write-through permission, you can click Request next to the data source to request the write-through permission. For more information, see Request, renew, and return data source permissions.
If you do not have a MySQL data source, click Create Data Source to create one. For more information, see Create a MySQL data source.
Time Zone
The time zone used to process time format data. The default value is the time zone configured in the selected data source. This parameter cannot be modified.
Note: For tasks created before V5.1.2, you can select Data Source Default Configuration or Channel Configuration Time Zone. The default value is Channel Configuration Time Zone.
Data Source Default Configuration: the default time zone of the selected data source.
Channel Configuration Time Zone: the time zone configured in Properties > Channel Configuration for the current integration task.
Database (optional)
Select the database where the table is located. If you do not specify a database, the database specified when the data source was registered is used.
Table
Select the target table for output data. You can enter a keyword to search for a table, or enter the exact table name and click Exact Match. After you select a table, the system automatically checks the table status. Click the icon to copy the name of the selected table.
If the target table for data synchronization does not exist in the MySQL data source, you can use the one-click table creation feature to quickly generate the target table. To create a table with one click, perform the following steps:
Click Create Table With One Click. Dataphin automatically generates the code to create the target table, including the target table name (which defaults to the source table name), field types (initially converted based on Dataphin fields), and other information.
You can modify the SQL script for creating the target table as needed, and then click Create.
After the target table is created, Dataphin automatically sets it as the target table for output data. The one-click table creation feature creates the target table for data synchronization in the development and production environments; Dataphin selects the production environment by default. If a table with the same name and structure already exists in the production environment, you can deselect the production environment before creating the table.
Note: If a table with the same name already exists in the development or production environment, Dataphin reports an error indicating that the table already exists when you click Create.
If no matching table is found, you can also manually enter a table name for integration.
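As an illustration of the generated script in step 1, the auto-created DDL for a source with fields `user_id` and `user_name` might look like the following (a sketch only; the actual table name, column names, and types are derived from the upstream fields, and you can edit the script before you click Create):

```sql
-- Illustrative auto-generated DDL; the table name defaults to the
-- source table name, and types are converted from the Dataphin fields
CREATE TABLE `ods_user` (
  `user_id`   VARCHAR(256),
  `user_name` VARCHAR(256)
);
```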
Loading Policy
Select a policy for writing data to the target table. Valid values:
Append Data (INSERT INTO): appends data to the existing data in the target table without modifying historical data. If a primary key or unique constraint conflict occurs, a dirty data error is reported.
Overwrite On Primary Key Conflict (REPLACE INTO): if a primary key or unique constraint conflict occurs, the system first deletes the entire old row with the duplicate key, and then inserts the new data.
Update On Primary Key Conflict (ON DUPLICATE KEY UPDATE): if a primary key or unique constraint conflict occurs, the system updates only the mapped fields on the existing record.
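For reference, the three policies correspond to the following MySQL statement forms (a sketch; `target_table` and its columns are placeholders):

```sql
-- Append Data: plain insert; a duplicate key raises an error (dirty data)
INSERT INTO target_table (id, name) VALUES (1, 'a');

-- Overwrite On Primary Key Conflict: the conflicting old row is deleted
-- in full, then the new row is inserted
REPLACE INTO target_table (id, name) VALUES (1, 'a');

-- Update On Primary Key Conflict: only the mapped columns of the
-- existing row are updated in place
INSERT INTO target_table (id, name) VALUES (1, 'a')
ON DUPLICATE KEY UPDATE name = VALUES(name);
```

Note that REPLACE INTO removes the whole old row, so unmapped columns lose their previous values, while ON DUPLICATE KEY UPDATE preserves them.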
Batch Write Data Size (optional)
The maximum amount of data to write to the destination in one batch. You can also set Batch Write Record Count; the system writes a batch when either limit is reached. The default value is 32 MB.
Batch Write Record Count (optional)
The default value is 2,048 records. During synchronization, data is written in batches, controlled by two parameters: Batch Write Record Count and Batch Write Data Size.
When the accumulated data reaches either limit (the data size or the record count), the system considers the batch full and immediately writes it to the destination in one operation.
We recommend keeping the batch write data size at 32 MB. Adjust the record count limit based on the actual size of a single record, typically setting it high enough that the data size limit is what triggers the write, to fully benefit from batching. For example, if a single record is approximately 1 KB and Batch Write Data Size is set to 16 MB, set the record count to more than 16 MB / 1 KB = 16,384 records, such as 20,000. With this configuration, batch writes are triggered by the data size limit: whenever the accumulated data reaches 16 MB, a write operation is performed.
Preparation Statement (optional)
The SQL script to be executed on the database before data import.
For example, to keep a service continuously available: before the current step writes data, the preparation statement creates a staging table Target_A, and the step writes its data to Target_A. After the write completes, the completion statement renames the Service_B table (which continuously serves queries in the database) to Temp_C, renames Target_A to Service_B, and finally deletes Temp_C.
Completion Statement (optional)
The SQL script to be executed on the database after data import.
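The table-swap example above can be sketched as a pair of scripts (assuming the table names Target_A, Service_B, and Temp_C from the example):

```sql
-- Preparation statement: create the staging table before data is written
CREATE TABLE Target_A LIKE Service_B;

-- (the integration task then writes the synchronized data into Target_A)

-- Completion statement: swap the staging table in with a single atomic
-- rename, then remove the superseded data
RENAME TABLE Service_B TO Temp_C, Target_A TO Service_B;
DROP TABLE Temp_C;
```

Because RENAME TABLE swaps both tables in one atomic statement, queries against Service_B never see a missing table during the switchover.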
Field Mapping
Input Fields
Displays the input fields based on the output of the upstream component.
Output Fields
Displays the output fields. You can perform the following operations:
Field Management: Click Field Management to select output fields.

Click the icon to move fields from Selected Input Fields to Unselected Input Fields.
Click the icon to move fields from Unselected Input Fields to Selected Input Fields.
Batch Add: Click Batch Add to configure fields in JSON, TEXT, or DDL format.
To configure fields in JSON format, use the following example:
Example: [{ "name": "user_id", "type": "String" }, { "name": "user_name", "type": "String" }]
Note: `name` specifies the name of the imported field, and `type` specifies the field type after import. For example, `"name":"user_id","type":"String"` imports the field named `user_id` and sets its type to `String`.
To configure fields in TEXT format, use the following example:
Example: user_id,String
user_name,String
The row delimiter separates the entries for each field. The default is a line feed (\n); line feed (\n), semicolon (;), and period (.) are supported.
The column delimiter separates the field name from the field type. The default is a comma (,).
To configure fields in DDL format, use the following example:
CREATE TABLE tablename ( id INT PRIMARY KEY, name VARCHAR(50), age INT );
Create Output Field: Click +Create Output Field, enter the Column name and select the Type as prompted. After completing the configuration for the current row, click the icon to save.
Mapping
Based on the upstream input and the fields of the target table, you can manually select field mappings. Mapping includes Same Row Mapping and Same Name Mapping.
Same Name Mapping: maps fields with the same name.
Same Row Mapping: maps fields in the same row when the field names in the source and target tables are different but the data in the corresponding rows needs to be mapped.
Click OK to complete the property configuration of the MySQL output component.