The Amazon RDS for MySQL output component writes data to an Amazon RDS for MySQL data source. When you sync data from other data sources to an Amazon RDS for MySQL data source, you must configure this component after you configure the source data source. This topic describes how to configure the Amazon RDS for MySQL output component.
Prerequisites
You have created an Amazon RDS for MySQL data source. For more information, see Create an Amazon RDS for MySQL Data Source.
The account that you use to configure the Amazon RDS for MySQL output component must have write-through permission on the data source. If the account does not have this permission, you must request it. For more information, see Request, Renew, or Release Data Source Permissions.
Procedure
In the top menu bar on the Dataphin homepage, choose Develop > Data Integration.
In the top menu bar of the Integration page, select the project. In Dev-Prod mode, also select the environment.
In the navigation pane on the left, click Batch Pipeline. In the Batch Pipeline list, click the offline pipeline that you want to develop. The configuration page for the pipeline opens.
In the upper-right corner of the page, click Component Library to open the Component Library panel.
In the navigation pane on the left of the Component Library panel, click Output. In the output component list on the right, find the Amazon RDS for MySQL component and drag it onto the canvas.
Drag the connection icon from an input, transform, or flow component to the Amazon RDS for MySQL output component to connect them.
Then click the configuration icon on the Amazon RDS for MySQL output component card to open the Amazon RDS for MySQL Output Configuration dialog box.
In the Amazon RDS for MySQL Output Configuration dialog box, configure the parameters.
Parameter
Description
Basic Settings
Step Name
The name of the Amazon RDS for MySQL output component. Dataphin generates a step name automatically. You can change it based on your business scenario. Naming rules:
Use only Chinese characters, letters, underscores (_), and digits.
Do not exceed 64 characters.
Datasource
The drop-down list shows all Amazon RDS for MySQL data sources, including those for which you have write-through permission and those for which you do not. Click the copy icon to copy the current data source name.
If you do not have write-through permission for a data source, click Request next to the data source to request it. For more information, see Request, Renew, or Release Data Source Permissions.
If you do not have an Amazon RDS for MySQL data source, click Create Data Source to create one. For more information, see Create an Amazon RDS for MySQL Data Source.
Database (optional)
Select the database where the table resides. If you leave this field blank, the database specified when you registered the data source is used.
Table
Select the destination table for the output data. Enter a keyword to search for tables, or enter the exact table name and click Exact Search. After you select a table, the system automatically checks its status. Click the copy icon to copy the selected table name.
If the destination table does not exist in the Amazon RDS for MySQL data source, use the one-click table creation feature to generate it quickly. To do so:
Click One-Click Table Creation. Dataphin automatically generates the table creation code for the destination table, including the table name (default: the source table name) and the field types (preliminarily converted from the Dataphin field types).
Modify the SQL script for table creation as needed, then click Create. After the table is created, Dataphin automatically uses it as the destination table for output data.
Note: If a table with the same name exists in the development environment, clicking Create returns an error that the table already exists.
If no matching table is found, you can still integrate using a manually entered table name.
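As a rough illustration, a generated table creation script might look like the following. The table and column names here are hypothetical placeholders, not values the product produces; the field types are what a preliminary Dataphin-to-MySQL conversion could yield.

```sql
-- Hypothetical example of a generated CREATE TABLE script.
-- Table and column names are placeholders; adjust types before clicking Create.
CREATE TABLE `ods_user` (
    `user_id`    BIGINT,
    `user_name`  VARCHAR(256),
    `gmt_create` DATETIME
);
```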
Production Table Missing Policy
The action to take when the production table does not exist. Options: No Action and Automatic Creation. Default: Automatic Creation. If you choose No Action, the task is published without creating the production table. If you choose Automatic Creation, a table with the same name is created in the target environment during publishing.
No Action: If the destination table does not exist, the system warns you during submission but lets you publish anyway. You must create the destination table in the production environment before running the task.
Automatic Creation: Configure Edit CREATE TABLE Statement, which is pre-filled with the CREATE TABLE statement for the selected table. You can adjust the statement as needed. The table name in the CREATE TABLE statement uses the placeholder ${table_name}, and only this placeholder is allowed; at runtime, it is replaced with the actual table name.
If the destination table does not exist, Dataphin first creates it using this statement. If creation fails, publishing fails; fix the statement based on the error message and republish. If the destination table already exists, Dataphin skips table creation.
Note: This setting is available only for projects in Dev-Prod mode.
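For example, a CREATE TABLE statement that uses the required ${table_name} placeholder might look like the following. The column definitions are illustrative assumptions, not part of the product's pre-filled statement.

```sql
-- ${table_name} is the only placeholder allowed; it is replaced with the
-- actual production table name when the task is published.
CREATE TABLE ${table_name} (
    `user_id`   BIGINT,
    `user_name` VARCHAR(256),
    PRIMARY KEY (`user_id`)
);
```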
Loading Policy
Select the policy for writing data to the destination table. Loading Policy options:
Append Data (INSERT INTO): Appends data to the existing data in the destination table without modifying historical data. Rows that violate a primary key or unique constraint are reported as dirty data.
Replace on Primary Key Conflict (REPLACE INTO): If a primary key or unique constraint conflict occurs, deletes the entire conflicting row and inserts the new row.
Update on Primary Key Conflict (ON DUPLICATE KEY UPDATE): If a primary key or unique constraint conflict occurs, updates the mapped fields of the existing record.
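The three loading policies correspond to standard MySQL write statements. A sketch of each, using an assumed table t with primary key id:

```sql
-- Append Data: a duplicate primary key causes an error (reported as dirty data).
INSERT INTO t (id, name) VALUES (1, 'a');

-- Replace on Primary Key Conflict: the old conflicting row is deleted in full,
-- then the new row is inserted.
REPLACE INTO t (id, name) VALUES (1, 'a');

-- Update on Primary Key Conflict: only the mapped fields of the existing row
-- are updated; unmapped columns keep their current values.
INSERT INTO t (id, name) VALUES (1, 'a')
    ON DUPLICATE KEY UPDATE name = VALUES(name);
```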
Batch Write Size (optional)
The maximum amount of data written in a single batch. You can also set Batch Write Count. During writing, whichever limit (size or count) is reached first triggers the batch. Default: 32 MB.
Batch Write Count (optional)
Default: 2,048 rows. Data synchronization uses batched writes. Parameters include Batch Write Count and Batch Write Size.
When the accumulated data volume reaches either limit—the configured size or count—the system treats it as a full batch and writes it to the destination immediately.
We recommend setting the batch write size to 32 MB and adjusting the batch write count based on the average record size; set the count high enough that the size limit, rather than the count, triggers the batch. For example, if each record is about 1 KB and the batch write size is 16 MB, set the batch write count to more than 16,384 (16 MB ÷ 1 KB), such as 20,000 rows. With this setup, a batch write is triggered when the accumulated data reaches 16 MB.
Pre-SQL Statement (optional)
An SQL script to run on the database before data import.
For example, to maintain service availability: before writing data, create the target table Target_A. After writing to Target_A, rename the live service table Service_B to Temp_C. Then rename Target_A to Service_B. Finally, delete Temp_C.
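The table-swap flow described above could be sketched as pre- and post-SQL statements like these. The table names (Target_A, Service_B, Temp_C) follow the example; LIKE-based creation is one possible way to stage the table, not a prescribed implementation.

```sql
-- Pre-SQL: create the staging table before the import writes to it.
CREATE TABLE Target_A LIKE Service_B;

-- Post-SQL: atomically swap the staging table in, then drop the old data.
RENAME TABLE Service_B TO Temp_C, Target_A TO Service_B;
DROP TABLE Temp_C;
```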
Post-SQL Statement (optional)
An SQL script to run on the database after data import.
Field Mapping
Input Fields
Lists input fields from upstream components.
Output Fields
Lists output fields. You can:
Manage fields: Click Field Management to select output fields.

Click the move icon to move fields from Selected Input Fields to Unselected Input Fields.
Click the move icon to move fields from Unselected Input Fields to Selected Input Fields.
Batch add: Click Batch Add to configure fields in JSON, TEXT, or DDL format.
Batch configure in JSON format. Example:
[{ "name": "user_id", "type": "String" }, { "name": "user_name", "type": "String" }]
Note: name is the field name and type is the data type. For example, { "name": "user_id", "type": "String" } imports the field named user_id and sets its type to String.
Batch configure in TEXT format. Example:
user_id,String
user_name,String
The row delimiter separates field entries. Default: line feed (\n). Supported row delimiters: \n, semicolon (;), and period (.).
The column delimiter separates field names from types. Default: comma (,).
DDL format example:
CREATE TABLE tablename ( id INT PRIMARY KEY, name VARCHAR(50), age INT );
Create an output field: Click + Create Output Field, enter the Column name, select the Type, and then click the save icon to save the row.
Mapping
Manually map fields between the upstream input fields and the destination table fields. Mapping options: Name Mapping and Row Mapping.
Name Mapping: Maps fields with identical names.
Row Mapping: Maps fields by position when source and destination field names differ.
Click Confirm to finish configuring the Amazon RDS for MySQL output component.