Dataphin: Configure the AnalyticDB for MySQL 3.0 output component

Last Updated: Feb 12, 2026

The AnalyticDB for MySQL 3.0 output component writes data to an AnalyticDB for MySQL 3.0 data source. When you synchronize data from another data source to an AnalyticDB for MySQL 3.0 data source, you must configure both the source data source information and the AnalyticDB for MySQL 3.0 output component as the target. This topic describes how to configure the AnalyticDB for MySQL 3.0 output component.

Prerequisites

Procedure

  1. In the top navigation bar of the Dataphin homepage, choose Develop > Data Integration.

  2. In the top navigation bar of the integration page, select a Project (in Dev-Prod mode, you must also select an Environment).

  3. In the navigation pane on the left, click Batch Pipeline. In the Batch Pipeline list, click the offline pipeline that you want to develop to open its configuration page.

  4. Click Component Library in the upper-right corner of the page to open the Component Library panel.

  5. In the navigation pane on the left of the Component Library panel, select Outputs. Find the AnalyticDB for MySQL 3.0 component in the output component list on the right and drag it to the canvas.

  6. Drag a connection from the target input, transform, or flow component to the current AnalyticDB for MySQL 3.0 output component.

  7. Click the configuration icon on the AnalyticDB for MySQL 3.0 output component card to open the AnalyticDB for MySQL 3.0 Output Configuration dialog box.

  8. In the AnalyticDB for MySQL 3.0 Output Configuration dialog box, configure the parameters.

    Basic Settings

    Step Name

    The name of the AnalyticDB for MySQL 3.0 output component. Dataphin automatically generates a step name, which you can modify based on your business scenario. The name must meet the following requirements:

    • It can contain only Chinese characters, letters, underscores (_), and digits.

    • It cannot exceed 64 characters in length.

    Datasource

    The data source drop-down list displays all AnalyticDB for MySQL 3.0 data sources, whether or not you have write permission on them. Click the copy icon to copy the name of the current data source.

    Time Zone

    The time zone used to process time format data. The default value is the time zone configured in the selected data source. This parameter cannot be modified.

    Note

    For tasks created before V5.1.2, you can select Data Source Default Configuration or Channel Configuration Time Zone. The default value is Channel Configuration Time Zone.

    • Data Source Default Configuration: the default time zone of the selected data source.

    • Channel Configuration Time Zone: the time zone configured in Properties > Channel Configuration for the current integration task.

    Table

    Select the target table for the output data. You can enter a keyword to search for a table, or enter the exact table name and click Exact Match. After you select a table, the system automatically checks its status. Click the copy icon to copy the name of the selected table.

    If the target table for data synchronization does not exist in the AnalyticDB for MySQL 3.0 data source, you can use the one-click table creation feature to quickly generate the target table. The detailed steps are as follows:

    1. Click Create Table With One Click. Dataphin automatically generates the table creation script, including the target table name (which defaults to the source table name), the field types (initially converted from the Dataphin field types), and other information.

    2. Modify the SQL table creation script as needed, and then click Create. After the target table is created, Dataphin automatically sets it as the target table for the output data. The one-click table creation feature creates target tables for data synchronization in both the development and production environments, and Dataphin selects the production environment by default. If a table with the same name and structure already exists in the production environment, you can deselect table creation for the production environment.

    Note

    If a table with the same name already exists in the development or production environment, Dataphin will report an error indicating that the table already exists when you click Create.
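    For illustration, an auto-generated table creation script might look roughly like the following. The table and column names here are hypothetical; the actual script is derived from your source fields and may include additional engine-specific clauses.

    ```sql
    -- Hypothetical auto-generated script: the table name defaults to the
    -- source table name, and column types are converted from Dataphin types.
    CREATE TABLE dws_user (
        user_id   BIGINT       NOT NULL,
        user_name VARCHAR(256) NULL,
        PRIMARY KEY (user_id)
    );
    ```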

    Loading Policy

    Select the policy for writing data to the target table. Loading Policy includes:

    • Append Data (INSERT INTO): appends data to the existing data in the target table without modifying historical data. If a primary key or constraint violation occurs, a dirty data error is reported.

    • Overwrite On Primary Key Conflict (REPLACE INTO): if a primary key or constraint violation occurs, the entire old row with the duplicate primary key is deleted first, and then the new row is inserted.

    • Update On Primary Key Conflict (ON DUPLICATE KEY UPDATE): if a primary key or constraint violation occurs, the mapped fields of the existing row are updated.
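    As a sketch, the three policies correspond roughly to the following MySQL-style statements. The table and column names are hypothetical, and the exact statements Dataphin issues may differ.

    ```sql
    -- Append Data: plain insert; a duplicate key raises an error,
    -- which the pipeline reports as dirty data.
    INSERT INTO dws_user (user_id, user_name) VALUES (1, 'alice');

    -- Overwrite On Primary Key Conflict: the conflicting row is deleted
    -- in full, then the new row is inserted (unmapped columns are lost).
    REPLACE INTO dws_user (user_id, user_name) VALUES (1, 'alice');

    -- Update On Primary Key Conflict: only the mapped fields of the
    -- existing row are updated.
    INSERT INTO dws_user (user_id, user_name) VALUES (1, 'alice')
    ON DUPLICATE KEY UPDATE user_name = VALUES(user_name);
    ```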

    Batch Write Data Volume (optional)

    The maximum amount of data to write in a single batch. You can also set Batch Write Count; the system writes a batch when either limit is reached. The default value is 32 MB.

    Batch Write Count (optional)

    The default value is 2048 records. During synchronization, data is written in batches, controlled by the Batch Write Count and Batch Write Data Volume parameters.

    • When the accumulated data reaches either limit (the batch write data volume or the batch write count), the system considers the batch full and immediately writes it to the target in one operation.

    • We recommend setting the batch write data volume to 32 MB. Adjust the batch write count based on the actual size of a single record to fully utilize batch writing. For example, if a single record is about 1 KB and the batch write data volume is set to 16 MB, set the batch write count to a value greater than 16 MB divided by 1 KB (that is, greater than 16,384 records), such as 20,000. With this configuration, the system triggers batch writes based on the data volume: whenever the accumulated data reaches 16 MB, a write is performed.

    Prepare Statement (optional)

    The SQL script to be executed on the database before data import.

    For example, to keep the service continuously available, the prepare statement first creates a target table Target_A, and the current step then writes data to Target_A. After the write completes, the post statement renames the continuously serving table Service_B to Temp_C, renames Target_A to Service_B, and finally deletes Temp_C.

    Post Statement (optional)

    The SQL script to be executed on the database after data import.
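    The table-swap example above could be split across the two statements roughly as follows, reusing the Target_A, Service_B, and Temp_C names from the example. This is a minimal sketch assuming the target database supports CREATE TABLE ... LIKE and RENAME TABLE.

    ```sql
    -- Prepare Statement (runs before the import):
    -- create the staging table with the same structure as the serving table.
    CREATE TABLE Target_A LIKE Service_B;

    -- Post Statement (runs after the import):
    -- swap the freshly loaded table into place, then clean up.
    RENAME TABLE Service_B TO Temp_C, Target_A TO Service_B;
    DROP TABLE Temp_C;
    ```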

    Field Mapping

    Input Fields

    Displays the input fields based on the upstream input.

    Output Fields

    Displays the output fields. You can perform the following operations:

    • Field management: Click Field Management to select output fields.

      • Click the remove icon to move fields from Selected Input Fields to Unselected Input Fields.

      • Click the add icon to move fields from Unselected Input Fields to Selected Input Fields.

    • Batch add: Click Batch Add to configure in JSON, TEXT, or DDL format.

      • Batch configuration in JSON format, for example:

        // Example:
        [{
          "name": "user_id",
          "type": "String"
         },
         {
          "name": "user_name",
          "type": "String"
         }]
        Note

        The `name` parameter specifies the name of the field to import, and the `type` parameter specifies the field type after the import. For example, "name":"user_id","type":"String" imports the field named `user_id` and sets its type to `String`.

      • Batch configuration in TEXT format, for example:

        // Example:
        user_id,String
        user_name,String
        • The row delimiter is used to separate the information of each field. The default is a line feed (\n). Supported delimiters include line feed (\n), semicolon (;), and period (.).

        • The column delimiter is used to separate the field name and field type. The default is a comma (,).

      • Batch configuration in DDL format, for example:

        CREATE TABLE tablename (
            id INT PRIMARY KEY,
            name VARCHAR(50),
            age INT
        );
    • Create a new output field: Click +Create Output Field, enter the Column name, and select the Type as prompted. After configuring the row, click the save icon to save it.

    Mapping

    Based on the upstream input and the fields of the target table, you can manually select field mappings. Mapping includes Same Row Mapping and Same Name Mapping.

    • Same name mapping: Maps fields with the same name.

    • Same row mapping: Maps the fields that are in the same row. Use this when the field names in the source and target tables differ but the data in corresponding rows needs to be mapped.

  9. Click OK to complete the property configuration of the AnalyticDB for MySQL 3.0 output component.