Data Transmission Service (DTS) provides the extract, transform, and load (ETL) feature. You can use this feature to clean and transform streaming data, and to obtain the data that you need accurately and efficiently. This topic describes how to configure an ETL task in directed acyclic graph (DAG) mode.

Background information

Note The ETL feature is in public preview. You can apply for a free trial of this feature. If you have questions during the free trial, join the DingTalk group 32326646 for technical support.
  • Before you configure an ETL task, take note of the following information:
    • Input/Dimension Table indicates the source database of the ETL task.
    • Output indicates the destination database of the ETL task.
  • DTS provides the streaming ETL feature for the data synchronization process. You can add transformation components between the source and destination databases to transform data and write the processed data to the destination database in real time. For example, you can join two stream tables into a large table and write the data of the large table to the destination database. You can also add a field to the source table, configure a function to assign values to the field, and then write the field to the destination database.
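
For example, the following sketch shows the kind of Flink SQL that such an ETL task expresses. The stream tables orders and customers, the destination table dst_orders_wide, and all field names are hypothetical and serve only as an illustration:

    -- Hypothetical sketch: join two stream tables into a large table and
    -- compute an extra field before writing to the destination table.
    -- All table and field names are illustrative, not part of this topic.
    INSERT INTO dst_orders_wide
    SELECT
      o.order_id,
      c.customer_name,
      o.price * o.quantity AS total_amount  -- field assigned by a function
    FROM orders AS o
    JOIN customers AS c ON o.customer_id = c.customer_id;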

Prerequisites

  • The ETL task is created in one of the following regions: China (Hangzhou), China (Shanghai), China (Qingdao), China (Beijing), China (Zhangjiakou), China (Shenzhen), and China (Guangzhou).
  • The source database belongs to one of the following types: self-managed MySQL databases, ApsaraDB RDS for MySQL, PolarDB for MySQL, PolarDB-X V1.0 (formerly DRDS), self-managed Oracle databases, self-managed PostgreSQL databases, ApsaraDB RDS for PostgreSQL, Db2 for LUW, Db2 for i, and PolarDB for PostgreSQL.
  • The destination database belongs to one of the following types: self-managed MySQL databases, ApsaraDB RDS for MySQL, PolarDB for MySQL, AnalyticDB for MySQL V3.0, self-managed Oracle databases, self-managed PostgreSQL databases, ApsaraDB RDS for PostgreSQL, Db2 for LUW, Db2 for i, and PolarDB for PostgreSQL.
  • The schemas of tables in the destination database are created, because the ETL feature does not support schema migration. For example, assume that Table A contains Field 1, Field 2, and Field 3, and Table B contains Field 2, Field 3, and Field 4. If you want to join Table A and Table B into a table that contains Field 2 and Field 3, you must create Table C that contains Field 2 and Field 3 in the destination database, as shown in the DDL sketch after this list.
  • The ETL feature does not support full data synchronization. Therefore, you can transform only incremental data in real time.
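
Because the ETL feature does not migrate schemas, the destination table must exist before the task starts. The following DDL sketch creates the Table C from the preceding example; all names and data types are hypothetical:

    -- Hypothetical DDL, run in the destination database before the ETL
    -- task starts. Field names and data types are illustrative.
    CREATE TABLE table_c (
      field_2 VARCHAR(64),
      field_3 VARCHAR(64)
    );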

Precautions

  • When you configure an ETL task in the DTS console, you must create separate connection templates for the source and destination databases; the two databases cannot share the same connection template. For more information, see Create a connection template.
  • The source and destination databases must reside in the same region.
  • All stream tables must belong to the same instance.
  • All database names and table names must be unique.
  • The transformation components have the following limits:
    • JOIN: You can perform JOIN operations only between two tables.
    • Field Calculator and Table Record Filter: Only a single table schema is supported.

Configuration process

Configure an ETL task
When you configure an ETL task, you can select one or more source databases, one or more transformation components, and a destination database. The following steps describe how to create an ETL task:
  • Configure source databases: Select one or more source databases in the Input/Dimension Table section and set parameters for each source database. For more information, see Configure source databases.
    Note You can select the same type of source database multiple times.
    1. Configure the node information.
    2. Configure the field information.
    3. Optional: If you select Stream Table as the node type, configure the time attributes.
  • Configure transformation components: Select one or more transformation components in the Transform section and set parameters for each transformation component (see the sketch after this list). For more information, see Configure the transformation components.
    Note You can select the same transformation component multiple times.
    • JOIN
      1. Configure the node information.
      2. Configure the field information.
    • Field Calculator
      1. Enter the name of the transformation component.
      2. Configure the calculator.
    • Table Record Filter
      1. Enter the name of the transformation component.
      2. Specify a WHERE condition.
  • Configure a destination database: Select a destination database type in the Output section and set parameters for the destination database. For more information, see Configure the destination database.
    1. Configure the node information.
    2. Configure the field mappings between the source and destination databases.
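
For reference, the Field Calculator and Table Record Filter settings correspond to expressions of the following kind in the Flink SQL that the task generates. The table orders, its fields, and the expressions are hypothetical:

    -- Hypothetical sketch of what the two components express.
    SELECT
      order_id,
      price * quantity AS total_amount  -- Field Calculator: computed field
    FROM orders
    WHERE order_status = 'PAID';        -- Table Record Filter: WHERE condition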

Procedure

  1. Go to the ETL page.
    Note You can also perform the following steps to configure an ETL task in the Data Management (DMS) console:
    1. Go to the DMS console.
    2. In the top navigation bar, click DTS. Then, in the left-side navigation pane, choose Data integration > Streaming ETL.
    3. Click Create Data Flow. In the Create Data Flow dialog box, specify an ETL task name in the Data Flow Name field and set the Development Method parameter to DAG.
    4. Click OK.
  2. In the upper-left corner of the page, select the region where you want to create an ETL task.
  3. In the left-side navigation pane, click ETL.
  4. On the ETL page, click Create Task (Pay-as-you-go).
  5. On the Create Task page, set parameters for the ETL task.
    1. Create an ETL task.
      Note
      • In this example, MySQL is selected in the Input/Dimension Table section, Field Calculator is selected in the Transform section, and MySQL is selected in the Output section.
      • You can select one or more source databases. You can select the same type of source database multiple times.
      • You can select one or more transformation components. You can select the same transformation component multiple times.
      • You can select only a single destination database.
      1. In the Input/Dimension Table section on the left side of the page, select MySQL and drag it to the canvas on the right side of the page.
      2. Click MySQL on the canvas. In the Input/Dimension Table (MySQL) section, configure the node information, field information, and time attributes. For more information, see Configure source databases.
      3. In the Transform section on the left side of the page, select Field Calculator and drag it to the canvas on the right side of the page.
      4. Move the pointer over the source MySQL database and click the dot to draw a line between the source MySQL database and the Field Calculator component.
      5. In the Node Settings section, complete the node settings and calculator settings. For more information, see Configure the transformation components.
      6. In the Output section on the left side of the page, select MySQL and drag it to the canvas on the right side of the page.
      7. Move the pointer over Field Calculator and click the dot to draw a line between the Field Calculator component and the destination MySQL database.
      8. On the Output MySQL panel, complete the node settings and field mappings. For more information, see Configure the destination database.
      9. After you complete the configurations, click Save and Return, Generate Flink SQL Validator, or Next: Save Task Settings and Precheck.
        • Save and Return: You can save the configurations of the ETL task as a template to improve configuration efficiency.
        • Generate Flink SQL Validator: If Flink SQL validation succeeds, the Flink SQL Validation Succeeded message is displayed. If the validation fails, you can click View ETL Validation Details next to Generate Flink SQL Validator. After you troubleshoot the issues based on the error message, you can perform Flink SQL validation again.
        • Next: Save Task Settings and Precheck: The system runs Flink SQL validation. The ETL task can be prechecked only after Flink SQL validation is successful. You can click View ETL Validation Details to view details.
    2. After Flink SQL validation is completed, run a precheck. DTS can start the ETL task only after the task passes the precheck. If the task fails to pass the precheck, click View Details next to each failed item. After you troubleshoot the issues based on the causes, you can run a precheck again.
    3. After the precheck is completed, click Next: Purchase Instance in the lower part of the page.
    4. On the Purchase Instance page, specify the Instance Class and Compute Units (CUs) parameters. Then, read and select Data Transmission Service (Pay-as-you-go) Service Terms and Service Terms for Public Preview.
    5. Click Buy and Start to start the ETL task.
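
For reference, the example task configured in this procedure (a MySQL source, a Field Calculator, and a MySQL destination) corresponds to Flink SQL of roughly the following shape, which is what the Flink SQL validation step checks. All table and field names are hypothetical:

    -- Hypothetical shape of the Flink SQL for the example task: one
    -- MySQL source, one Field Calculator, one MySQL destination.
    -- Names are illustrative.
    INSERT INTO dst_table
    SELECT
      id,
      name,
      UPPER(name) AS name_upper  -- field produced by the Field Calculator
    FROM src_table;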