DataWorks: Use Data Integration to import data to DataHub

Last Updated: Apr 08, 2024

This topic describes how to use Data Integration to import offline data to DataHub. In this example, a batch synchronization task is configured in the code editor to import data from Stream to DataHub.

Prerequisites

  1. An Alibaba Cloud account and its AccessKey pair are created. For more information, see Activate DataWorks.

  2. MaxCompute is activated, and the Alibaba Cloud account is used to log on to the DataWorks console. After you activate MaxCompute, a default MaxCompute data source is automatically generated.

  3. A workspace is created in the DataWorks console. This way, you can collaborate with other members in the workspace to develop workflows and maintain data and tasks in the workspace. For information about how to create a workspace, see Create a workspace.

    Note

    If you want to create a data integration task as a RAM user, grant the required permissions to the RAM user. For information about how to create a RAM user and grant permissions to the RAM user, see Prepare a RAM user and Manage permissions on workspace-level services.

Background information

Data Integration is a reliable, secure, cost-effective, and scalable data synchronization platform provided by Alibaba Cloud. It can be used to synchronize data across heterogeneous data storage systems and provides offline data synchronization channels for more than 20 types of data sources in diverse network environments. For more information, see Supported data source types, Reader plug-ins, and Writer plug-ins.

In this example, a DataHub data source is used. For information about how to use other types of data sources to configure synchronization tasks, see Supported data source types and synchronization operations.

Procedure

  1. Go to the DataStudio page.

    1. Log on to the DataWorks console.

    2. In the left-side navigation pane, click Workspaces.

    3. In the top navigation bar, select the region in which the created workspace resides. On the Workspaces page, find the workspace and choose Shortcuts > Data Development in the Actions column.

  2. In the Scheduled Workflow pane of the DataStudio page, find the desired workflow and click its name. Right-click Data Integration and choose Create Node > Offline synchronization.

  3. In the Create Node dialog box, configure the Name and Path parameters and click Confirm.

    Note
    • The task name cannot exceed 128 characters in length.

    • The Path parameter specifies the auto triggered workflow in which you want to create the batch synchronization task. For information about how to create an auto triggered workflow, see the "Create an auto triggered workflow" section in Create a workflow.

  4. After the batch synchronization task is created, configure items such as network connectivity and resources based on your business requirements and click Next. Then, click the Conversion Script icon in the top toolbar of the configuration tab of the batch synchronization task.

  5. In the Tips message, click OK to switch to the code editor.

  6. Click the Import Template icon in the top toolbar.

  7. In the Import Template dialog box, configure the Source type, Target type, and Data source parameters to generate an import template used to import data from Stream to DataHub. Then, click Confirmation.

  8. After the template is imported, edit the code in the code editor based on your business requirements. A sketch of how the setting block can be tuned is provided after this procedure.

    {
    "type": "job",
    "version": "1.0",
    "configuration": {
     "setting": {
       "errorLimit": {
         "record": "0"
       },
       "speed": {
         "mbps": "1",// The maximum transmission rate. Unit: MB/s. 
         "concurrent": 1,// The maximum number of parallel threads. 
         "throttle": false
       }
     },
     "reader": {
       "plugin": "stream",
       "parameter": {
         "column": [// The names of the columns from which you want to read data. 
           {
             "value": "field", // The column property. 
             "type": "string"
           },
           {
             "value": true,
             "type": "bool"
           },
           {
             "value": "byte string",
             "type": "bytes"
           }
         ],
         "sliceRecordCount": "100000"
       }
     },
     "writer": {
       "plugin": "datahub",
       "parameter": {
         "datasource": "datahub",// The name of the data source. 
         "topic": "xxxx",// The minimum unit for data subscription and publication in DataHub. You can use topics to distinguish different types of streaming data. 
         "mode": "random",// The write mode. The value random indicates that data is randomly written. 
         "shardId": "0",// Shards are parallel channels that are used for data transmission in a topic. Each shard has a unique ID. 
         "maxCommitSize": 524288,// The amount of data that Data Integration buffers before Data Integration sends the data to the destination for the purpose of improving writing efficiency. Unit: MB. The default value is 1 MB. 
         "maxRetryCount": 500
       }
     }
    }
    }
  9. After the configuration is complete, click the Save and Run icons in the top toolbar of the configuration tab of the batch synchronization task.

    Note
    • You can import data to DataHub only in the code editor.

    • If you want to change the template, click the Import Template icon in the top toolbar. The original content is overwritten after you apply the new template.

    • If you click the Run icon after you save the batch synchronization task, the task is immediately run.

      You can also click the Submit icon to commit the batch synchronization task to the scheduling system. The scheduling system periodically runs the batch synchronization task from the next day based on the scheduling properties configured for the task.
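
The setting block in the preceding template controls fault tolerance and throughput. The following snippet is a minimal sketch that reuses only the keys already shown in the template; the specific values are illustrative assumptions rather than recommendations. It shows how the block could be tuned to tolerate a small number of dirty records and to enable rate limiting:

    "setting": {
      "errorLimit": {
        "record": "10"// Illustrative value: allow up to 10 dirty records before the task fails. 
      },
      "speed": {
        "throttle": true,// Enable rate limiting so that the mbps value takes effect. 
        "mbps": "5",// Illustrative value: cap the transmission rate at 5 MB/s. 
        "concurrent": 2// Illustrative value: run two parallel threads. 
      }
    }

When throttle is set to false, as in the template, the mbps value is ignored and the task runs without a rate limit.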