You must synchronize raw data to MaxCompute during data preparation.

Prepare the source data store

  1. Create an ApsaraDB RDS for MySQL instance in the ApsaraDB RDS console and take note of the instance ID. For more information, see Create an ApsaraDB RDS for MySQL instance.
  2. Configure a whitelist for the RDS instance in the ApsaraDB RDS console. For more information, see Configure whitelists.
    Note If you use a custom resource group to run the sync node, you must add the IP addresses of the servers in the custom resource group to the whitelist of the RDS instance.
  3. Download the raw data required in this tutorial: indicators_data, steal_flag_data, and trend_data.
  4. Upload the raw data to the RDS instance. For more information, see Import data from Excel to ApsaraDB RDS for MySQL.
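If you prefer to load the raw data programmatically instead of importing it from Excel, the upload step can be sketched as follows. This is a minimal sketch, assuming the downloaded files are CSVs with a header row; the table and column names in the sample, and the connection placeholders in the comments, are illustrative assumptions, not values from this tutorial.

```python
import csv
import io

def build_insert(table, csv_text):
    """Build a parameterized INSERT statement and row tuples from CSV text."""
    reader = csv.reader(io.StringIO(csv_text))
    header = next(reader)
    placeholders = ", ".join(["%s"] * len(header))
    columns = ", ".join(f"`{c}`" for c in header)
    sql = f"INSERT INTO `{table}` ({columns}) VALUES ({placeholders})"
    rows = [tuple(r) for r in reader]
    return sql, rows

# Tiny trend-data-like sample (column names are assumptions).
sql, rows = build_insert("trend_data", "uid,trend\n1001,3\n1002,7\n")
print(sql)   # INSERT INTO `trend_data` (`uid`, `trend`) VALUES (%s, %s)
print(rows)  # [('1001', '3'), ('1002', '7')]

# To run against the RDS instance (placeholders, not real credentials):
# import pymysql
# conn = pymysql.connect(host="<rds-endpoint>", user="<user>",
#                        password="<password>", database="<database>")
# with conn.cursor() as cur:
#     cur.executemany(sql, rows)
# conn.commit()
```

The parameterized form lets the driver escape values safely, which matters if the raw data contains quotes or other special characters.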

Create a connection

Note In this example, you must create a MySQL connection.
  1. Go to the Data Source page.
    1. Log on to the DataWorks console.
    2. In the left-side navigation pane, click Workspaces.
    3. Select the region where the workspace resides, find the workspace, and then click Data Integration.
    4. In the left-side navigation pane, click Connection. The Workspace Management > Data Source page appears.
  2. On the Data Source page, click New data source in the upper-right corner.
  3. In the Add data source dialog box, click MySQL.
  4. In the Add MySQL data source dialog box, set the parameters as required.
    Parameter Description
    Data source type: Set this parameter to Alibaba Cloud instance mode.
    Data Source Name: The name of the connection. The name can contain letters, digits, and underscores (_), and must start with a letter.
    Data source description: The description of the connection. The description can be up to 80 characters in length.
    Environment: Valid values: Development and Production.
    Note This parameter is displayed only when the workspace is in standard mode.
    Region: Select the required region.
    RDS instance ID: The ID of the RDS instance. You can view the instance ID in the ApsaraDB RDS console.
    RDS instance account ID: The ID of the Alibaba Cloud account that was used to purchase the RDS instance. Log on to the Alibaba Cloud Management Console with that account and view the account ID on the Security Settings page.
    Database name: The name of the database.
    User name: The username used to connect to the database.
    Password: The password used to connect to the database.
  5. Click Test connectivity.
  6. After the connection passes the connectivity test, click Complete.

Create a workflow

  1. Click the Icon icon in the upper-left corner and choose All Products > Data Development > DataStudio.
  2. Right-click Business Flow and select Create Workflow.
  3. In the Create Workflow dialog box, set the Workflow Name and Description parameters.
    Note The workflow name can be up to 128 characters in length and can contain letters, digits, underscores (_), and periods (.).
  4. Click Create.
  5. On the workflow configuration tab that appears, drag Zero-Load Node to the canvas, name the zero-load node start, and then click Commit. Create three batch sync nodes in the same way to synchronize power consumption trend data, electricity-stealing flag data, and metrics data.
  6. Draw lines between nodes and set the start node as the parent node of the three batch sync nodes.
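The dependency structure built in the steps above is a small directed acyclic graph: the start node must finish before any of the three batch sync nodes runs. The node names below are illustrative placeholders, not identifiers from DataWorks; the sketch just shows one valid scheduling order via a topological sort.

```python
from collections import deque

# Parent -> children edges for the workflow described above.
edges = {"start": ["sync_trend", "sync_steal_flag", "sync_indicators"]}
nodes = {"start", "sync_trend", "sync_steal_flag", "sync_indicators"}

def run_order(nodes, edges):
    """Return one valid execution order (parents always before children)."""
    indegree = {n: 0 for n in nodes}
    for children in edges.values():
        for c in children:
            indegree[c] += 1
    queue = deque(sorted(n for n in nodes if indegree[n] == 0))
    order = []
    while queue:
        n = queue.popleft()
        order.append(n)
        for c in sorted(edges.get(n, [])):
            indegree[c] -= 1
            if indegree[c] == 0:
                queue.append(c)
    return order

print(run_order(nodes, edges))
# ['start', 'sync_indicators', 'sync_steal_flag', 'sync_trend']
```

In the real workflow the three sync nodes have no edges between one another, so the scheduler is free to run them concurrently once start completes.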

Configure the start node

  1. Double-click the zero-load node. In the right-side navigation pane, click the Properties tab.
  2. Set the root node of the workspace as the parent node of the start node.
    In the latest version of DataWorks, each node must have parent and child nodes configured. Therefore, you must set a parent node for the start node. In this example, the root node of the workspace is set as the parent node of the start node. The root node is named in the Workspace name_root format.
  3. After the configuration is completed, click the Save icon in the upper-left corner.

Create tables

  1. Click the created workflow. Then, click MaxCompute.
  2. Right-click Table under MaxCompute and select Create Table.
  3. In the Create Table dialog box, set the Table Name parameter and click Create.
    Create three tables named trend_data, indicators_data, and steal_flag_data. The trend_data table is used to store power consumption trend data, the indicators_data table is used to store metrics data, and the steal_flag_data table is used to store electricity-stealing flag data.
    Note The table name must be 1 to 64 characters in length. It must start with a letter and cannot contain special characters.
  4. On the configuration tab of each table, click DDL Statement and enter the following CREATE TABLE statements:
    -- Create a table for storing power consumption trend data.
    CREATE TABLE `trend_data` (
        `uid` bigint,
        `trend` bigint
    )
    PARTITIONED BY (dt string);
    -- Create a table for storing metrics data.
    CREATE TABLE `indicators_data` (
        `uid` bigint,
        `xiansun` bigint,
        `warnindicator` bigint
    )
    COMMENT '*'
    PARTITIONED BY (ds string)
    LIFECYCLE 36000;
    -- Create a table for storing electricity-stealing flag data.
    CREATE TABLE `steal_flag_data` (
        `uid` bigint,
        `flag` bigint
    )
    COMMENT '*'
    PARTITIONED BY (ds string)
    LIFECYCLE 36000;
  5. After you enter the CREATE TABLE statements, click Generate Table Schema. Then, click OK.
  6. On the configuration tab of each table, enter the display name in the General section.
  7. After the configuration is completed, click Commit in Development Environment and Commit to Production Environment in sequence.

Configure the batch sync nodes

Configure the node for synchronizing power consumption trend data.
  1. Double-click the node to go to the node configuration tab.
  2. Configure a connection to the source data store.
    Parameter Description
    Connection: Select MySQL and then the workshop connection.
    Table: Select the trending table from which data is to be synchronized.
    Filter: The filter condition for the data to be synchronized. Filtering based on the LIMIT keyword is not supported. The SQL syntax is determined by the selected connection. This parameter is optional.
    Shard Key: The column used to shard the data. If data sharding is performed based on the shard key, data can be read concurrently, which improves synchronization efficiency. This parameter is optional.
  3. Configure a connection to the destination data store.
    Parameter Description
    Connection: Select ODPS and then the odps_first connection.
    Table: Select the trend_data table for storing the source data.
    Partition Key Column: The partition to which the data is written. Default value: dt=${bdp.system.bizdate}.
    Writing Rule: Select Write with Original Data Deleted (Insert Overwrite).
    Convert Empty Strings to Null: Select No.
  4. Configure the mappings between fields in the source and destination.
  5. Set parameters in the Channel section.
    Parameter Description
    Expected Maximum Concurrency: The maximum number of concurrent threads that the sync node uses to read data from the source data store and write data to the destination data store. You can configure the concurrency for the sync node on the codeless user interface (UI).
    Bandwidth Throttling: Specifies whether to enable bandwidth throttling. You can enable bandwidth throttling and set a maximum transmission rate to avoid placing a heavy read load on the source data store. We recommend that you enable bandwidth throttling and set the maximum transmission rate to a proper value based on the configurations of the source data store.
    Dirty Data Records Allowed: The maximum number of dirty data records that are allowed.
  6. Verify that the preceding configuration is correct and click the Save icon in the upper-left corner.
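Bandwidth throttling of the kind described above is commonly implemented as a token bucket: the reader accumulates a byte budget at the configured rate and sleeps when the budget is exhausted. The following is a standalone sketch of that idea, not Data Integration's actual implementation; the injectable clock exists only to make the example deterministic.

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: `rate` bytes per second."""
    def __init__(self, rate, clock=time.monotonic):
        self.rate = float(rate)
        self.tokens = self.rate       # start with one second's budget
        self.clock = clock
        self.last = clock()

    def consume(self, nbytes):
        """Return how many seconds a writer should sleep before sending `nbytes`."""
        now = self.clock()
        # Refill tokens for the elapsed time, capped at one second's budget.
        self.tokens = min(self.rate, self.tokens + (now - self.last) * self.rate)
        self.last = now
        self.tokens -= nbytes
        return max(0.0, -self.tokens / self.rate)

# Simulated clock so the example is deterministic.
t = [0.0]
bucket = TokenBucket(rate=1_000_000, clock=lambda: t[0])  # ~1 MB/s cap
print(bucket.consume(500_000))    # 0.0  (within budget)
print(bucket.consume(1_000_000))  # 0.5  (must wait half a second)
```

Raising Expected Maximum Concurrency increases throughput per unit time, which is exactly why pairing it with a throttle protects the source database.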

Commit the workflow

  1. Go to the workflow configuration tab and click the Submit icon in the upper-left corner.
  2. In the Commit dialog box, select the nodes to be committed, enter your comments in the Change description field, and then select Ignore I/O Inconsistency Alerts.
  3. Click Commit. The Committed successfully message appears.

Verify data synchronization to MaxCompute

  1. On the left-side navigation submenu, click the Ad-Hoc Query icon. The Ad-Hoc Query tab appears.
  2. Right-click Ad-Hoc Query and choose Create Node > ODPS SQL.
  3. Write and execute SQL statements to query the number of data records synchronized to the trend_data, indicators_data, and steal_flag_data tables.
    Use the following SQL statements. In each statement, replace the partition key value with the data timestamp, enclosed in single quotation marks because the partition key columns are of the STRING type. For example, if the node is run on August 9, 2019, the data timestamp is 20190808, which is one day before the node is run.
    -- Check whether the data is written to MaxCompute.
    SELECT count(*) from trend_data where dt='Data timestamp of the ad-hoc query node';
    SELECT count(*) from indicators_data where ds='Data timestamp of the ad-hoc query node';
    SELECT count(*) from steal_flag_data where ds='Data timestamp of the ad-hoc query node';
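The data timestamp is always the day before the run date, which is also what DataWorks substitutes for ${bdp.system.bizdate} at run time. As a sketch, the verification queries can be generated like this (the helper names are illustrative, not part of DataWorks):

```python
from datetime import date, timedelta

def biz_date(run_date):
    """Data timestamp: the day before the node's run date, as yyyymmdd."""
    return (run_date - timedelta(days=1)).strftime("%Y%m%d")

def count_query(table, partition_col, run_date):
    """Build the row-count check for one table's partition."""
    return (f"SELECT count(*) FROM {table} "
            f"WHERE {partition_col}='{biz_date(run_date)}';")

# Example from the text: the node runs on August 9, 2019.
run = date(2019, 8, 9)
print(count_query("trend_data", "dt", run))
# SELECT count(*) FROM trend_data WHERE dt='20190808';
print(count_query("indicators_data", "ds", run))
print(count_query("steal_flag_data", "ds", run))
```

Note that trend_data is partitioned by dt while the other two tables are partitioned by ds, matching the CREATE TABLE statements above.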

What to do next

You have learned how to collect and synchronize data. You can now proceed with the next tutorial. The next tutorial describes how to compute and analyze collected data.