This topic describes how to run SQL tasks in Data Lake Analytics (DLA) in a loop by using a do-while node in DataWorks.

  1. Log on to the DataWorks console. Click Workspaces in the left-side navigation pane. On the page that appears, find your workspace and click Data Integration in the Actions column.
  2. On the Welcome to Data Integration page, click Connection in the Data Store section. In the left-side navigation pane, click Data Source. On the page that appears, click New data source in the upper-right corner. In the Add data source dialog box, click Data Lake Analytics (DLA) in the Big Data Storage section.
  3. In the Add Data Lake Analytics (DLA) data source dialog box, configure the parameters.

    The following list describes the required parameters.

    Data Source Name: The name of the data source. We recommend that you specify an informative name that is easy to identify.
    Data Source Description: The description of the data source. This parameter is optional.
    Connection Url: The endpoint of DLA, in the format of IP address:Port number. For more information about how to obtain the IP address and port number, see Create an endpoint.
    Database: The name of the database in Object Storage Service (OSS) to which DLA is connected. In this topic, set this parameter to dataworks_demo.
    User Name: The username that is used to log on to DLA.
    Password: The password of the username.
  4. Modify the IP address whitelist of DLA.

    DataWorks can access a DLA data source only if the IP addresses or CIDR blocks of DataWorks are included in the IP address whitelist of DLA. Therefore, you must add the IP addresses or Classless Inter-Domain Routing (CIDR) blocks to the IP address whitelist of DLA based on the region where DLA is deployed, as listed below in the format of Region: IP addresses and CIDR blocks.

    China (Hangzhou): 100.64.0.0/8,11.193.102.0/24,11.193.215.0/24,11.194.110.0/24,11.194.73.0/24,118.31.157.0/24,47.97.53.0/24,11.196.23.0/24,47.99.12.0/24,47.99.13.0/24,114.55.197.0/24,11.197.246.0/24,11.197.247.0/24
    China (Shanghai): 11.193.109.0/24,11.193.252.0/24,47.101.107.0/24,47.100.129.0/24,106.15.14.0/24,10.117.28.203,10.117.39.238,10.143.32.0/24,10.152.69.0/24,10.153.136.0/24,10.27.63.15,10.27.63.38,10.27.63.41,10.27.63.60,10.46.64.81,10.46.67.156,11.192.97.0/24,11.192.98.0/24,11.193.102.0/24,11.218.89.0/24,11.218.96.0/24,11.219.217.0/24,11.219.218.0/24,11.219.219.0/24,11.219.233.0/24,11.219.234.0/24,118.178.142.154,118.178.56.228,118.178.59.233,118.178.84.74,120.27.160.26,120.27.160.81,121.43.110.160,121.43.112.137,100.64.0.0/8
    China (Shenzhen): 100.106.46.0/24,100.106.49.0/24,10.152.27.0/24,10.152.28.0/24,11.192.91.0/24,11.192.96.0/24,11.193.103.0/24,100.64.0.0/8,120.76.104.0/24,120.76.91.0/24,120.78.45.0/24
    China (Hong Kong): 10.152.162.0/24,11.192.196.0/24,11.193.11.0/24,100.64.0.0/8,47.89.61.0/24,47.91.171.0/24,11.193.118.0/24,47.75.228.0/24
    Singapore (Singapore): 100.106.10.0/24,100.106.35.0/24,10.151.234.0/24,10.151.238.0/24,10.152.248.0/24,11.192.153.0/24,11.192.40.0/24,11.193.8.0/24,100.64.0.0/8,47.88.147.0/24,47.88.235.0/24,11.193.162.0/24,11.193.163.0/24,11.193.220.0/24,11.193.158.0/24,47.74.162.0/24,47.74.203.0/24,47.74.161.0/24,11.197.188.0/24
    Australia (Sydney): 11.192.100.0/24,11.192.134.0/24,11.192.135.0/24,11.192.184.0/24,11.192.99.0/24,100.64.0.0/8,47.91.49.0/24,47.91.50.0/24,11.193.165.0/24,47.91.60.0/24
    China (Beijing): 100.106.48.0/24,10.152.167.0/24,10.152.168.0/24,11.193.50.0/24,11.193.75.0/24,11.193.82.0/24,11.193.99.0/24,100.64.0.0/8,47.93.110.0/24,47.94.185.0/24,47.95.63.0/24,11.197.231.0/24,11.195.172.0/24,47.94.49.0/24,182.92.144.0/24
    US (Silicon Valley): 10.152.160.0/24,100.64.0.0/8,47.89.224.0/24,11.193.216.0/24,47.88.108.0/24
    US (Virginia): 11.193.203.0/24,11.194.68.0/24,11.194.69.0/24,100.64.0.0/8,47.252.55.0/24,47.252.88.0/24
    Malaysia (Kuala Lumpur): 11.193.188.0/24,11.221.205.0/24,11.221.206.0/24,11.221.207.0/24,100.64.0.0/8,11.214.81.0/24,47.254.212.0/24,11.193.189.0/24
    Germany (Frankfurt): 11.192.116.0/24,11.192.168.0/24,11.192.169.0/24,11.192.170.0/24,11.193.106.0/24,100.64.0.0/8,11.192.116.14,11.192.116.142,11.192.116.160,11.192.116.75,11.192.170.27,47.91.82.22,47.91.83.74,47.91.83.93,47.91.84.11,47.91.84.110,47.91.84.82,11.193.167.0/24,47.254.138.0/24
    Japan (Tokyo): 100.105.55.0/24,11.192.147.0/24,11.192.148.0/24,11.192.149.0/24,100.64.0.0/8,47.91.12.0/24,47.91.13.0/24,47.91.9.0/24,11.199.250.0/24,47.91.27.0/24
    UAE (Dubai): 11.192.107.0/24,11.192.127.0/24,11.192.88.0/24,11.193.246.0/24,47.91.116.0/24,100.64.0.0/8
    India (Mumbai): 11.194.10.0/24,11.246.70.0/24,11.246.71.0/24,11.246.73.0/24,11.246.74.0/24,100.64.0.0/8,149.129.164.0/24,11.194.11.0/24
    UK (London): 11.199.93.0/24,100.64.0.0/8
    Indonesia (Jakarta): 11.194.49.0/24,11.200.93.0/24,11.200.95.0/24,11.200.97.0/24,100.64.0.0/8,149.129.228.0/24,10.143.32.0/24,11.194.50.0/24
    China North 2 Ali Gov 1: If the CIDR block 11.194.116.0/24 or 100.64.0.0/8 cannot be added to the IP address whitelist, add the following IP addresses: 11.194.116.160, 11.194.116.161, 11.194.116.162, 11.194.116.163, 11.194.116.164, 11.194.116.165, 11.194.116.167, 11.194.116.169, 11.194.116.170, 11.194.116.171, 11.194.116.172, 11.194.116.173, 11.194.116.174, and 11.194.116.175
  5. After you configure the preceding parameters, find your resource group and click Test connectivity in the Operation column. After the connectivity test succeeds, click Complete.

Create a workflow and nodes

  1. Log on to the DataWorks console. In the left-side navigation pane, click Workspaces. On the page that appears, find your workspace and click Data Analytics in the Actions column.
  2. In the left-side panel, right-click Business Flow and select Create Workflow to create a workflow that is used to run a loop.
  3. Create an assignment node and a do-while node for the workflow that you have created.

Configure the assignment node

  1. Double-click the date set assignment node. On the configuration tab that appears, set Language to SHELL, write the required date values as an array, and then save the settings.

    Use only commas (,) to separate the date values, as in the following example. A sketch that generates the same list dynamically follows these steps.

    echo "20190424,20190425,20190426,20190427,20190428,20190429,20190430"
  2. Click the Scheduling Configurations tab to configure an upstream node for the assignment node. You can use the root node of the current workspace as the upstream node. For example, if the workspace name is dla_project, the upstream node is dla_project_root.
  3. Click Save.
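
If you prefer not to hard-code the dates, the SHELL code of the assignment node can compute them. The following is a minimal sketch under two assumptions: the resource group that runs the node provides GNU date, and the start date and range shown are only illustrative.

# Hypothetical alternative to the hard-coded list: emit the 7 days
# starting from 20190424 as a single comma-separated line. The last
# line written to standard output becomes the assignment node's output.
dates=""
for i in 0 1 2 3 4 5 6; do
  d=$(date -d "20190424 +${i} day" +%Y%m%d)
  dates="${dates:+${dates},}${d}"
done
echo "${dates}"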

Configure the do-while node

  1. Double-click the do-while node. On the page that appears, configure the internal nodes that make up the loop body.
  2. Create a DLA task as an internal node. This node is referred to as the DLA_SQL node in the following steps.
  3. Click the Scheduling Configurations tab. On the tab that appears, specify the node dependencies and the context: configure the assignment node as the upstream node so that the input of the do-while node is the output of the assignment node.

Configure the DLA_SQL node

Double-click the DLA_SQL node in the do-while node and enter the following SQL statement:

INSERT INTO finished_orders
SELECT  O_ORDERKEY
        ,O_TOTALPRICE
FROM    orders
WHERE   pure_date = ${dag.input[${dag.offset}]}
AND     O_ORDERSTATUS = 'F';
  • The value of pure_date is read from the assignment node. One value is read from the output array of the assignment node in each iteration of the loop.
  • dag.offset is a reserved variable of DataWorks that indicates the loop offset: the offset is 0 in the first iteration, 1 in the second, and 2 in the third. In general, the offset is n-1 in the nth iteration.
  • dag.input is a variable that references the context of the do-while node that runs the loop. Internal nodes of the do-while node reference a context value as ${dag.ctxKey}, where ctxKey is the name of the context key. In this topic, the key is input, so ${dag.input} references the value.
  • The input behind ${dag.input[${dag.offset}]} is a table: the offset selects one row from it, and the offset is incremented in each iteration. As a result, the expression successively resolves to ${dag.input[0]}, ${dag.input[1]}, and so on, which ensures that all rows in the dataset are traversed. The example after this list shows the resolved statements.
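
For illustration, the following sketch shows roughly how the statement resolves in the first two iterations, given the date array from the assignment node. DataWorks performs this substitution at run time; the expanded statements are shown here only for clarity.

-- Iteration 1: dag.offset = 0, so ${dag.input[${dag.offset}]} resolves to 20190424.
INSERT INTO finished_orders
SELECT  O_ORDERKEY
        ,O_TOTALPRICE
FROM    orders
WHERE   pure_date = 20190424
AND     O_ORDERSTATUS = 'F';

-- Iteration 2: dag.offset = 1, so the value is 20190425.
INSERT INTO finished_orders
SELECT  O_ORDERKEY
        ,O_TOTALPRICE
FROM    orders
WHERE   pure_date = 20190425
AND     O_ORDERSTATUS = 'F';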

Configure the end node

To control loop termination, the end node compares dag.loopTimes with dag.input.length. If dag.loopTimes is less than dag.input.length, the end node returns True and the loop continues. Otherwise, the end node returns False and the loop terminates. dag.input.length indicates the number of rows in the array of the context parameter input; the system delivers this variable automatically based on the context that is configured for the end node. The code of the end node is as follows, with a worked trace after it.

if ${dag.loopTimes} < ${dag.input.length}:
    print True
else:
    print False
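
As a worked trace (assuming, consistent with the n-1 offset rule above, that dag.loopTimes counts iterations from 1 while dag.offset counts from 0), the seven dates in this topic give dag.input.length a value of 7:

# Hypothetical trace for the 7-element date array (dag.input.length = 7):
#   iteration 1: dag.offset = 0, dag.loopTimes = 1 -> 1 < 7 -> True  (continue)
#   iteration 2: dag.offset = 1, dag.loopTimes = 2 -> 2 < 7 -> True  (continue)
#   ...
#   iteration 7: dag.offset = 6, dag.loopTimes = 7 -> 7 < 7 -> False (terminate)
# Every offset from 0 through 6 is processed exactly once before the loop stops.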

On the Scheduling Configurations tab, configure the DLA_SQL node as the upstream node of the end node.

After you configure and save the preceding settings, the flow diagram of the do-while node is updated to show the configured loop.

Publish tasks

DataWorks DataStudio does not support running the do-while node. You must submit the node and then run it in Operation Center.

On the Date and Data_cleanse_SQL tabs, click Submit to submit the tasks. When you submit the task on the Data_cleanse_SQL tab, select all of its nodes.

Run tasks

  1. Go to Operation Center of DataWorks and choose Cycle Task Maintenance > Cycle Task. On the page that appears, view the tasks that you have submitted in the task list.
  2. Right-click the date set node and choose Add Data > Current and Downstream Nodes to manually run the two tasks.

    After you run the tasks, you can view the running status of each node.
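
    To spot-check the result after the loop finishes, you can query the target table in DLA. The following is a minimal sketch, assuming the finished_orders table that the DLA_SQL node writes to:

    -- Hypothetical spot check: total number of rows written across all iterations.
    SELECT COUNT(*) AS total_finished_orders
    FROM   finished_orders;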