In a manually triggered workflow, all nodes must be triggered manually and cannot be automatically scheduled by DataWorks. Therefore, you do not need to specify parent nodes or outputs for the nodes in a manually triggered workflow.

GUI elements

The following list describes the icons and tabs on the Manually Triggered Workflow page.

  1. Submit: Commits all nodes in the current manually triggered workflow.
  2. Run: Runs all nodes in the current manually triggered workflow. Because the nodes in this workflow have no dependencies, they can all run at the same time.
  3. Stop: Stops all running nodes in the current manually triggered workflow.
  4. Deploy: Opens the Deploy page, where you can deploy some or all of the nodes that have been committed but not yet deployed to the production environment.
  5. Go to Operation Center: Opens the Operation Center page.
  6. Box: Drag-selects multiple nodes so that you can manage them as a group.
  7. Refresh: Refreshes the canvas of the current manually triggered workflow.
  8. Auto Layout: Automatically arranges the nodes in the current manually triggered workflow.
  9. Zoom In: Zooms in on the canvas.
  10. Zoom Out: Zooms out on the canvas.
  11. Query: Searches for a node in the current manually triggered workflow.
  12. Toggle Full Screen Mode: Displays the nodes in the current manually triggered workflow in full screen.
  13. Show Engine Information: Shows or hides compute engine information.
  14. Workflow Parameters: Sets workflow-level parameters. Parameters configured on this tab take precedence over those specified when a node is created: if a parameter is set in both places, the value configured on the Workflow Parameters tab takes effect.
  15. Change History: Shows the operation records of all nodes in the current manually triggered workflow.
  16. Versions: Shows the deployment records of all nodes in the current manually triggered workflow.
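The precedence rule for workflow parameters can be illustrated with a minimal sketch. The function and dictionary names below are hypothetical and only model the rule; DataWorks resolves parameters internally, not through a user-visible API.

```python
# Illustrative sketch of the precedence rule described above: a value set on
# the Workflow Parameters tab overrides the same parameter set when a node
# is created. This is NOT a DataWorks API; names are hypothetical.

def resolve_parameters(node_params, workflow_params):
    """Workflow-level values win whenever a key is defined in both places."""
    resolved = dict(node_params)      # start with the node-level settings
    resolved.update(workflow_params)  # workflow-level settings take effect
    return resolved

node_params = {"bizdate": "20240101", "region": "cn-shanghai"}
workflow_params = {"bizdate": "20240201"}  # also set at the workflow level

print(resolve_parameters(node_params, workflow_params))
# {'bizdate': '20240201', 'region': 'cn-shanghai'}
```

Note that parameters set only at the node level (such as the hypothetical `region` above) remain in effect; the workflow level overrides only the keys it defines.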

Create a manually triggered workflow

  1. Log on to the DataWorks console. In the left-side navigation pane, click Workspaces. On the Workspaces page, find the target workspace and click Data Analytics in the Actions column.
  2. In the left-side navigation pane, click Manually Triggered Workflow to show the Manually Triggered Workflow tab.

    You can click the icon in the lower-left corner to show or hide the left-side navigation pane.

  3. Right-click Manual Business Flow and select Create Workflow.
  4. In the Create Workflow dialog box, set the Workflow Name and Description parameters.
  5. Click Create.

Composition of a manually triggered workflow

Note We recommend that you create a maximum of 100 nodes in a single manually triggered workflow.

A manually triggered workflow consists of nodes in the following modules. After you create a manually triggered workflow, open it and create nodes of the required types in each module.

  • Data Integration

    Double-click Data Integration under the created workflow to view all the data integration nodes.

    Right-click Data Integration and choose Create > Batch Synchronization to create a batch synchronization node. For more information, see Batch Sync node.

  • MaxCompute

The MaxCompute module contains data analytics nodes, such as ODPS SQL, SQL Snippet, ODPS Spark, PyODPS, ODPS Script, and ODPS MR nodes. In this module, you can also view and create tables, resources, and functions.

    • Data Analytics

      Expand MaxCompute under the created workflow and right-click Data Analytics to create a data analytics node. For more information, see ODPS SQL node, SQL script template, ODPS Spark node, PyODPS node, ODPS Script node, and ODPS MR node.

    • Table

      Expand MaxCompute under the created workflow and right-click Table to create a table. You can also view all the tables created for the current MaxCompute engine. For more information, see Table.

    • Resource

      Expand MaxCompute under the created workflow and right-click Resource to create a resource. You can also view all the resources created for the current MaxCompute engine. For more information, see Resource.

    • Function

      Expand MaxCompute under the created workflow and right-click Function to create a function. You can also view all the functions created for the current MaxCompute engine. For more information, see Function.

  • EMR

    The EMR module contains data analytics nodes, such as EMR Hive, EMR MR, EMR Spark, and EMR Spark SQL nodes. In this module, you can also view and create E-MapReduce resources.

    Note The EMR folder is displayed only after you associate an E-MapReduce compute engine with your workspace. For more information, see Configure a workspace.
    • Data Analytics

      Expand EMR under the created workflow and right-click Data Analytics to create a data analytics node. For more information, see EMR Hive node, EMR MR node, EMR Spark SQL node, and EMR Spark node.

    • Resource

      Expand EMR under the created workflow and right-click Resource to create a resource. You can also view all the resources created for the current E-MapReduce compute engine. For more information, see Resource.

  • Algorithm

    Open the created workflow and right-click Algorithm to create an algorithm. You can also view all the Machine Learning Platform for AI nodes created in the current manually triggered workflow. For more information, see Machine Learning experiment node.

  • General

    Open the created workflow and right-click General to create relevant nodes. For more information, see Shell node and Zero-load node.

  • UserDefined

    Open the created workflow and right-click UserDefined to create relevant nodes. For more information, see Data Lake Analytics node, AnalyticDB for MySQL node, and AnalyticDB for PostgreSQL node.