In a manually triggered workflow, all nodes must be triggered manually and cannot be scheduled to run. Therefore, you do not need to specify a parent node or an output for the nodes in a manually triggered workflow.

Create a manually triggered workflow

  1. Go to the DataStudio page.
    1. Log on to the DataWorks console.
    2. In the left-side navigation pane, click Workspaces.
    3. In the top navigation bar, select the region in which the workspace that you want to manage resides. Find the workspace and click DataStudio in the Actions column.
  2. On the left-side navigation submenu, click the Manually Triggered Workflows icon.
    You can click the More icon in the lower-left corner to show or hide the left-side navigation submenu.
  3. Right-click Manually Triggered Workflows and select Create Workflow.
  4. In the Create Workflow dialog box, set the Workflow Name and Description parameters.
    Notice The workflow name must be 1 to 128 characters in length and can contain letters, digits, underscores (_), and periods (.).
  5. Click Create.
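The naming rule in the notice above can be checked before the name is submitted. The following is a minimal sketch in Python; the regular expression is one reading of the stated rule (1 to 128 characters drawn from letters, digits, underscores, and periods), and DataWorks performs its own validation in the console:

```python
import re

# Assumed from the documented rule: 1 to 128 characters, limited to
# letters, digits, underscores (_), and periods (.).
WORKFLOW_NAME_PATTERN = re.compile(r"^[A-Za-z0-9_.]{1,128}$")

def is_valid_workflow_name(name: str) -> bool:
    """Return True if the name satisfies the documented naming rule."""
    return bool(WORKFLOW_NAME_PATTERN.fullmatch(name))
```

For example, `is_valid_workflow_name("daily_report.v2")` passes, while an empty name, a name longer than 128 characters, or a name containing spaces fails.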

GUI elements

The following table describes the GUI elements on the configuration tab of a manually triggered workflow.
No. GUI element Description
1 Submit Commit all nodes in the manually triggered workflow.
2 Run Run all nodes in the manually triggered workflow. Nodes in this workflow do not have dependencies. Therefore, they can be run at the same time.
3 Stop Stop running nodes.
4 Deploy Go to the Deploy page. On this page, you can deploy specific or all nodes that are committed but not deployed to the production environment.
5 Go to Operation Center You can click this icon to go to Operation Center to view the O&M details of nodes.
6 Switch Layout You can click this icon to switch the layout of the canvas to Vertical, Horizontal, or Grid.
7 Box Draw a box to select required nodes to form a node group.
8 Refresh Refresh the configuration tab of the manually triggered workflow.
9 Format You can click this icon to horizontally align the nodes on the canvas.
10 Adapt You can click this icon to adapt the current workflow layout to the size of the canvas.
11 Center You can click this icon to center nodes on the canvas.
12 1:1 You can click this icon to adjust the scale of the directed acyclic graph (DAG) of nodes to 100%.
13 Zoom In Zoom in on the DAG.
14 Zoom Out Zoom out of the DAG.
15 Search You can click this icon and enter a keyword in the search box to search for a node whose name contains the keyword.
16 Toggle Full Screen View Display the nodes in the manually triggered workflow in full screen.
17 Hide Engine Information Show or hide engine information.
18 Workflow Parameters Set parameters for the manually triggered workflow. Parameters set on this tab take precedence over those set on a node configuration tab. If the same parameter is set for both the workflow and a node, the value on the Workflow Parameters tab takes effect.
19 Change History View the operation records of all nodes in the manually triggered workflow.
20 Versions View the deployment records of all nodes in the manually triggered workflow.
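The precedence rule for Workflow Parameters (row 18) can be modeled as a simple override: a node's own parameter settings apply unless the workflow sets the same parameter, in which case the workflow-level value wins. The following Python sketch illustrates that rule only; the function and parameter names are hypothetical and are not a DataWorks API:

```python
def resolve_parameters(node_params: dict, workflow_params: dict) -> dict:
    """Merge node-level and workflow-level parameters.

    Per the GUI description, a parameter set on the Workflow Parameters tab
    takes precedence over the same parameter set on a node configuration tab.
    """
    resolved = dict(node_params)      # start from the node's own settings
    resolved.update(workflow_params)  # workflow-level values win on conflict
    return resolved
```

For example, if a node sets `{"bizdate": "node", "region": "cn"}` and the workflow sets `{"bizdate": "wf"}`, the node runs with `bizdate` equal to `"wf"` and keeps its own `region` value.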

Composition of a manually triggered workflow

Note We recommend that you create no more than 100 nodes in a manually triggered workflow.
A manually triggered workflow can contain a variety of nodes. After you create a manually triggered workflow based on the preceding procedure, you can create nodes in the workflow. For more information, see Manage workflows.
  • Data Integration

    Double-click Data Integration under a manually triggered workflow to view all the data integration nodes in the workflow.

    Right-click Data Integration and choose Create > Batch Synchronization to create a batch sync node. For more information, see Batch Sync node.

  • MaxCompute
    Notice The MaxCompute folder appears on the page only after you add a MaxCompute compute engine on the Workspace Management page. For more information, see Configure a workspace.
The MaxCompute compute engine allows you to create data analytics nodes, such as ODPS SQL, SQL Snippet, ODPS Spark, PyODPS 2, ODPS Script, and ODPS MR nodes. You can also create and view tables, resources, and functions.
  • EMR
    Notice The EMR folder appears on the page only after you add an E-MapReduce compute engine on the Workspace Management page. For more information, see Configure a workspace.
    The E-MapReduce compute engine allows you to create data analytics nodes, such as EMR Hive, EMR MR, EMR Spark SQL, EMR Spark, EMR Shell, EMR Spark Shell, EMR Presto, and EMR Impala nodes. You can also create and view E-MapReduce resources.
    • Data Analytics

      Click EMR under the manually triggered workflow and right-click Data Analytics to create a data analytics node. For more information, see Create an EMR Hive node, Create and use an EMR MR node, Create an EMR Spark SQL node, Create an EMR Spark node, and Create an EMR Presto node.

    • Resource

      Click EMR under the manually triggered workflow and right-click Resource to create a resource. You can also view all the resources that are created in the current E-MapReduce compute engine.

    • Function

Click EMR under the manually triggered workflow and right-click Function to create a function. You can also view all the functions that are created in the current E-MapReduce compute engine.

  • Algorithm

    Click the manually triggered workflow and right-click Algorithm to create an algorithm. You can also view all the Machine Learning experiment nodes that are created in the manually triggered workflow. For more information, see Create a Machine Learning (PAI) node.

  • General

    Click the manually triggered workflow and right-click General to create relevant nodes. For more information, see Create a Shell node and Create a zero-load node.

  • UserDefined

    Click the manually triggered workflow and right-click UserDefined to create relevant nodes. For more information, see Create a Data Lake Analytics node and Create an AnalyticDB for MySQL node.