In a manually triggered workflow, all nodes must be triggered manually and cannot be scheduled. Therefore, you do not need to specify a parent node or an output for any node in a manually triggered workflow.
Create a manually triggered workflow
GUI elements

No. | GUI element | Description |
---|---|---|
1 | Submit | Commit all nodes in the manually triggered workflow. |
2 | Run | Run all nodes in the manually triggered workflow. Nodes in this workflow have no dependencies on each other, so they can all run at the same time. |
3 | Stop | Stop running nodes. |
4 | Deploy | Go to the Deploy page. On this page, you can deploy specific or all nodes that are committed but not deployed to the production environment. |
5 | Go to Operation Center | Go to the Operation Center page. |
6 | Box | Draw a box to select the required nodes and group them into a node group. |
7 | Refresh | Refresh the configuration tab of the manually triggered workflow. |
8 | Auto Layout | Automatically sort the nodes in the manually triggered workflow. |
9 | Zoom In | Zoom in on the directed acyclic graph (DAG). |
10 | Zoom Out | Zoom out on the DAG. |
11 | Query | Search for a node in the manually triggered workflow. |
12 | Toggle Full Screen View | Display nodes in the manually triggered workflow in the full screen. |
13 | Show Engine Information | Show or hide engine information. |
14 | Workflow Parameters | Set workflow parameters. Workflow parameters take precedence over node parameters: if the same parameter is set for both the workflow and a node, the value set for the manually triggered workflow takes effect. |
15 | Change History | View the operation records of all nodes in the manually triggered workflow. |
16 | Versions | View the deployment records of all nodes in the manually triggered workflow. |
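Because nodes in a manually triggered workflow have no dependencies on each other, clicking Run starts all of them at once. The following sketch only illustrates that behavior; it is not DataWorks code, and the node names and the `run_node` placeholder are invented for the example:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical node names. In a manually triggered workflow, every node
# is independent, so all of them can be submitted at the same time.
nodes = ["sync_orders", "ods_sql_report", "shell_cleanup"]

def run_node(name):
    # Placeholder for actually executing a node.
    return f"{name}: succeeded"

# There is no dependency graph to respect: submit every node immediately
# and let them run concurrently.
with ThreadPoolExecutor() as pool:
    results = list(pool.map(run_node, nodes))
```

In a scheduled workflow, by contrast, the dependency graph would force some nodes to wait for their parents to finish before they start.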
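The precedence rule for Workflow Parameters can be sketched as a simple override: a value set at the workflow level wins over a value set on an individual node, while node-only parameters are kept. This is only an illustration of the rule described in the table; the parameter names are invented:

```python
def effective_params(node_params, workflow_params):
    """Return the parameters a node actually sees. Workflow-level
    parameters override node-level parameters for the same key."""
    merged = dict(node_params)
    merged.update(workflow_params)  # workflow values take effect
    return merged

node_params = {"bizdate": "20240101", "region": "cn-shanghai"}
workflow_params = {"bizdate": "20240102"}  # also set on the workflow

params = effective_params(node_params, workflow_params)
# The workflow's bizdate wins; the node-only region key is kept.
```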
Composition of a manually triggered workflow
- Data Integration
Double-click Data Integration under a manually triggered workflow to view all the data integration nodes in the workflow.
Right-click Data Integration and choose the corresponding command to create a batch synchronization node. For more information, see Batch Sync node.
- MaxCompute
Notice: The MaxCompute folder appears on the page only after you add a MaxCompute compute engine on the Workspace Management page. For more information, see Configure a workspace.
The MaxCompute compute engine allows you to create data analytics nodes, such as ODPS SQL, SQL Snippet, ODPS Spark, PyODPS 2, ODPS Script, ODPS MR, and PyODPS 3 nodes. You can also create and view tables, resources, and functions.
- Data Analytics
Click MaxCompute under the manually triggered workflow and right-click Data Analytics to create a data analytics node. For more information, see Create an ODPS SQL node, Create an SQL component node, Create an ODPS Spark node, Create a PyODPS 2 node, Create an ODPS Script node, Create an ODPS MR node, and Create a PyODPS 3 node.
- Table
Click MaxCompute under the manually triggered workflow and right-click Table to create a table. You can also view all the tables that are created in the current MaxCompute compute engine. For more information, see Create a MaxCompute table.
- Resource
Click MaxCompute under the manually triggered workflow and right-click Resource to create a resource. You can also view all the resources that are created in the current MaxCompute compute engine. For more information, see Create, reference, and download MaxCompute resources.
- Function
Click MaxCompute under the manually triggered workflow and right-click Function to create a function. You can also view all the functions that are created in the current MaxCompute compute engine. For more information, see Create a MaxCompute function.
- EMR
Notice: The EMR folder appears on the page only after you add an E-MapReduce compute engine on the Workspace Management page. For more information, see Configure a workspace.
The E-MapReduce compute engine allows you to create data analytics nodes, such as EMR Hive, EMR MR, EMR Spark SQL, EMR Spark, EMR Shell, EMR Spark Shell, EMR Presto, and EMR Impala nodes. You can also create and view E-MapReduce resources.
- Data Analytics
Click EMR under the manually triggered workflow and right-click Data Analytics to create a data analytics node. For more information, see Create an EMR Hive node, Create an EMR MR node, Create an EMR Spark SQL node, Create an EMR Spark node, and Create an EMR Presto node.
- Resource
Click EMR under the manually triggered workflow and right-click Resource to create a resource. You can also view all the resources that are created in the current E-MapReduce compute engine.
- Function
Click EMR under the manually triggered workflow and right-click Function to create a function. You can also view all the functions that are created in the current E-MapReduce compute engine.
- Algorithm
Click the manually triggered workflow and right-click Algorithm to create an algorithm. You can also view all the Machine Learning experiment nodes that are created in the manually triggered workflow. For more information, see Create a Machine Learning (PAI) node.
- General
Click the manually triggered workflow and right-click General to create relevant nodes. For more information, see Create a Shell node and Create a zero-load node.
- UserDefined
Click the manually triggered workflow and right-click UserDefined to create relevant nodes. For more information, see Create a Data Lake Analytics node and Create an AnalyticDB for MySQL node.