An auto triggered instance is a snapshot that is automatically generated based on the scheduling configuration of an auto triggered task. You can view the details of an instance and perform related operations in a list or a directed acyclic graph (DAG).
Usage notes
Normal task: A task that runs code logic. It does not include dry-run tasks (such as tasks with the scheduling property set to dry-run, instances generated outside the scheduling time range, unselected branches of branch nodes, or expired instances from real-time tasks) or frozen tasks.
O&M environment: In a standard mode workspace, you can switch between the development Operation Center and the production Operation Center in the upper-left corner of the page. Tasks are not automatically scheduled in the development Operation Center. This means that no auto triggered instances are scheduled or run on the Auto Triggered Instances page.
Task execution and issue troubleshooting:
A scheduled task can run only if its upstream tasks run successfully, the scheduled time for the current task has arrived, scheduling resources are sufficient, and the current task is not frozen. For more information, see Task execution conditions. These conditions are illustrated in the sketch after this list.
If a task is not running, first use the Upstream Analysis feature in the DAG panel to quickly locate the key upstream tasks that are blocking the current task. Then, use the run diagnostics feature to diagnose why the key instances are not running or identify any existing issues. This feature is particularly useful for quickly locating issues and improving O&M efficiency when task dependencies are complex.
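The run conditions above combine with AND logic: an instance is held back if any one of them is not met. The following minimal sketch uses hypothetical helper objects (not a DataWorks API) to illustrate the order in which you can check the conditions when troubleshooting:

```python
# Illustrative sketch of the run conditions listed above. Instance, upstream,
# and resource_group are hypothetical stand-ins, not DataWorks objects.
def can_run(instance, now):
    return (
        all(up.status == "SUCCESS" for up in instance.upstream)  # upstream tasks succeeded
        and now >= instance.scheduled_time                        # scheduled time has arrived
        and instance.resource_group.has_free_slots()              # scheduling resources are available
        and not instance.frozen                                   # instance is not frozen
    )
```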
Limits
Version requirements:
The Run Diagnostics feature is available only in DataWorks Professional Edition or a more advanced edition. You can try this feature for free, but we recommend that you upgrade to Professional Edition or a more advanced edition to access the full set of features. For more information, see Intelligent diagnosis.
Only users of DataWorks Professional Edition or a more advanced edition can use the node aggregation, upstream analysis, and downstream analysis features in the DAG. For more information, see Billing for DataWorks editions.
Permission control:
Some features can be used only by users who have O&M permissions. If a feature is grayed out or not displayed, check whether the target user has O&M permissions. For more information, see Go to the Management Center and Manage permissions for workspace-level modules.
Feature limitations:
You cannot manually delete auto triggered instances. The platform automatically deletes instances about 30 days after they expire. If a task no longer needs to run, you can freeze its instances.
For tasks that run on shared resource groups for scheduling, their instances are retained for one month (30 days) and their logs are retained for one week (7 days).
For tasks that run on Serverless resource groups or exclusive resource groups for scheduling, their instances and logs are retained for one month (30 days).
For instances that have finished running, the platform performs a daily cleanup that clears runtime logs larger than 3 MB.
Precautions
Auto triggered tasks generate instances at scheduled times. Regardless of the instance generation method you choose, instances run tasks using the latest code in the production environment.
To monitor task execution, you must first set monitoring rules for the task. For more information, see Overview of intelligent monitoring. For tasks with alert monitoring configured, if a task fails but you do not receive an alert, check whether your mobile number and email address are configured on the Alert Contacts page. For more information, see Alert information.
The time when an auto triggered instance is first generated depends on the instance generation method you select. The methods include Generate On The Next Day (T+1) and Generate Immediately After Publishing. For more information, see Instance generation methods.
Note: Manually rerunning a task does not trigger alerts for custom rules.
Go to the Auto Triggered Instances page
Go to the Operation Center page.
Log on to the DataWorks console. In the top navigation bar, select the desired region. In the left-side navigation pane, choose Operation Center. On the page that appears, select the desired workspace from the drop-down list and click Go to Operation Center.
In the navigation pane on the left, click Auto Triggered Instances to go to the Auto Triggered Instances page.
On this page, you can view the running status of instances from different perspectives.
Instance perspective: View the running details of a single instance.
Workflow perspective: View the running overview of all instances in a workflow.
Instance perspective
View the instance list
Intelligent search mode
The intelligent search feature for auto triggered tasks lets you enter search content directly. The system automatically parses your query and filters the instance list.
Perform an intelligent search.
Click the Intelligent Search button in the filter box. In the dialog box that appears, enter the content you want to search for, such as Sort by instance type, and press Enter. The system automatically matches and displays the relevant instances.
Save the new view.
To save the search conditions for later use, click the save icon below the search bar. In the Save View dialog box, specify a custom View Name and click Save. You can then find and use this new view in the view search bar to search for instances.
Note: If you no longer want to use this new view, find its name in the view search bar. Hover the mouse pointer over the view name, click the ... button on the right, and select Edit or Delete.
Close intelligent search.
To perform an exact search using filter conditions, press the Esc key or click the Close Intelligent Search button in the instance toolbar to exit the intelligent search mode.
Condition-based filtering mode
The condition-based filtering feature lets you accurately filter the instance list by specifying filter conditions.
Simple condition filtering.
In the toolbar, you can select multiple filter conditions, such as Task Name/Task ID/Instance ID, Schedule Resource Group, Alerts Generated In The Last 24 Hours, and Waiting For Resources For A Long Time, to filter the instance list.
Complex condition filtering.
Click the Filter button in the filter box. You can then combine multiple conditions, such as Task Name/Task ID/Instance ID, Schedule Resource Group, Runtime, and Computing Resource Name, to accurately find the required instances.
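If you prefer to query the same information programmatically rather than through the console filters, the DataWorks OpenAPI exposes a ListInstances operation. The following is a hedged sketch using the generic Alibaba Cloud Python SDK client; the endpoint, API version, and parameter names (ProjectEnv, ProjectId, PageNumber, PageSize) are assumptions to verify against the current OpenAPI reference.

```python
# Hedged sketch: listing production instances through the DataWorks OpenAPI.
# The action and parameter names are assumptions; the console filters
# described above remain the documented path.
from aliyunsdkcore.client import AcsClient
from aliyunsdkcore.request import CommonRequest

client = AcsClient('<access_key_id>', '<access_key_secret>', 'cn-shanghai')

request = CommonRequest()
request.set_domain('dataworks.cn-shanghai.aliyuncs.com')
request.set_version('2020-05-18')
request.set_action_name('ListInstances')
request.set_method('POST')
request.add_query_param('ProjectEnv', 'PROD')        # production Operation Center
request.add_query_param('ProjectId', '<project_id>')
request.add_query_param('PageNumber', '1')
request.add_query_param('PageSize', '20')

print(client.do_action_with_exception(request))
```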
Operate on auto triggered instances
Single instance operations
To perform an operation on an auto triggered instance, find the instance in the list and use the corresponding feature in the Actions column. The available features are described as follows:
Feature | Description |
DAG | Displays the upstream and downstream dependencies of the auto triggered instance. You can perform related operations in the DAG. For more information, see Appendix: Introduction to DAG features. |
Run Diagnostics | Performs end-to-end analysis on a task. When a task does not run as expected, use this feature to locate the issue. For more information, see Intelligent diagnosis. |
Rerun | Reruns a task that is in the Successful or Failed state. After the task runs successfully, it can trigger the scheduling of downstream tasks that have not run. This is often used to handle error nodes and missed nodes. |
More > Rerun Descendant Nodes | Reruns the descendant nodes of a task that is in the Successful or Failed state. You can select the downstream tasks to rerun. After a task runs successfully, it can trigger the scheduling of downstream tasks that have not run. This is often used for data fixing. |
More > Set To Success | Sets the status of a failed task to successful. Use this feature when a task fails, but you do not want it to block its downstream tasks. This is often used to handle error nodes. |
More > Stop | Stops a task that does not need to run. After being stopped, the task exits with a failed status. You can stop only instances that are in the Waiting For Time, Waiting For Resources, or Running state. |
More > Pause (Freeze) | Use this feature when the current instance and its downstream instances do not need to run. Freezing an auto triggered instance affects only the current instance. A frozen auto triggered instance is not automatically scheduled to run (it does not actually run data) and blocks its downstream nodes from running (downstream tasks are not automatically scheduled). Note: Do not freeze the projectname_root node. It is the root node of the workspace, and all instances of auto triggered tasks depend on it. |
More > Resume (Unfreeze) | Unfreezes a frozen instance. Note: The unfreeze operation affects only the current instance. If the auto triggered task is still frozen, the instance generated the next day will also be frozen. |
More > View Lineage | View the data lineage of the current instance. |
More > View Auto Triggered Task Details | View the basic information of the current instance. |
More > View Runtime Log | After a task starts to run, you can view its detailed execution process in the runtime log. For a description of the core parameters in the log, see Appendix: Description of runtime log parameters. |
More > Modify Schedule Resource Group | Modifies the schedule resource group used to run the current instance. This operation does not change the resource group of the auto triggered task to which the instance belongs. |
Batch instance operations
To perform batch operations on auto triggered instances, select the required instances in the list. Then, at the bottom of the list, perform batch operations such as Stop, Rerun, Set To Success, Modify Resource Group, Pause (Freeze), or Resume (Unfreeze).
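Batch operations can also be scripted if you already have the instance IDs at hand. The sketch below assumes a RestartInstance OpenAPI operation that takes InstanceId and ProjectEnv; treat the operation and parameter names as assumptions and verify them against the DataWorks OpenAPI reference before use.

```python
# Hedged sketch: rerunning several instances collected from the instance list.
# RestartInstance and its parameters are assumptions; the console's batch
# toolbar described above is the documented path.
from aliyunsdkcore.client import AcsClient
from aliyunsdkcore.request import CommonRequest

client = AcsClient('<access_key_id>', '<access_key_secret>', 'cn-shanghai')

def restart_instance(instance_id):
    request = CommonRequest()
    request.set_domain('dataworks.cn-shanghai.aliyuncs.com')
    request.set_version('2020-05-18')
    request.set_action_name('RestartInstance')
    request.set_method('POST')
    request.add_query_param('InstanceId', str(instance_id))
    request.add_query_param('ProjectEnv', 'PROD')
    return client.do_action_with_exception(request)

for instance_id in [1234567890, 1234567891]:  # instance IDs copied from the list
    restart_instance(instance_id)
```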
View the DAG of an auto triggered task
Click the DAG button in the Actions column of an auto triggered instance to go to the DAG details page of the instance.
DAG panel features
On the instance DAG details page, you can use the following features in the DAG panel to aggregate nodes, analyze downstream dependencies, and adjust the DAG display style.
Feature | Description |
Node aggregation | As needed, click the feature icons in the upper-left corner to aggregate instance information by different dimensions. |
Upstream Analysis and Downstream Analysis | When the number of auto triggered tasks or dependency levels is large, you can use the Upstream Analysis and Downstream Analysis features to count the number of upstream and downstream tasks affected by the current task. |
DAG display settings | As needed, click the feature icons in the upper-right corner to adjust the display style of the DAG. |
DAG operations
On the instance DAG details page, right-click an instance in the flow to view its upstream and downstream relationships, code details, and other related information. The specific operations are as follows:
Expand Parent Nodes: View the upstream tasks of the current node to understand which nodes affect its data output. You can expand parent nodes by level, up to six levels at a time.
Expand Child Nodes: View the downstream tasks of the current node to understand which nodes are affected by its data output. You can expand child nodes by level, up to six levels at a time.
View Runtime Log: After a task starts to run, you can view its detailed execution process in the runtime log. For a description of the core parameters in the log, see Appendix: Description of runtime log parameters.
Run Diagnostics: Diagnose upstream dependencies, time scheduling, scheduling resources, and task running status.
View Code: Confirm the current code of the node in the production environment. If the code is not as expected, check whether the latest version of the node has been successfully published.
Edit Node: Go to the Data Development page and open the current node.
View Lineage: View the data lineage of the current instance.
View More Details: View the basic properties, operation logs, and task code of the instance.
View Auto Triggered Task: View information about the auto triggered task to which the current instance belongs.
Go to Task 360: In the Data Governance center, you can obtain a panoramic view of a task from multiple dimensions, such as its associated baselines and instance running status, and perform task administration. For more information, see Get a panoramic view of a task.
Stop: Stop a task that does not need to run. After being stopped, the task exits with a failed status. You can stop only instances that are in the Waiting For Time, Waiting For Resources, or Running state.
Rerun: Rerun a task that is in the Successful or Failed state. After the task runs successfully, it can trigger the scheduling of downstream tasks that have not run. This is often used to handle error nodes and missed nodes.
Rerun Descendant Nodes: Rerun the descendant nodes of a task that is in the Successful or Failed state. You can select the downstream tasks to rerun. After a task runs successfully, it can trigger the scheduling of downstream tasks that have not run. This is often used for data fixing.
Set To Success: Set the status of a failed task to successful. Use this feature when a task fails, but you do not want it to block its downstream tasks. This is often used to handle error nodes.
Resume: Resume a failed task from the point of failure. If a task contains multiple SQL statements, it can be resumed from the specific SQL statement that failed.
Note: This operation is supported only for MaxCompute SQL tasks.
Trigger DQC Check: If the task is configured with data quality rules, you can trigger a check of those rules.
Emergency Operation: An emergency operation is effective only for the current run of the current node.
Remove Dependency: Remove a dependency for a specified task. Use this feature to remove the dependency relationship of the current node. This is often used to urgently remove an upstream dependency when an upstream task fails and has no data relationship with the current instance, allowing the current task to run.
Note: Confirm whether this operation affects data based on the task code and data lineage.
Modify Priority: The priority of an instance is inherited from the priority of the baseline to which the instance belongs. You can reset the priority here as needed. A larger value indicates a higher priority.
Force Rerun: Forcibly rerun the current node. This operation is supported for auto triggered instances that are successful, failed, or have not run. It is often used for data fixing.
Force Rerun Descendant Nodes: Rerun data for the data timestamps of yesterday and the day before yesterday. This operation is supported only for auto triggered instances that are successful or failed. It is often used for data fixing. For more information, see Appendix: Force rerun descendant nodes.
Note: Only workspace administrators, tenant administrators, and Alibaba Cloud accounts can initiate this operation.
Clone Instance: Create a new instance (clone instance) with the same configuration based on the current instance that is in the Running state (host instance). The new instance is named in the format dw_clone_<node name>. Note: You can clone only ODPS SQL node instances. Each instance can be cloned only once.
Execution logic for host and clone instances:
Both the host and clone instances are in the running state. If the host instance succeeds first, the clone instance is stopped. If the clone instance succeeds first, the host instance is stopped and its status is set to successful.
If a downstream task of the current task has a clone instance, rerunning the downstream tasks does not trigger the execution of the clone instance.
Pause (Freeze): Freeze an instance if it and its downstream instances do not need to run.
Important: A frozen upstream instance blocks its downstream instances. Use this operation with caution.
Do not operate on the projectname_root node. This is the root node of the workspace. All instances of auto triggered tasks depend on this node. If you freeze this node, the instances cannot run.
You cannot freeze an instance that is in the Waiting For Resources, Waiting For Time, or Running state (for example, the node code is running or data quality is being checked).
Resume (Unfreeze): Unfreeze a frozen instance.
If the instance has not run yet, it runs automatically after its upstream tasks are complete.
If all upstream tasks are complete, the task is directly set to failed. You must manually rerun the instance for it to run normally.
Note: The unfreeze operation affects only the current instance. If the auto triggered task is still frozen, the instance generated the next day will also be frozen.
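The two unfreeze outcomes described above can be summarized as a simple decision. The following is an illustrative sketch (hypothetical objects, not platform code):

```python
# Illustrative sketch of the unfreeze behavior described above.
# The instance and its upstream objects are hypothetical stand-ins.
def unfreeze(instance):
    instance.frozen = False
    if all(up.is_finished() for up in instance.upstream):
        # All upstream tasks finished while this instance was frozen:
        # the instance is set to failed and must be rerun manually.
        instance.status = "FAILED"
    # Otherwise the instance waits and runs automatically once its
    # upstream tasks are complete.
```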
View instance details
In the instance DAG, left-click a specific instance. In the window that appears, click View Log or Expand Details to view detailed information such as Properties, Runtime Log, Operation Log, and Code.
Feature | Description |
Properties | On this tab, you can view the scheduling properties of the task in the production environment. For more information about the parameters on the interface, see Scheduling configuration. |
Runtime Log | After a task starts to run, you can view its detailed execution process in the runtime log. For a description of the core parameters in the log, see Appendix: Description of runtime log parameters. |
Operation Log | View the operation records (time, operator, and specific operation) for a task or instance. |
Code | View the latest code of the current task in the production environment. If the code is not as expected, check whether the latest version of the task has been successfully published. For more information, see Publish a task. |
Workflow perspective
Click the Workflow Perspective tab on the Auto Triggered Instances page to go to the workflow perspective interface.
In the workflow perspective, only dependencies within the workflow are displayed. If dependencies exist across workflows or workspaces, you must switch to the Instance perspective to view them.
View the workflow list
Feature | Description |
Workflow running status overview | The Workflow column uses visual icons to show the running status of the workflow, including the numbers of running, successful, failed, and other-state instances in the current workflow. The list mode shows statistics for normal tasks only, which excludes dry-run and frozen tasks. The DAG panel displays all types of tasks. |
Workflow O&M Operations | You can perform O&M operations on a workflow. |
Appendix: Description of runtime log parameters
After a task starts to run, you can view its running details in the runtime log. The core parameters in the log are described as follows.
Parameter | Description |
SKYNET_ONDUTY | Task owner. |
SKYNET_PARAVALUE | List of scheduling parameters. |
SKYNET_TASKID | Instance ID. |
SKYNET_ID | Node ID. |
SKYNET_NODENAME | Node name. |
SKYNET_APPNAME | Workspace name. |
SKYNET_REGION | Region where the workspace is located. |
SKYNET_CYCTIME | Scheduled runtime of the node. |
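If you need the same context from inside the task itself (for example, to log the scheduled run time alongside your own output), these values are typically also visible to the running process. The following sketch assumes they are exposed as environment variables under the names listed above and that SKYNET_CYCTIME uses a yyyyMMddHHmmss format; verify both assumptions against your own runtime log.

```python
# Hedged sketch: reading scheduling context inside a task, assuming the
# SKYNET_* values above are exposed as environment variables. Verify the
# variable names and the SKYNET_CYCTIME format in your own runtime log.
import os
from datetime import datetime

cyc_time = os.environ.get("SKYNET_CYCTIME", "")       # assumed format: 20240101000000
node_name = os.environ.get("SKYNET_NODENAME", "unknown")
instance_id = os.environ.get("SKYNET_TASKID", "unknown")

if cyc_time:
    scheduled = datetime.strptime(cyc_time, "%Y%m%d%H%M%S")
    print(f"Instance {instance_id} of node {node_name} was scheduled for {scheduled}")
```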
FAQ
Instance exception troubleshooting
Troubleshooting for tasks that are not running
Dry-run
For more frequently asked questions, see FAQ summary.