DataWorks: View auto triggered instances

Last Updated: Aug 14, 2025

An auto triggered instance is a snapshot that is automatically generated based on the scheduling configuration of an auto triggered task. You can view the details of an instance and perform related operations in a list or a directed acyclic graph (DAG).

Usage notes

  • Normal task: A task that runs code logic. It does not include dry-run tasks (such as tasks with the scheduling property set to dry-run, instances generated outside the scheduling time range, unselected branches of branch nodes, or expired instances from real-time tasks) or frozen tasks.

  • O&M environment: In a standard mode workspace, you can switch between the development Operation Center and the production Operation Center in the upper-left corner of the page. Tasks are not automatically scheduled in the development Operation Center. This means that no auto triggered instances are scheduled or run on the Auto Triggered Instances page.

  • Task execution and issue troubleshooting:

    • A scheduled task can run only if its upstream tasks run successfully, the scheduled time for the current task has arrived, scheduling resources are sufficient, and the current task is not frozen. For more information, see Task execution conditions.

    • If a task is not running, first use the Upstream Analysis feature in the DAG panel to quickly locate the key upstream tasks that are blocking the current task. Then, use the run diagnostics feature to diagnose why the key instances are not running or identify any existing issues. This feature is particularly useful for quickly locating issues and improving O&M efficiency when task dependencies are complex.
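The execution conditions above amount to a short chain of checks, and troubleshooting a stuck instance means finding the first check that fails. The following illustrative Python sketch (not DataWorks code; all names are hypothetical) mirrors that order:

```python
# Illustrative only: these checks are not DataWorks code; all names are hypothetical.
from dataclasses import dataclass

@dataclass
class InstanceState:
    upstream_all_successful: bool  # every upstream instance finished successfully
    scheduled_time_reached: bool   # the scheduled time for this instance has arrived
    resources_available: bool      # the scheduling resource group has free capacity
    frozen: bool                   # the instance was paused (frozen)

def first_blocker(state: InstanceState) -> str:
    """Return the first condition that prevents the instance from running."""
    if state.frozen:
        return "Instance is frozen; unfreeze it and rerun if needed."
    if not state.upstream_all_successful:
        return "An upstream instance has not succeeded; use Upstream Analysis to find it."
    if not state.scheduled_time_reached:
        return "The scheduled time has not arrived yet."
    if not state.resources_available:
        return "Scheduling resources are busy; check which tasks occupy the resource group."
    return "All conditions are met; the instance should run."

print(first_blocker(InstanceState(True, True, False, False)))
# Scheduling resources are busy; check which tasks occupy the resource group.
```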

Limits

  • Version requirements:

    • The Run Diagnostics feature is available only in DataWorks Professional Edition or higher. You can try this feature for free, but we recommend that you upgrade to Professional Edition or higher to access more features. For more information, see Intelligent diagnosis.

    • Only users of DataWorks Professional Edition or a more advanced edition can use the node aggregation, upstream analysis, and downstream analysis features in the DAG. For more information, see Billing for DataWorks editions.

  • Permission control:

    Some features can be used only by users who have O&M permissions. If a feature is grayed out or not displayed, go to the Management Center > Workspace > Workspace Members page to check whether the target user has O&M permissions. For more information, see Go to the Management Center and Manage permissions for workspace-level modules.

  • Feature limitations:

    • You cannot manually delete auto triggered instances. The platform automatically deletes instances about 30 days after they expire. If a task no longer needs to run, you can freeze its instances.

    • For tasks that run on shared resource groups for scheduling, their instances are retained for one month (30 days) and their logs are retained for one week (7 days).

    • For tasks that run on Serverless resource groups or exclusive resource groups for scheduling, their instances and logs are retained for one month (30 days).

    • For instances that have finished running, the platform runs a scheduled cleanup every day that clears logs larger than 3 MB.

Precautions

  • Auto triggered tasks generate instances at scheduled times. Regardless of the instance generation method you choose, instances run tasks using the latest code in the production environment.

  • To monitor task execution, you must first set monitoring rules for the task. For more information, see Overview of intelligent monitoring. For tasks with alert monitoring configured, if a task fails but you do not receive an alert, check whether your mobile number and email address are configured on the Alert Contacts page. For more information, see Alert information.

  • The time when an auto triggered instance is first generated depends on the instance generation method you select. The methods include Generate On The Next Day (T+1) and Generate Immediately After Publishing. For more information, see Instance generation methods.

    Note

    Manually rerunning a task does not trigger alerts for custom rules.

Go to the Auto Triggered Instances page

  1. Go to the Operation Center page.

    Log on to the DataWorks console. In the top navigation bar, select the desired region. In the left-side navigation pane, choose Data Development and O&M > Operation Center. On the page that appears, select the desired workspace from the drop-down list and click Go to Operation Center.

  2. In the navigation pane on the left, click Auto Triggered Task O&M > Auto Triggered Instances to go to the Auto Triggered Instances page.

    On this page, you can view the running status of instances from different perspectives.

Instance perspective

View the instance list

Intelligent search mode

The intelligent search feature for auto triggered tasks lets you enter search content directly. The system automatically parses your query and filters the instance list.

  1. Perform an intelligent search.

    Click the Intelligent Search button in the filter box. In the dialog box that appears, enter the content you want to search for, such as Sort by instance type, and press Enter. The system automatically matches and displays the relevant instances.

  2. Save the new view.

    To save the search conditions for later use, click Unsaved View > Save As New View below the search bar. In the Save View dialog box, specify a custom View Name and click Save. You can then find and use this new view in the view search bar to search for instances.

    Note

    If you no longer want to use this new view, find its name in the view search bar. Hover the mouse pointer over the view name, click the ... button on the right, and select Edit or Delete.

  3. Close intelligent search.

    To perform an exact search using filter conditions, press the Esc key or click the Close Intelligent Search button in the instance toolbar to exit the intelligent search mode.

Condition-based filtering mode

The condition-based filtering feature lets you accurately filter the instance list by specifying filter conditions.

  1. Simple condition filtering.

    In the toolbar, you can select multiple filter conditions, such as Task Name/Task ID/Instance ID, Schedule Resource Group, Alerts Generated In The Last 24 Hours, and Waiting For Resources For A Long Time, to filter the instance list.

  2. Complex condition filtering.

    Click the Filter button in the filter box. You can then combine multiple conditions, such as Task Name/Task ID/Instance ID, Schedule Resource Group, Runtime, and Computing Resource Name, to accurately find the required instances.
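In addition to filtering in the console, you can retrieve the same instance list programmatically. The following Python sketch uses the generic CommonRequest mechanism of the Alibaba Cloud SDK core package; the ListInstances action, the API version, and the parameter names (ProjectEnv, NodeName, PageSize, and so on) are assumptions based on the DataWorks OpenAPI and should be verified against the current API reference.

```python
# Minimal sketch: list auto triggered instances in the production environment.
# Requires the aliyun-python-sdk-core package. The action name "ListInstances",
# the API version, and all parameter names below are assumptions to verify.
import json

from aliyunsdkcore.client import AcsClient
from aliyunsdkcore.request import CommonRequest

client = AcsClient("<access_key_id>", "<access_key_secret>", "cn-shanghai")

request = CommonRequest()
request.set_domain("dataworks.cn-shanghai.aliyuncs.com")
request.set_version("2020-05-18")
request.set_action_name("ListInstances")
request.set_method("POST")
request.add_query_param("ProjectEnv", "PROD")            # production Operation Center
request.add_query_param("ProjectId", "<workspace_id>")   # placeholder workspace ID
request.add_query_param("NodeName", "example_node")      # hypothetical task name filter
request.add_query_param("PageSize", "20")

response = json.loads(client.do_action_with_exception(request))
print(json.dumps(response, indent=2, ensure_ascii=False))
```

If the call succeeds, the response contains the matching instances, including their instance IDs, which the operation sketches later in this topic reuse.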

Operate on auto triggered instances

Single instance operations

To perform an operation on an auto triggered instance, find the instance in the list and use the corresponding feature in the Actions column. The available features are described as follows:

Feature

Description

DAG

Displays the upstream and downstream dependencies of the auto triggered instance. You can perform related operations in the DAG. For more information, see Appendix: Introduction to DAG features.

Run Diagnostics

Performs end-to-end analysis on a task. When a task does not run as expected, use this feature to locate the issue. For more information, see Intelligent diagnosis.

Rerun

Reruns a task that is in the Successful or Failed state. After the task runs successfully, it can trigger the scheduling of downstream tasks that have not run. This is often used to handle error nodes and missed nodes.

More

Provides access to additional operations, which are described below.

Rerun Descendant Nodes

Reruns the descendant nodes of a task that is in the Successful or Failed state. You can select the downstream tasks to rerun. After a task runs successfully, it can trigger the scheduling of downstream tasks that have not run. This is often used for data fixing.

Set To Success

Sets the status of a failed task to successful. Use this feature when a task fails, but you do not want it to block its downstream tasks. This is often used to handle error nodes.

Stop

Stops a task that does not need to run. After being stopped, the task exits with a failed status. You can stop only instances that are in the Waiting For Time, Waiting For Resources, or Running state.

Pause (Freeze)

Use this feature when the current instance and its downstream instances do not need to run. Freezing an auto triggered instance affects only the current instance. A frozen auto triggered instance is not automatically scheduled to run (it does not actually run data) and blocks its downstream nodes from running (downstream tasks are not automatically scheduled).

Note
  • Do not operate on the projectname_root node. This is the root node of the workspace. All instances of auto triggered tasks depend on this node. If you freeze this node, the instances cannot run.

  • You cannot freeze an instance that is in the Waiting For Resources, Waiting For Time, or Running state (for example, the node code is running or data quality is being checked).

Resume (Unfreeze)

Unfreezes a frozen instance.

  • If the instance has not run yet, it runs automatically after its upstream tasks are complete.

  • If all upstream tasks are complete, the task is directly set to failed. You must manually rerun the instance for it to run normally.

Note

The unfreeze operation affects only the current instance. If the auto triggered task is still frozen, the instance generated the next day will also be frozen.

View Lineage

View the data lineage of the current instance.

View Auto Triggered Task Details

View the basic information of the current instance.

View Runtime Log

After a task starts to run, you can view its detailed execution process in the runtime log. For a description of the core parameters in the log, see Appendix: Description of runtime log parameters.

Modify Schedule Resource Group

Modifies the schedule resource group used to run the current instance. This operation does not change the resource group of the auto triggered task to which the instance belongs.

Batch instance operations

To perform batch operations on auto triggered instances, select the required instances in the list. Then, at the bottom of the list, perform batch operations such as Stop, Rerun, Set To Success, Modify Resource Group, Pause (Freeze), or Resume (Unfreeze).
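If you need to stop or rerun a large number of instances outside the console, the same operations are also exposed through the DataWorks OpenAPI. The sketch below is a rough equivalent of a batch Stop followed by a batch Rerun; the StopInstance and RestartInstance action names and their parameters are assumptions to verify against the API reference before use.

```python
# Sketch: stop a set of instances, then rerun them after the underlying issue is fixed.
# Requires the aliyun-python-sdk-core package. Action and parameter names are assumed.
from aliyunsdkcore.client import AcsClient
from aliyunsdkcore.request import CommonRequest

client = AcsClient("<access_key_id>", "<access_key_secret>", "cn-shanghai")

def call_instance_action(action: str, instance_id: int, project_env: str = "PROD") -> bytes:
    """Invoke a per-instance action such as StopInstance or RestartInstance (assumed names)."""
    request = CommonRequest()
    request.set_domain("dataworks.cn-shanghai.aliyuncs.com")
    request.set_version("2020-05-18")
    request.set_action_name(action)
    request.set_method("POST")
    request.add_query_param("InstanceId", str(instance_id))
    request.add_query_param("ProjectEnv", project_env)
    return client.do_action_with_exception(request)

instance_ids = [1234567890, 1234567891]  # placeholder instance IDs from the instance list

for instance_id in instance_ids:
    call_instance_action("StopInstance", instance_id)      # batch Stop
# ...after fixing the issue, trigger a batch Rerun:
for instance_id in instance_ids:
    call_instance_action("RestartInstance", instance_id)   # batch Rerun
```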

View the DAG of an auto triggered task

Click the DAG button in the Actions column of an auto triggered instance to go to the DAG details page of the instance.

DAG panel features

On the instance DAG details page, you can use the following features in the DAG panel to aggregate nodes, analyze downstream dependencies, and adjust the DAG display style.

  • Node aggregation icons: As needed, click the feature icons in the upper-left corner to aggregate instance information by the following dimensions: Do Not Aggregate, Aggregate By Workspace, Aggregate By Owner, or Aggregate By Priority.

  • Upstream Analysis and Downstream Analysis: When the number of auto triggered tasks or dependency levels is large, use these features to count the upstream tasks that the current task depends on and the downstream tasks that it affects.

  • Display style icons: As needed, click the feature icons in the upper-right corner to adjust the display style of the DAG.

DAG operations

On the instance DAG details page, right-click an instance in the flow to view its upstream and downstream relationships, code details, and other related information. The specific operations are as follows:

  • Expand Parent Nodes: View the upstream tasks of the current node to understand which nodes affect its data output. You can expand parent nodes by level, up to six levels at a time.

  • Expand Child Nodes: View the downstream tasks of the current node to understand which nodes are affected by its data output. You can expand child nodes by level, up to six levels at a time.

  • View Runtime Log: After a task starts to run, you can view its detailed execution process in the runtime log. For a description of the core parameters in the log, see Appendix: Description of runtime log parameters.

  • Run Diagnostics: Diagnose upstream dependencies, time scheduling, scheduling resources, and task running status.

  • View Code: Confirm the current code of the node in the production environment. If the code is not as expected, check whether the latest version of the node has been successfully published.

  • Edit Node: Go to the Data Development page and open the current node.

  • View Lineage: View the data lineage of the current instance.

  • View More Details: View the basic properties, operation logs, and task code of the instance.

  • View Auto Triggered Task: View information about the auto triggered task to which the current instance belongs.

  • Go to Task 360: In the Data Governance center, you can obtain a panoramic view of a task from multiple dimensions, such as its associated baselines and instance running status, and perform task administration. For more information, see Get a panoramic view of a task.

  • Stop: Stop a task that does not need to run. After being stopped, the task exits with a failed status. You can stop only instances that are in the Waiting For Time, Waiting For Resources, or Running state.

  • Rerun: Rerun a task that is in the Successful or Failed state. After the task runs successfully, it can trigger the scheduling of downstream tasks that have not run. This is often used to handle error nodes and missed nodes.

  • Rerun Descendant Nodes: Rerun the descendant nodes of a task that is in the Successful or Failed state. You can select the downstream tasks to rerun. After a task runs successfully, it can trigger the scheduling of downstream tasks that have not run. This is often used for data fixing.

  • Set To Success: Set the status of a failed task to successful. Use this feature when a task fails, but you do not want it to block its downstream tasks. This is often used to handle error nodes.

  • Resume: Resume a failed task from the point of failure. If a task contains multiple SQL statements, it can be resumed from the specific SQL statement that failed.

    Note

    This operation is supported only for MaxCompute SQL tasks.

  • Trigger DQC Check: If the task is configured with data quality rules, you can trigger a check of those rules.

  • Emergency Operation: An emergency operation is effective only for the current run of the current node.

    • Remove Dependency: Remove a dependency for a specified task. Use this feature to remove the dependency relationship of the current node. This is often used to urgently remove an upstream dependency when an upstream task fails and has no data relationship with the current instance, allowing the current task to run.

      Note

      Confirm whether this operation affects data based on the task code and data lineage.

    • Modify Priority: The priority of an instance is inherited from the priority of the baseline to which the instance belongs. You can reset the priority here as needed. A larger value indicates a higher priority.

    • Force Rerun: Forcibly rerun the current node. This operation is supported for auto triggered instances that are successful, failed, or have not run. It is often used for data fixing.

    • Force Rerun Descendant Nodes: Rerun data for the data timestamps of yesterday and the day before yesterday. This operation is supported only for auto triggered instances that are successful or failed. It is often used for data fixing. For more information, see Appendix: Force rerun descendant nodes.

      Note

      Only workspace administrators, tenant administrators, and Alibaba Cloud accounts can initiate this operation.

    • Clone Instance: Create a new instance (clone instance) with the same configuration based on the current instance that is in the Running state (host instance). The new instance is named in the format dw_clone_<node name>.

      Note
      • You can clone only ODPS SQL node instances. Each instance can be cloned only once.

      • Execution logic for host and clone instances:

        • Both the host and clone instances are in the running state. If the host instance succeeds first, the clone instance is stopped. If the clone instance succeeds first, the host instance is stopped and its status is set to successful.

        • If a downstream task of the current task has a clone instance, rerunning the downstream tasks does not trigger the execution of the clone instance.

  • Pause (Freeze): Freeze an instance if it and its downstream instances do not need to run.

    Important
    • A frozen upstream instance blocks its downstream instances. Use this operation with caution.

    • Do not operate on the projectname_root node. This is the root node of the workspace. All instances of auto triggered tasks depend on this node. If you freeze this node, the instances cannot run.

    • You cannot freeze an instance that is in the Waiting For Resources, Waiting For Time, or Running state (for example, the node code is running or data quality is being checked).

  • Resume (Unfreeze): Unfreeze a frozen instance.

    • If the instance has not run yet, it runs automatically after its upstream tasks are complete.

    • If all upstream tasks are complete, the task is directly set to failed. You must manually rerun the instance for it to run normally.

    Note

    The unfreeze operation affects only the current instance. If the auto triggered task is still frozen, the instance generated the next day will also be frozen.

View instance details

In the instance DAG, left-click a specific instance. In the window that appears, click View Log or Expand Details to view detailed information such as Properties, Runtime Log, Operation Log, and Code.

Feature

Description

Properties

On this tab, you can view the scheduling properties of the task in the production environment. For more information about the parameters on the interface, see Scheduling configuration.

  • Relationship between node ID and instance ID:

    For tasks scheduled by hour or minute, you can use the node ID to locate all hourly or minute instances generated for that node on the current day. To locate a specific hourly or minute instance, use the instance ID for an exact search.

  • Task status: The task status is related to task execution. If a task is in a state such as not run, waiting for time, waiting for resources, or frozen, you can use Run Diagnostics to quickly locate the issue.

  • Time waiting for resources: When a task waits for resources for a long time, use the Run Diagnostics feature to identify which tasks are occupying resources when the current task is running. This helps you quickly find and troubleshoot abnormal tasks.

  • Runtime: If a task's runtime is significantly longer than its historical average, handle it based on the following scenarios.

    • Non-sync tasks: Consult the owner of the corresponding compute engine.

    • Offline sync tasks: A certain stage of the task may be slow, or the task may be waiting for resources for a long time. For more information, see FAQ about offline sync tasks.

  • Rule monitoring: You can view the monitoring rules associated with the current instance. You can click Create on the right to quickly create a monitoring rule for the task running status. For more information, see Rule management.

    Note

    Here, you can only view the details of rules for monitoring task running status. You cannot view data quality monitoring rules.

  • Baseline monitoring: You can view the baselines associated with the current instance. You can click Create on the right to quickly create a baseline. For more information, see Baseline management.

  • Labels: This section displays the custom labels you defined in Tag Management. If the current node has issues that need to be addressed, they are also displayed as labels. You can go to the Data Governance center to view details.

Runtime Log

After a task starts to run, you can view its detailed execution process in the runtime log. For a description of the core parameters in the log, see Appendix: Description of runtime log parameters.

Operation Log

View the operation records (time, operator, and specific operation) for a task or instance.

Code

View the latest code of the current task in the production environment. If the code is not as expected, check whether the latest version of the task has been successfully published. For more information, see Publish a task.
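The information shown on these tabs can also be read programmatically, for example to check an instance's status (as displayed on the Properties tab) from a script. The following sketch assumes the GetInstance action of the DataWorks OpenAPI and a Status field in its response; treat the action, parameter, and field names as assumptions to verify against the API reference.

```python
# Sketch: read an instance's current status. "GetInstance" and the response
# structure are assumed names; verify them before use.
import json

from aliyunsdkcore.client import AcsClient
from aliyunsdkcore.request import CommonRequest

client = AcsClient("<access_key_id>", "<access_key_secret>", "cn-shanghai")

def get_instance_status(instance_id: int) -> str:
    request = CommonRequest()
    request.set_domain("dataworks.cn-shanghai.aliyuncs.com")
    request.set_version("2020-05-18")
    request.set_action_name("GetInstance")
    request.set_method("POST")
    request.add_query_param("InstanceId", str(instance_id))
    request.add_query_param("ProjectEnv", "PROD")
    body = json.loads(client.do_action_with_exception(request))
    return body.get("Data", {}).get("Status", "UNKNOWN")  # assumed response fields

print(get_instance_status(1234567890))  # placeholder instance ID
```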

Workflow perspective

Click the Workflow Perspective tab on the Auto Triggered Instances page to go to the workflow perspective interface.

Note

In the workflow perspective, only dependencies within the workflow are displayed. If dependencies exist across workflows or workspaces, you must switch to the Instance perspective to view them.

View the workflow list

Feature

Description

Workflow running status overview

The Workflow column uses visual icons to show the running status of the workflow. The list mode shows statistics only for normal tasks, excluding dry-run and frozen tasks. The DAG panel displays all types of tasks.

  • Running: The number of running instances in the current workflow.

  • Successful: The number of successful instances in the current workflow.

  • Failed: The number of failed instances in the current workflow.

  • Others: The number of instances in other states in the current workflow.

Workflow O&M Operations

You can perform the following operations on a workflow:

  • DAG: View the workflow's DAG. In the workflow perspective, hourly and minute tasks in the workflow are grouped by default. Operations on a single instance in the workflow perspective are the same as in the instance perspective. For more information, see DAG panel features.

  • Rerun: Rerun all or specified tasks in the current workflow.

  • Stop: Stop the currently running workflow.

  • Freeze: Freeze the execution of the current workflow. After freezing, instances in the workflow will not run.

  • Unfreeze: Unfreeze a frozen workflow. After unfreezing, the workflow defaults to a failed state. You can rerun the workflow.

  • Set To Success: Set the current workflow to successful. After this, the nodes in the workflow will show a successful status.

Appendix: Description of runtime log parameters

After a task starts to run, you can view its running details in the runtime log. The core parameters in the log are described as follows.

  • SKYNET_ONDUTY: Task owner.

  • SKYNET_PARAVALUE: List of scheduling parameters.

  • SKYNET_TASKID: Instance ID.

  • SKYNET_ID: Node ID.

  • SKYNET_NODENAME: Node name.

  • SKYNET_APPNAME: Workspace name.

  • SKYNET_REGION: Region where the workspace is located.

  • SKYNET_CYCTIME: Scheduled runtime of the node.
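The runtime log itself can be fetched outside the console as well, which is handy when you want to extract the parameters above in a script. The sketch below assumes the GetInstanceLog action and its response layout; verify the names against the API reference before use.

```python
# Sketch: fetch an instance's runtime log and print the lines that carry the
# SKYNET_* parameters described above. Action and response field names are assumed.
import json
import re

from aliyunsdkcore.client import AcsClient
from aliyunsdkcore.request import CommonRequest

client = AcsClient("<access_key_id>", "<access_key_secret>", "cn-shanghai")

request = CommonRequest()
request.set_domain("dataworks.cn-shanghai.aliyuncs.com")
request.set_version("2020-05-18")
request.set_action_name("GetInstanceLog")
request.set_method("POST")
request.add_query_param("InstanceId", "1234567890")  # placeholder instance ID
request.add_query_param("ProjectEnv", "PROD")

body = json.loads(client.do_action_with_exception(request))
log_text = body.get("Data", "")  # assumed field holding the raw log text

for line in log_text.splitlines():
    if re.search(r"SKYNET_\w+", line):
        print(line)
```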

FAQ

For more frequently asked questions, see FAQ summary.