
DataWorks: Data push node

Last Updated: Mar 26, 2026

A data push node queries data from upstream nodes in a workflow and pushes the results to DingTalk groups, Lark groups, WeCom groups, Microsoft Teams, or email. Groups and teams receive the latest data automatically after each scheduled run.

How it works

A data push node reads the output parameters of its ancestor nodes and uses those values as its own input parameters. You reference those values in the push content using placeholders.

Two ancestor node types are supported:

  • SQL query node — queries a data source and exposes query results as output parameters. Reference fields using ${Field name} in the push content.

  • Assignment node — runs custom logic (ODPS SQL, Shell, or Python) and passes output to the push node. Reference values using ${Input parameter name} in the push content.

Channel capabilities

Before you configure a destination, check the limits that apply to your target channel:

| Channel | Limit |
| --- | --- |
| DingTalk | 20 KB per message |
| Lark | 20 KB per message; images must be less than 10 MB |
| WeCom | 20 messages per chatbot per minute |
| Microsoft Teams | 28 KB per message |
| Email | One email body per data push task |
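As an illustration of these limits, a message body can be size-checked before pushing. This is a minimal sketch, not a DataWorks API: the helper name and limit table are hypothetical, with byte values taken from the table above.

```python
# Hypothetical helper: check a rendered message body against the
# per-message size limits listed above (values in bytes).
# WeCom's limit is a rate limit, and Email has no byte limit here,
# so those channels pass the size check.
CHANNEL_LIMITS = {
    "DingTalk": 20 * 1024,
    "Lark": 20 * 1024,
    "Microsoft Teams": 28 * 1024,
}

def fits_channel_limit(channel: str, body: str) -> bool:
    """Return True if the UTF-8 encoded body fits the channel's limit."""
    limit = CHANNEL_LIMITS.get(channel)
    if limit is None:
        return True  # no per-message byte limit for this channel
    return len(body.encode("utf-8")) <= limit

print(fits_channel_limit("DingTalk", "x" * 100))  # → True
```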

Format guidance:

| Format | Use when | Ancestor node |
| --- | --- | --- |
| Markdown | You want a narrative message with inline data values | SQL query node or assignment node |
| Table | You want to display query results as a structured grid | SQL query node only |

Markdown example — the push body might look like:

## Daily sales report

- Region: ${region}
- Total orders: ${total_orders}
- Revenue: ${revenue}
- Report date: ${report_date}
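At run time, each ${...} placeholder in the body is replaced with the corresponding value from the ancestor node. Conceptually, the substitution works like the following sketch (illustrative only, not DataWorks internals; the field names match the example body above):

```python
import re

def render_body(template: str, values: dict) -> str:
    """Replace each ${key} placeholder with its value.

    Unknown keys are left untouched so a missing field is visible
    in the delivered message rather than silently dropped.
    """
    return re.sub(
        r"\$\{(\w+)\}",
        lambda m: str(values.get(m.group(1), m.group(0))),
        template,
    )

body = "- Region: ${region}\n- Total orders: ${total_orders}"
print(render_body(body, {"region": "East", "total_orders": 1024}))
# - Region: East
# - Total orders: 1024
```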

Prerequisites

Before you begin, make sure you have a serverless resource group that was created on or after June 28, 2024. If your serverless resource group was created before that date, submit a ticket to upgrade it before proceeding.

Limitations

  • Email: Only one email body can be added per data push task.

  • Email SMTP: Additional limits depend on the email service you use. Check the Simple Mail Transfer Protocol (SMTP) limits of your provider.

  • Supported regions: The data push feature is available only in the following regions: China (Hangzhou), China (Shanghai), China (Beijing), China (Shenzhen), China (Chengdu), China (Hong Kong), Singapore, Malaysia (Kuala Lumpur), US (Silicon Valley), US (Virginia), and Germany (Frankfurt).

Step 1: Create an ancestor node

A data push node cannot query data on its own — it relies on an ancestor node to produce the data. Create either an SQL query node or an assignment node first.

To push MaxCompute data, use an assignment node, not an SQL query node. See MaxCompute data push.

Create an SQL query node

  1. Log on to the DataWorks console. In the top navigation bar, select your region. In the left-side navigation pane, choose Data Development and O&M > Data Development. Select your workspace and click Go to Data Development.

  2. In DataStudio, double-click your workflow. On the workflow configuration tab, click the image icon and select the node type that matches your data source. In the Create Node dialog box, configure the node parameters and click Confirm.

  3. Double-click the created SQL query node and write the query code.

    You cannot push data from an ODPS SQL node directly. Create an assignment node instead and write the SQL statement there. See Configure data push flows in the workflow.
  4. In the right-side navigation pane, click the Properties tab and configure the basic, time, resource, dependency, and context settings. See Configure basic properties, Configure time properties, Configure the resource property, Configure same-cycle scheduling dependencies, and Configure node context.

  5. On the Properties tab, click the drop-down arrow next to Input and Output Parameters. Next to Output Parameters, click Add assignment parameter to add the outputs parameter.

  6. In the top toolbar, click the save icon.

Create an assignment node

  1. Log on to the DataWorks console. In the top navigation bar, select your region. In the left-side navigation pane, choose Data Development and O&M > Data Development. Select your workspace and click Go to Data Development.

  2. In DataStudio, double-click your workflow. On the workflow configuration tab, click the image icon and select Assignment Node in the General section. In the Create Node dialog box, configure the node parameters and click Confirm.

  3. Double-click the created assignment node. In the Language drop-down list, select ODPS SQL, SHELL, or Python and write the node code. See Assignment node.

  4. In the top toolbar, click the save icon.
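To make step 3 concrete, here is a minimal sketch of a Python assignment node script. The region and sales figures are made-up placeholder values; the output emitted by the script's final print statement is what the assignment node passes downstream, where the data push node can reference it through its input parameters (see Assignment node for the exact capture rules).

```python
# Sketch of a Python assignment node script (hypothetical values).
# The value printed by the last print statement is passed to the
# downstream data push node, which references it via
# ${<input parameter name>} in the push content.
region = "East"
total_orders = 1024
revenue = 56789.5

# Comma-separated so downstream nodes can split the row into fields.
row = f"{region},{total_orders},{revenue}"
print(row)
```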

Step 2: Create a data push node

  1. In DataStudio, double-click your workflow. On the workflow configuration tab, click the image icon and select Data Push in the General section. In the Create Node dialog box, configure the following parameters and click Confirm.

    | Parameter | Description |
    | --- | --- |
    | Node type | Select Data Push from the drop-down list. |
    | Path | Select the same path as the ancestor node created in Step 1. |
    | Name | Enter a name based on your business requirements. |
  2. Double-click the created data push node to open its configuration tab.

  3. Add the ancestor node as a parent node. In the right-side navigation pane, click Properties. In the Dependencies section, select Node Name from the drop-down list under Parent Nodes, enter the name of the ancestor node, and click Create.

  4. In the Resource Group section of the Properties tab, select a serverless resource group created on or after June 28, 2024.

  5. Add the outputs parameter of the ancestor node as an input parameter of the data push node. On the Properties tab, click the drop-down arrow next to Input and Output Parameters. Click Create next to Input Parameters, enter a parameter name in the Parameter Name column, and select the outputs parameter from the Value Source drop-down list. Close the Properties tab.

  6. Configure the Destination, Title, and Body for the push node.

    Destination: Select a destination from the Destination drop-down list. If no destination is available, click Create Destination and fill in the parameters described in the table below. For Lark webhooks, see Configure a Lark Webhook trigger. For Teams webhooks, see Create incoming webhooks with Workflows for Microsoft Teams. To manage existing destinations, go to the Service Development tab in DataService Studio, click the image icon in the lower-left corner, and then click the Destination Management tab. See the Create a webhook destination section in "Data push."

    Title: Enter a title for the message.

    Body: Click Add and select Markdown or Table. See the Configure the push content section in "Data push" for more details.

    • Markdown — write free-form content and embed data values as placeholders:

      • If the ancestor node is an SQL query node, use ${Field name} (the field names returned by the query).

      • If the ancestor node is an assignment node, use ${Input parameter name} (the input parameter names defined on the data push node).

    • Table — select the fields from the SQL query node output to display as a table. This option is available only when the ancestor node is an SQL query node.

    | Parameter | Description |
    | --- | --- |
    | Type | The push channel: DingTalk, Lark, WeCom, Microsoft Teams, or Email. |
    | Destination name | A name based on your business requirements. |
    | WebHook | The chatbot webhook URL for DingTalk, Lark, or WeCom; the incoming webhook URL for Microsoft Teams; or the SMTP address for Email. Get these from the respective platform. |
  7. In the top toolbar, click the save icon.
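For context on what a webhook destination receives: the push is delivered as an HTTP POST of a JSON payload to the webhook URL. The sketch below shows a DingTalk-style Markdown payload; it is illustrative only, since DataWorks assembles and sends the payload for you, and the title and body values are taken from the earlier example.

```python
import json

# Illustrative DingTalk chatbot payload for a Markdown push.
# DataWorks builds and POSTs this to the webhook URL on your behalf;
# it is shown here only to clarify what the destination receives.
payload = {
    "msgtype": "markdown",
    "markdown": {
        "title": "Daily sales report",
        "text": "## Daily sales report\n- Region: East\n- Total orders: 1024",
    },
}
print(json.dumps(payload, ensure_ascii=False))
```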

Step 3: Test, commit, and deploy

After configuring the workflow, test all data push flows before deploying.

  1. On the workflow configuration tab, click the run icon to run the workflow.

  2. When the success icon appears next to all nodes, click the commit icon.

  3. In the Commit dialog box, select the nodes to commit, enter a description, and select Ignore I/O Inconsistency Alerts.

  4. Click Confirm.

  5. Deploy the nodes. See Publish tasks.

Best practices

DataWorks supports several data push patterns for different scenarios. See Best practice for configuring data push nodes in a workflow for examples covering simple data push, combined data push, script data push, conditional data push, and MaxCompute data push.

What's next

After all nodes are deployed, monitor and manage them in Operation Center. See Perform basic O&M operations on auto triggered nodes.