
DataWorks:Configure and use node context parameters

Last Updated: Mar 26, 2026

Node context parameters let an upstream node (producer) pass runtime output to one or more downstream nodes (consumers). Downstream nodes reference these values directly in their code, so each run adapts to the actual output of the upstream node without hardcoding values.

How it works

All parameter passing follows the same three-step pattern:

  1. The upstream node generates a value and exposes it as an output parameter.

  2. The downstream node declares an input parameter and binds it to that output parameter. DataWorks automatically creates a same-cycle dependency between the two nodes.

  3. At runtime, the downstream node's code reads the value using ${InputParameterName}.

Example: If the upstream node outputs table_A, the downstream code SELECT * FROM ${input}; becomes SELECT * FROM table_A; at runtime.
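The substitution above happens inside the scheduler before the downstream code runs. As an illustration only (this mimics the observable behavior with sed; it is not how DataWorks is implemented), the before/after of step 3 looks like this:

```shell
#!/bin/sh
# Illustrative sketch: mimic the scheduler's placeholder substitution with sed.
# "input" and "table_A" match the example above; the sed call is our own.

code='SELECT * FROM ${input};'     # downstream code as written
value='table_A'                    # value produced by the upstream node

# Substitute ${input} with the upstream value, as the scheduler would.
resolved=$(printf '%s' "$code" | sed "s/\${input}/$value/")
echo "$resolved"                   # SELECT * FROM table_A;
```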

There are two ways to expose a value from an upstream node:

| Method | How the value is set | Supported node types |
| --- | --- | --- |
| Pass a constant or variable | Manually set in the Node Output Parameters section. Accepts constants, system context variables, and scheduling parameters. | |
| Pass an assignment result | The system captures the last query result in the node's code and assigns it to the built-in outputs parameter automatically. | Assignment nodes (MaxCompute SQL, Python 2, Shell) and nodes that support assignment parameters (see Limitations) |

Limitations

The Add Assignment Parameter feature requires DataWorks Standard Edition or higher.

Nodes that support Add Assignment Parameter:

  • EMR Hive

  • EMR Spark SQL

  • ODPS Script

  • Hologres SQL

  • AnalyticDB for PostgreSQL

  • ClickHouse SQL

  • Database node types

Step 1: Configure the upstream node to output a parameter

  1. Log on to the DataWorks console. Switch to your region, then choose Data Development & O&M > Data Development in the navigation pane. Select your workspace and click Go to Data Studio.

  2. In Data Studio, double-click the upstream node to open its editor.

  3. Click Scheduling on the right side of the canvas. Under Input and Output Parameters, go to the Node Output Parameters section.

  4. Choose one of the following methods to expose the value.

Method 1: Pass a constant or variable

  1. Click Add Parameter.

  2. Set the parameter fields:

    | Field | Description |
    | --- | --- |
    | Parameter Name | A custom name, for example, my_param |
    | Parameter Value | A constant (for example, hello), a system context variable (for example, ${status}), or a scheduling parameter (for example, $bizdate) |

Method 2: Pass an assignment result

Option A: Use an assignment node

An assignment node automatically captures the last query or output result from its code (MaxCompute SQL, Python 2, or Shell) and assigns it to the built-in outputs parameter. Downstream nodes reference outputs to retrieve the value. For details, see Assignment node.
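As a sketch of what such a Shell assignment node might look like (the script and the `sales_` table name are hypothetical; only the capture-the-last-output behavior comes from the description above):

```shell
#!/bin/sh
# Hypothetical upstream Shell assignment node. Per the behavior described
# above, the final output line is what lands in the built-in `outputs`
# parameter; everything before it is ordinary log output.

echo "starting up..."                # log line, not captured
table_name="sales_$(date +%Y%m%d)"   # value to hand downstream
echo "$table_name"                   # last output: captured as `outputs`
```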

Option B: Use an assignment parameter

For nodes that support assignment parameters:

  1. In Node Output Parameters, click Add Assignment Parameter.

  2. The system adds an output parameter named outputs. No additional configuration is needed; its value is the result of the last query in the node's code.

  3. Click Save.

Important

If the query returns an empty result, the current node continues running, but downstream nodes that reference the parameter may fail.
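Given that warning, a downstream Shell node can guard against a blank value before doing any work. This is a defensive pattern of our own, not a DataWorks feature; `param` stands in for the already-substituted input parameter:

```shell
#!/bin/sh
# Defensive check in a downstream node: if the upstream query returned an
# empty result, the substituted parameter value is blank. Fail fast rather
# than running the rest of the script with a missing value.

param=""                     # simulate an empty substituted value

if [ -z "$param" ]; then
  echo "input parameter is empty; skipping processing" >&2
  exit_code=1                # signal failure instead of processing blanks
else
  echo "processing ${param}"
  exit_code=0
fi
```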

Important

Before deleting an output parameter, confirm that no downstream nodes reference it. Deleting a parameter that downstream tasks depend on will cause those tasks to fail.

Step 2: Configure the downstream node to use the parameter

  1. Open the downstream node's editor. Navigate to Scheduling > Input and Output Parameters > Node input parameters and click Add Parameter.

  2. Set the Value Source to the output parameter of the upstream node, and specify a Parameter Value name for use in this node's code.

  3. Click Save in the toolbar. DataWorks automatically creates a same-cycle dependency from the downstream node to the upstream node. No manual dependency configuration is needed.

  4. Reference the parameter in the node's code using ${InputParameterName}.

    Shell example:

    echo "The value from the upstream node is ${param}"

    Array access: When the upstream node is an SQL node, the output is a two-dimensional array (rows of cells). When the upstream node is a Python or Shell node, the output is a one-dimensional array. Access individual elements as follows (indexes are 0-based):

    | Upstream node type | Access pattern | Returns |
    | --- | --- | --- |
    | SQL node | ${param[i]} | Row at index i |
    | SQL node | ${param[i][j]} | Cell at row i, column j |
    | Python / Shell node | ${param[i]} | Element at index i |
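To make the two-dimensional case concrete, here is a simulation of how an index expression resolves. The sample data and the awk call are our own; DataWorks performs this resolution internally before the downstream code runs:

```shell
#!/bin/sh
# Illustration of 2D indexing, not DataWorks internals. Suppose the upstream
# SQL node returned two rows of two columns; the scheduler would resolve
# ${param[1][0]} to the first cell of the second row (0-based indexes).

result='alice,30
bob,25'                          # sample two-dimensional result (CSV rows)

# Resolve "row 1, column 0", i.e. what ${param[1][0]} would yield.
cell=$(printf '%s\n' "$result" | awk -F',' 'NR==2 {print $1}')
echo "resolved cell: $cell"      # resolved cell: bob
```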

Step 3: Run and verify

Context parameters are passed between nodes only when instances run as part of a scheduled workflow. Running a downstream node in isolation skips parameter retrieval and causes the task to fail.

To test parameter passing, always run from the upstream node:

| Option | How to do it |
| --- | --- |
| Run the full workflow | In the workflow toolbar, click Run |
| Run up to a specific node | Right-click the downstream node and select Run to This Node |

In the generated DAG instance, click any node to view its run log and confirm the output is as expected.

System context variables

Use these variables as Parameter Value when configuring output parameters on an upstream node.

| Variable | Description |
| --- | --- |
| ${projectId} | Project ID |
| ${projectName} | MaxCompute project name |
| ${nodeId} | Node ID |
| ${gmtdate} | Start of the scheduled day (00:00:00), formatted as yyyy-MM-dd 00:00:00 |
| ${taskId} | Task instance ID |
| ${seq} | Sequence number of the task instance among all instances of the same node on the same day |
| ${cyctime} | Scheduled run time of the instance |
| ${status} | Instance status: SUCCESS or FAILURE |
| ${bizdate} | Data timestamp |
| ${finishTime} | Time when the instance finished running |
| ${taskType} | Run type: NORMAL, MANUAL, PAUSE, SKIP (dry run), UNCHOOSE, or SKIP_CYCLE (cycle dry run) |
| ${nodeName} | Node name |
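For example, if an upstream node exposes ${status} as an output parameter, a downstream Shell node could branch on the substituted value. The script below is a hypothetical sketch; the value is hardcoded to show the pattern after substitution:

```shell
#!/bin/sh
# Hypothetical downstream use of the ${status} context variable. At runtime
# the scheduler substitutes the real instance status; here we hardcode a
# sample value to demonstrate the branching.

status="SUCCESS"                 # what ${status} might resolve to at runtime

case "$status" in
  SUCCESS) msg="upstream instance succeeded" ;;
  FAILURE) msg="upstream instance failed" ;;
  *)       msg="unexpected status: $status" ;;
esac
echo "$msg"
```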