Node context parameters let an upstream node (producer) pass runtime output to one or more downstream nodes (consumers). Downstream nodes reference these values directly in their code, so each run adapts to the actual output of the upstream node without hardcoding values.
How it works
All parameter passing follows the same three-step pattern:
1. The upstream node generates a value and exposes it as an output parameter.
2. The downstream node declares an input parameter and binds it to that output parameter. DataWorks automatically creates a same-cycle dependency between the two nodes.
3. At runtime, the downstream node's code reads the value using `${InputParameterName}`.
Example: if the upstream node outputs `table_A`, the downstream code `SELECT * FROM ${input};` becomes `SELECT * FROM table_A;` at runtime.
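The substitution itself happens inside DataWorks before the node's code runs. As a rough local sketch (the variable names and the use of `sed` are illustrative, not part of DataWorks), the rewrite is equivalent to:

```shell
# Simulate the substitution DataWorks performs before running a node's code.
# "input" is the downstream input parameter name; "table_A" stands in for the
# value the upstream node produced at runtime.
UPSTREAM_VALUE="table_A"
CODE='SELECT * FROM ${input};'

# DataWorks rewrites the code with the bound value before execution.
RENDERED=$(echo "$CODE" | sed "s/\${input}/$UPSTREAM_VALUE/")
echo "$RENDERED"   # SELECT * FROM table_A;
```

The point of the sketch is only that the replacement is textual and happens before execution, so the downstream code never needs to hardcode the upstream value.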
There are two ways to expose a value from an upstream node:
| Method | How the value is set | Supported node types |
|---|---|---|
| Pass a constant or variable | Manually set in the Node Output Parameters section. Accepts constants, system context variables, and scheduling parameters. | — |
| Pass an assignment result | The system captures the last query result in the node's code and assigns it to the built-in outputs parameter automatically. | Assignment nodes (MaxCompute SQL, Python 2, Shell) and nodes that support assignment parameters (see Limitations) |
Limitations
The Add Assignment Parameter feature requires DataWorks Standard Edition or higher.
Nodes that support Add Assignment Parameter:
- EMR Hive
- EMR Spark SQL
- ODPS Script
- Hologres SQL
- AnalyticDB for PostgreSQL
- ClickHouse SQL
- Database node types
Step 1: Configure the upstream node to output a parameter
Log on to the DataWorks console. Switch to your region, then choose Data Development & O&M > Data Development in the navigation pane. Select your workspace and click Go to Data Studio.
In Data Studio, double-click the upstream node to open its editor.
Click Scheduling on the right side of the canvas. Under Input and Output Parameters, go to the Node Output Parameters section.
Choose one of the following methods to expose the value.
Method 1: Pass a constant or variable
Click Add Parameter.
Set the parameter fields:
| Field | Description |
|---|---|
| Parameter Name | A custom name, for example, `my_param` |
| Parameter Value | A constant (for example, `hello`), a system context variable (for example, `${status}`), or a scheduling parameter (for example, `$bizdate`) |
Method 2: Pass an assignment result
Option A: Use an assignment node
An assignment node automatically captures the last query or output result from its code (MaxCompute SQL, Python 2, or Shell) and assigns it to the built-in outputs parameter. Downstream nodes reference outputs to retrieve the value. For details, see Assignment node.
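The capture rule for a Shell assignment node can be sketched locally as follows. The assumption that only the last line of the script's output is kept is illustrative; DataWorks defines the actual capture behavior.

```shell
# Sketch: how a Shell assignment node's "outputs" value is derived.
# Assumption (illustrative): the captured value is the last line printed.
script() {
  echo "intermediate log line, not captured"
  echo "2024-01-01,table_A"   # last output line -> becomes "outputs"
}

outputs=$(script | tail -n 1)
echo "outputs=$outputs"
```

Earlier output lines are treated as ordinary logging; only the final result reaches downstream consumers through `outputs`.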
Option B: Use an assignment parameter
For nodes that support assignment parameters:

In Node Output Parameters, click Add Assignment Parameter.
The system adds an output parameter named
outputs. No additional configuration is needed — its value is the result of the last query in the node's code.Click Save.
If the query returns an empty result, the current node continues running, but downstream nodes that reference the parameter may fail.
Before deleting an output parameter, confirm that no downstream nodes reference it. Deleting a parameter that downstream tasks depend on will cause those tasks to fail.
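Given the empty-result caveat above, a downstream Shell node can defend itself by validating the substituted value before using it. The parameter name `param` is illustrative, and the local assignment stands in for the substitution DataWorks performs at runtime:

```shell
# Downstream Shell node: fail fast with a clear message if the upstream
# assignment produced an empty result, instead of failing mid-task.
# "param" is a hypothetical input parameter name; in DataWorks the value
# of ${param} would be substituted before this script runs.
param="table_A"

if [ -z "$param" ]; then
  echo "ERROR: upstream output parameter is empty" >&2
  exit 1
fi
echo "Processing $param"
```

This turns a confusing downstream failure into an explicit error that points at the upstream node.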
Step 2: Configure the downstream node to use the parameter
Open the downstream node's editor. Navigate to Scheduling > Input and Output Parameters > Node input parameters and click Add Parameter.
Set the Value Source to the upstream node's output parameter, and specify a parameter name to use in this node's code.
Click Save in the toolbar. DataWorks automatically creates a same-cycle dependency from the downstream node to the upstream node. No manual dependency configuration is needed.
Reference the parameter in the node's code using `${InputParameterName}`. Shell example:

```shell
echo "The value from the upstream node is ${param}"
```

Array access: When the upstream node is an SQL node, the output is a two-dimensional array. When the upstream node is a Python or Shell node, the output is a one-dimensional array. Access individual elements as follows (indexes are 0-based):
| Upstream node type | Access pattern | Returns |
|---|---|---|
| SQL node | `${param*}` | Entire row |
| SQL node | `${param[j]}` | Cell at index j |
| Python / Shell node | `${param*}` | Entire row |
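As a rough local analogy for these access patterns (DataWorks resolves the references itself before the code runs; the comma-delimited row format below is an assumption for the sketch, not documented behavior):

```shell
# Imitate resolving array references against an upstream SQL result row.
# Assumption: the row is comma-delimited; DataWorks defines the real format.
row="2024-01-01,table_A,SUCCESS"

# ${param*} -> the entire row.
echo "whole row: $row"

# ${param[1]} -> the cell at 0-based index 1; cut fields are 1-based,
# so index 1 corresponds to field 2.
cell=$(printf '%s' "$row" | cut -d',' -f2)
echo "cell 1: $cell"
```

The indexing is resolved by the scheduler, not by the shell, so the same `${param[j]}` syntax works regardless of the downstream node's language.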
Step 3: Run and verify
Context parameters are passed between nodes only when instances run as part of a scheduled workflow. Running a downstream node in isolation skips parameter retrieval and causes the task to fail.
To test parameter passing, always run from the upstream node:
| Option | How to do it |
|---|---|
| Run the full workflow | In the workflow toolbar, click Run |
| Run up to a specific node | Right-click the downstream node and select Run to This Node |
In the generated DAG instance, click any node to view its run log and confirm the output is as expected.
System context variables
Use these variables as Parameter Value when configuring output parameters on an upstream node.
| Variable | Description |
|---|---|
| ${projectId} | Project ID |
| ${projectName} | MaxCompute project name |
| ${nodeId} | Node ID |
| ${gmtdate} | Start of the scheduled day (00:00:00), formatted as yyyy-MM-dd 00:00:00 |
| ${taskId} | Task instance ID |
| ${seq} | Sequence number of the task instance among all instances of the same node on the same day |
| ${cyctime} | Scheduled run time of the instance |
| ${status} | Instance status: SUCCESS or FAILURE |
| ${bizdate} | Data timestamp |
| ${finishTime} | Time when the instance finished running |
| ${taskType} | Run type: NORMAL, MANUAL, PAUSE, SKIP (dry-run), UNCHOOSE, or SKIP_CYCLE (cycle dry-run) |
| ${nodeName} | Node name |
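A typical use is passing `${status}` and `${bizdate}` from an upstream node so a downstream Shell node can branch on them. In the sketch below, `up_status` and `up_bizdate` are hypothetical input parameter names, and the local assignments stand in for values DataWorks would substitute at runtime:

```shell
# Downstream Shell node branching on context values passed from upstream.
# In DataWorks, these variables would be input parameters bound to the
# upstream node's ${status} and ${bizdate}; stand-in values are used here.
up_status="SUCCESS"
up_bizdate="20240101"

if [ "$up_status" = "SUCCESS" ]; then
  echo "Upstream succeeded for bizdate $up_bizdate; proceeding."
else
  echo "Upstream failed; skipping downstream work." >&2
  exit 1
fi
```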