Task code such as SQL often needs to reference different data partitions on each run. Manually updating the code each time is error-prone and unsustainable. Scheduling parameters let you define placeholders once in your node code and assign values in Scheduling Configurations. At runtime, DataWorks replaces every placeholder with the actual data timestamp, scheduled runtime, or a constant you specify—no code changes required.
How it works
| Stage | Action | Goal |
|---|---|---|
| 1. Define | Add ${param} placeholders in the node code | Reserve slots for dynamic values |
| 2. Configure | Assign values in Scheduling Configurations > Scheduling Parameters | Bind placeholders to built-in variables, time expressions, or constants |
| 3. Test | Run a smoke test in the development environment | Verify that parameters resolve correctly before going live |
| 4. Publish and verify | Publish the node to Operation Center, then confirm parameter values | Prevent runtime errors in production |
Prerequisites
Before you begin, ensure that you have:
A DataWorks workspace with at least one node (for example, an ODPS SQL node)
Permission to edit nodes and access Scheduling Configurations
Step 1: Define parameters in the node code

Open the node editor by double-clicking the target node.
In the node code, add placeholders using `${ParameterName}`. Use descriptive names so each placeholder is easy to identify later.

```sql
-- Example: filter by partition date
SELECT * FROM my_table WHERE ds = '${pt_date}';
```

The `${ParameterName}` syntax applies to most node types, including ODPS SQL and synchronization nodes. PyODPS and Shell nodes use a different format; see Examples of scheduling parameter configurations for different node types.

In the right panel, click Scheduling Configurations to open the Scheduling Parameters section.
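As noted above, PyODPS nodes reference scheduling parameters differently from the `${...}` placeholder syntax. The sketch below is a minimal illustration, assuming a PyODPS node where assigned parameters are delivered through the built-in `args` dictionary; the parameter name `pt_date` and its value are stand-ins:

```python
# In a PyODPS node, scheduling parameters assigned in Scheduling
# Configurations are typically read from the built-in `args` dictionary
# instead of being substituted into the source text.
# DataWorks injects `args` at runtime; it is stubbed here so the
# sketch is self-contained.
args = {"pt_date": "20251016"}  # stub for the runtime-injected value

partition_spec = "ds=" + args["pt_date"]
print(partition_spec)  # ds=20251016
```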
Step 2: Assign values to parameters
DataWorks offers two modes for assigning values. The default is Visual Definition.
Visual definition (default)
Click Scheduling Configurations to the right of the node editor.

Add parameters in either of the following ways:
Add Parameter: Enter the parameter name manually. The name must match the placeholder in the code exactly.
Load Parameters From Code: DataWorks parses the code automatically, finds all `${...}` placeholders, and adds them as parameters. Enter a value for each one.
For each parameter, assign a value:
Click the input field to see a drop-down list of common expressions you can select directly.
Type a built-in system variable, a custom time expression, or a constant. For a complete list of supported formats, see Supported formats for scheduling parameters.
Built-in system variables
| Variable | Format | What it represents |
|---|---|---|
| $bizdate | yyyymmdd | The data timestamp—always one day before the scheduled runtime. Use this to reference the partition being processed, not the day the task runs. |
| $cyctime | yyyymmddhh24miss | The scheduled runtime of the task. Use this when you need the exact time the task is triggered. |
| ${yyyymmdd} | yyyymmdd | The data timestamp written as a custom time expression. Equivalent in value to $bizdate. |
| $[yyyymmddhh24miss] | yyyymmddhh24miss | The scheduled runtime written as a custom time expression. Equivalent in value to $cyctime. |
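The relationship between the two time values can be sketched in plain Python. This is an illustrative simulation of the resolution rules described above, not DataWorks code:

```python
from datetime import datetime, timedelta

def resolve(scheduled_runtime: datetime) -> dict:
    """Simulate how $bizdate and $cyctime resolve for one task instance."""
    return {
        # $cyctime: the scheduled runtime, in yyyymmddhh24miss format
        "cyctime": scheduled_runtime.strftime("%Y%m%d%H%M%S"),
        # $bizdate: the data timestamp, one day earlier, in yyyymmdd format
        "bizdate": (scheduled_runtime - timedelta(days=1)).strftime("%Y%m%d"),
    }

print(resolve(datetime(2025, 10, 17, 16, 0, 0)))
# {'cyctime': '20251017160000', 'bizdate': '20251016'}
```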
Some nodes, such as offline synchronization nodes, include a built-in `${bizdate}` placeholder that is pre-assigned the value `$bizdate`. You can rename `bizdate` to any custom name; it has no special meaning beyond this default assignment.

Define by expression
Click Define By Expression in the upper-right corner of the parameter list to switch modes.

In this mode, enter all parameters as a single expression string:
Separate multiple parameters with spaces.
Do not add spaces on either side of the equal sign. DataWorks validates this syntax and blocks saving if the expression is invalid.
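For example, using the placeholder name `pt_date` from Step 1 plus two additional names made up for illustration (`run_time`, `region`), a valid expression string looks like this:

```
pt_date=$bizdate run_time=$cyctime region=Hangzhou
```

Note the single spaces between parameters and the absence of spaces around each equal sign.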
Preview parameter values
After assigning values, click Scheduling Parameter Preview to see how the parameters resolve across the next N instances starting from a specified data timestamp. Adjust the timestamp and instance count as needed to confirm the configuration is correct.
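What the preview computes can be approximated in Python for a daily schedule: starting from a chosen data timestamp, each subsequent instance's `$bizdate` advances by one day. An illustrative simulation, not DataWorks code:

```python
from datetime import datetime, timedelta

def preview(start_bizdate: datetime, n: int) -> list:
    """Simulate a parameter preview: resolve $bizdate for the next n
    daily instances, starting from the given data timestamp."""
    return [(start_bizdate + timedelta(days=i)).strftime("%Y%m%d")
            for i in range(n)]

print(preview(datetime(2025, 10, 16), 3))
# ['20251016', '20251017', '20251018']
```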

Step 3: Run a smoke test
Smoke testing simulates a real scheduling run in the development environment, letting you verify parameter replacement and code execution before the node goes live.
Smoke testing generates actual instances and incurs instance fees. For billing details, see Billing of serverless resource groups.
Configure the schedule time and scheduling dependencies for the node.
Save and submit the node: click the save icon, then the submit icon. The smoke test always runs the code from the last submitted version—if you update the node after submitting, submit again before testing.
Click the smoke testing icon in the toolbar. In the Smoke Testing dialog box, select a data timestamp and click OK.
In Smoke Test Records, find the latest record and click View Log. Check the log output to confirm that each parameter resolved to the expected value.
If you close the window accidentally, click the smoke test records icon in the toolbar to reopen it.
The Run and Advanced Run features require you to assign constants to variables manually. They do not simulate scheduling parameter resolution, so you cannot use them to verify that your scheduling parameter configuration works as expected.
Step 4: Publish and verify
After the smoke test passes, publish the node to Operation Center for production scheduling.
Publish the node. For standard mode workspaces, click Publish in the upper-right corner of the node editor. For more information, see Publish a node.
If the node does not appear in Operation Center or if the scheduling parameters look incorrect, confirm that the publish succeeded. For details, see Publish a task.
Verify the parameter configuration in Operation Center. Go to Operation Center > Recurring Task O&M > Recurring Tasks. Search for the node by name, click it, and check Execution Parameters on the Properties tab.

(Optional) Run a smoke test in Operation Center. Operation Center also supports smoke testing to validate parameter replacement in the production environment. Note that this test runs against production data—proceed with caution to avoid unintended data changes. For details, see Run a test and view the test instance.

Confirm resolved values after scheduling. Once the task runs automatically, go to Recurring Instance, click the task instance name, and view the resolved Execution Parameters on the Properties tab to confirm the final substituted values.

Complete configuration example
This example walks through the full workflow using an ODPS SQL node with five scheduling parameters: two built-in system variables, two custom time expressions, and one constant.
For node-type-specific configuration examples, see Examples of scheduling parameter configurations for different node types.
Configure the node

Define variables in the node code:
```sql
-- Built-in system variables
SELECT '${var1}';
SELECT '${var2}';
-- Custom time expressions
SELECT '${var3}';
SELECT '${var4}';
-- Constant
SELECT '${var5}';
```

In Scheduling Configurations > Scheduling Parameters, assign values:

| Variable | Value | Resolves to |
|---|---|---|
| var1 | $bizdate | Data timestamp in yyyymmdd format |
| var2 | $cyctime | Scheduled runtime in yyyymmddhh24miss format |
| var3 | ${yyyymmdd} | Data timestamp in yyyymmdd format |
| var4 | $[yyyymmddhh24miss] | Scheduled runtime in yyyymmddhh24miss format |
| var5 | Hangzhou | Fixed constant |

(Optional) Set the schedule to run hourly. For details on time period configuration, see Time property configuration instructions.
Start time: 16:00
End time: 23:59
Interval: 1 hour
Configure scheduling dependencies. In this example, the root node is the upstream dependency. For details, see Configure scheduling dependencies.
Click the save icon and then the submit icon to save and submit the node.
Run a smoke test
Click the smoke testing icon. In the Smoke Testing dialog box, set the business time:
Data Timestamp: 2025-10-16
Start Time: 16:00
End Time: 17:00
Because the node runs hourly, two instances are generated: one at 16:00 and one at 17:00 on 2025-10-17. The actual runtime falls on 2025-10-17 because the data timestamp is always one day before the scheduled runtime.

Click OK to start the smoke test.
After the run completes, click the smoke test records icon to view the logs. The expected parameter values for each instance are:

| Variable | 16:00 instance | 17:00 instance |
|---|---|---|
| var1 | 20251016 | 20251016 |
| var2 | 20251017160000 | 20251017170000 |
| var3 | 20251016 | 20251016 |
| var4 | 20251017160000 | 20251017170000 |


Verify in Operation Center
Publish the node: click Publish in the upper-right corner of the node editor.
In DataStudio, click Operation Center in the upper-right corner of the top menu bar.
Go to Recurring Task O&M > Recurring Tasks and search for the node. The node appears in the list only after a successful publish.
Click the node name and view the Execution Parameters on the Properties tab. The execution parameters should read:
```
var1=$bizdate var2=$cyctime var3=${yyyymmdd} var4=$[yyyymmddhh24miss]
```
After a scheduled instance runs, go to Recurring Instance, find the task instance, and check Execution Parameters on the Properties tab to see the resolved values:
```
var1=20251016 var2=20251017160000 var3=20251016 var4=20251017160000
```