
DataWorks: Configure and use scheduling parameters

Last Updated: Mar 26, 2026

Task code such as SQL often needs to reference different data partitions on each run. Manually updating the code each time is error-prone and unsustainable. Scheduling parameters let you define placeholders once in your node code and assign values in Scheduling Configurations. At runtime, DataWorks replaces every placeholder with the actual data timestamp, scheduled runtime, or a constant you specify—no code changes required.
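Conceptually, the replacement behaves like simple template substitution over the node code. The sketch below is an illustration of the idea only, not DataWorks internals; the node code and the parameter name `pt_date` are made-up examples:

```python
from datetime import datetime, timedelta

def resolve(code: str, params: dict) -> str:
    """Replace each ${name} placeholder with its assigned value."""
    for name, value in params.items():
        code = code.replace("${" + name + "}", value)
    return code

# Scheduled runtime of one instance; the data timestamp ($bizdate)
# is always one day earlier.
cyctime = datetime(2025, 10, 17, 16, 0, 0)
bizdate = (cyctime - timedelta(days=1)).strftime("%Y%m%d")

code = "SELECT * FROM my_table WHERE ds = '${pt_date}';"
print(resolve(code, {"pt_date": bizdate}))
# SELECT * FROM my_table WHERE ds = '20251016';
```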

How it works

Stage | Action | Goal
1. Define | Add ${param} placeholders in the node code | Reserve slots for dynamic values
2. Configure | Assign values in Scheduling Configurations > Scheduling Parameters | Bind placeholders to built-in variables, time expressions, or constants
3. Test | Run a smoke test in the development environment | Verify that parameters resolve correctly before going live
4. Publish and verify | Publish the node to Operation Center, then confirm parameter values | Prevent runtime errors in production

Prerequisites

Before you begin, ensure that you have:

  • A DataWorks workspace with at least one node (for example, an ODPS SQL node)

  • Permission to edit nodes and access Scheduling Configurations

Step 1: Define parameters in the node code

  1. Open the node editor by double-clicking the target node.

  2. In the node code, add placeholders using ${ParameterName}. Use descriptive names so each placeholder is easy to identify later.

    -- Example: filter by partition date
    SELECT * FROM my_table WHERE ds = '${pt_date}';

    The ${ParameterName} syntax applies to most node types, including ODPS SQL and synchronization nodes. PyODPS and Shell nodes use a different format—see Examples of scheduling parameter configurations for different node types.

  3. In the right panel, click Scheduling Configurations to open the Scheduling Parameters section.

Step 2: Assign values to parameters

DataWorks offers two modes for assigning values. The default is Visual Definition.

Visual definition (default)

Click Scheduling Configurations to the right of the node editor.


Add parameters in either of the following ways:

  • Add Parameter: Enter the parameter name manually. The name must match the placeholder in the code exactly.

  • Load Parameters From Code: DataWorks parses the code automatically, finds all ${...} placeholders, and adds them as parameters. Enter a value for each one.

For each parameter, assign a value:

  • Click the input field to see a drop-down list of common expressions you can select directly.

  • Type a built-in system variable, a custom time expression, or a constant. For a complete list of supported formats, see Supported formats for scheduling parameters.

Built-in system variables

Variable | Format | What it represents
$bizdate | yyyymmdd | The data timestamp, always one day before the scheduled runtime. Use this to reference the partition being processed, not the day the task runs.
$cyctime | yyyymmddhh24miss | The scheduled runtime of the task. Use this when you need the exact time the task is triggered.
${yyyymmdd} | yyyymmdd | The data timestamp written as a custom time expression. Equivalent in value to $bizdate.
$[yyyymmddhh24miss] | yyyymmddhh24miss | The scheduled runtime written as a custom time expression. Equivalent in value to $cyctime.

Some nodes, such as offline synchronization nodes, include a built-in ${bizdate} placeholder that is pre-assigned the value $bizdate. You can rename bizdate to any custom name; it has no special meaning beyond this default assignment.
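To make the two time formats concrete, here is a small Python sketch (illustrative only, not DataWorks code) that derives both values from an assumed scheduled runtime:

```python
from datetime import datetime, timedelta

# Assume the task is scheduled to run at 2025-10-17 16:00:00.
scheduled = datetime(2025, 10, 17, 16, 0, 0)

cyctime = scheduled.strftime("%Y%m%d%H%M%S")                  # $cyctime
bizdate = (scheduled - timedelta(days=1)).strftime("%Y%m%d")  # $bizdate

print(cyctime)  # 20251017160000
print(bizdate)  # 20251016
```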

Define by expression

Click Define By Expression in the upper-right corner of the parameter list to switch modes.


In this mode, enter all parameters as a single expression string:

  • Separate multiple parameters with spaces.

  • Do not add spaces on either side of the equal sign. DataWorks validates this syntax and blocks saving if the expression is invalid.
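As an illustration of these rules, a valid expression string looks like `pt_date=$bizdate run_time=$cyctime region=Hangzhou`. The sketch below mimics the name=value parsing idea; the function name, the regex check, and the parameter names are assumptions for illustration, not DataWorks's actual validator:

```python
import re

def parse_expression(expr: str) -> dict:
    """Split a space-separated list of name=value pairs; reject
    entries with spaces around '=' or without an '=' at all."""
    params = {}
    for pair in expr.split(" "):
        if not re.fullmatch(r"[^=\s]+=[^=\s]+", pair):
            raise ValueError(f"invalid pair: {pair!r}")
        name, value = pair.split("=", 1)
        params[name] = value
    return params

print(parse_expression("pt_date=$bizdate run_time=$cyctime region=Hangzhou"))
# {'pt_date': '$bizdate', 'run_time': '$cyctime', 'region': 'Hangzhou'}
```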

Preview parameter values

After assigning values, click Scheduling Parameter Preview to see how the parameters resolve across the next N instances starting from a specified data timestamp. Adjust the timestamp and instance count as needed to confirm the configuration is correct.
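The preview simply rolls the data timestamp forward instance by instance. The sketch below simulates that for a daily schedule, assuming a 00:00 schedule time; the real preview window and interval come from your node's time properties, and this code is an illustration, not the DataWorks implementation:

```python
from datetime import datetime, timedelta

def preview(first_bizdate: str, instances: int):
    """List ($bizdate, $cyctime) for the next N daily instances,
    with each run one day after its data timestamp."""
    start = datetime.strptime(first_bizdate, "%Y%m%d")
    rows = []
    for i in range(instances):
        biz = start + timedelta(days=i)
        run = biz + timedelta(days=1)
        rows.append((biz.strftime("%Y%m%d"), run.strftime("%Y%m%d%H%M%S")))
    return rows

for biz, cyc in preview("20251016", 3):
    print(biz, cyc)
# 20251016 20251017000000
# 20251017 20251018000000
# 20251018 20251019000000
```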


Step 3: Smoke testing

Smoke testing simulates a real scheduling run in the development environment, letting you verify parameter replacement and code execution before the node goes live.

Important

Smoke testing generates actual instances and incurs instance fees. For billing details, see Billing of serverless resource groups.

  1. Configure the schedule time and scheduling dependencies for the node.

  2. Save and submit the node: click the Save icon, then the Submit icon. The smoke test always runs the code from the last submitted version; if you update the node after submitting, submit again before testing.

  3. Click the smoke testing icon in the toolbar. In the Smoke Testing dialog box, select a data timestamp and click OK.

  4. In Smoke Test Records, find the latest record and click View Log. Check the log output to confirm that each parameter resolved to the expected value.

    If you close the window accidentally, click the smoke test records icon in the toolbar to reopen it.


Important

The Run and Advanced Run features require you to assign constants to variables manually. They do not simulate scheduling parameter resolution, so you cannot use them to verify that your scheduling parameter configuration works as expected.

Step 4: Publish and verify

After the smoke test passes, publish the node to Operation Center for production scheduling.

If the node does not appear in Operation Center or if the scheduling parameters look incorrect, confirm that the publish succeeded. For details, see Publish a task.
  1. Publish the node. For standard mode workspaces, click Publish in the upper-right corner of the node editor. For more information, see Publish a node.

  2. Verify the parameter configuration in Operation Center. Go to Operation Center > Recurring Task O&M > Recurring Tasks. Search for the node by name, click it, and check Execution Parameters on the Properties tab.


  3. (Optional) Run a smoke test in Operation Center. Operation Center also supports smoke testing to validate parameter replacement in the production environment. Note that this test runs against production data—proceed with caution to avoid unintended data changes. For details, see Run a test and view the test instance.


  4. Confirm resolved values after scheduling. Once the task runs automatically, go to Recurring Instance, click the task instance name, and view the resolved Execution Parameters on the Properties tab to confirm the final substituted values.


Complete configuration example

This example walks through the full workflow using an ODPS SQL node with five scheduling parameters: two built-in system variables, two custom time expressions, and one constant.

For node-type-specific configuration examples, see Examples of scheduling parameter configurations for different node types.

Configure the node

  1. Define variables in the node code:

    -- Built-in system variables
    SELECT '${var1}';
    SELECT '${var2}';
    -- Custom time expressions
    SELECT '${var3}';
    SELECT '${var4}';
    -- Constant
    SELECT '${var5}';
  2. In Scheduling Configurations > Scheduling Parameters, assign values:

    Variable | Value | Resolves to
    var1 | $bizdate | Data timestamp in yyyymmdd format
    var2 | $cyctime | Scheduled runtime in yyyymmddhh24miss format
    var3 | ${yyyymmdd} | Data timestamp in yyyymmdd format
    var4 | $[yyyymmddhh24miss] | Scheduled runtime in yyyymmddhh24miss format
    var5 | Hangzhou | Fixed constant
  3. (Optional) Set the schedule to run hourly using the following time properties. For details on time period configuration, see Time property configuration instructions.

    • Start time: 16:00

    • End time: 23:59

    • Interval: 1 hour

  4. Configure scheduling dependencies. In this example, the root node is the upstream dependency. For details, see Configure scheduling dependencies.

  5. Click the Save icon, then the Submit icon to save and submit the node.

Run a smoke test

  1. Click the smoke testing icon. In the Smoke Testing dialog box, set the business time:

    • Data Timestamp: 2025-10-16

    • Start Time: 16:00

    • End Time: 17:00

    Because the node runs hourly, this window generates two instances: one at 16:00 and one at 17:00 on 2025-10-17. The instances run on 2025-10-17 because the data timestamp is always one day before the scheduled runtime.

  2. Click OK to start the smoke test.

  3. After the run completes, click the smoke test records icon to view the logs. The expected parameter values for each instance are:

    Variable | 16:00 instance | 17:00 instance
    var1 | 20251016 | 20251016
    var2 | 20251017160000 | 20251017170000
    var3 | 20251016 | 20251016
    var4 | 20251017160000 | 20251017170000
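These expected values follow directly from the one-day offset between the data timestamp and each instance's runtime. A small Python check of the arithmetic (illustrative only; DataWorks computes these values itself):

```python
from datetime import datetime, timedelta

data_timestamp = datetime(2025, 10, 16)

resolved = []
for hour in (16, 17):
    # Each instance runs one day after the data timestamp.
    run = data_timestamp + timedelta(days=1, hours=hour)
    var1 = data_timestamp.strftime("%Y%m%d")  # $bizdate and ${yyyymmdd}
    var2 = run.strftime("%Y%m%d%H%M%S")       # $cyctime and $[yyyymmddhh24miss]
    resolved.append((var1, var2))

print(resolved)
# [('20251016', '20251017160000'), ('20251016', '20251017170000')]
```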


Verify in Operation Center

  1. Publish the node: click Publish in the upper-right corner of the node editor.

  2. In DataStudio, click Operation Center in the upper-right corner of the top menu bar.

  3. Go to Recurring Task O&M > Recurring Tasks and search for the node. The node appears in the list only after a successful publish.

  4. Click the node name and view the Execution Parameters on the Properties tab. The execution parameters should read: var1=$bizdate var2=$cyctime var3=${yyyymmdd} var4=$[yyyymmddhh24miss]


  5. After a scheduled instance runs, go to Recurring Instance, find the task instance, and check Execution Parameters on the Properties tab to see the resolved values: var1=20251016 var2=20251017160000 var3=20251016 var4=20251017160000

What's next