DataWorks: Configure and use scheduling parameters

Last Updated: Oct 27, 2025

In data workflows, task code such as SQL often needs to change dynamically with the schedule time to process different data partitions. You can use scheduling parameters to avoid changing code manually. Set placeholders in your code. The system then automatically replaces these placeholders with dynamic values, such as the data timestamp and the scheduled runtime. This automates and parameterizes your workflow.

Core configuration process

To use scheduling parameters, define them and assign values in the Scheduling Configurations section. After you test the code, submit it to Operation Center. The system then automatically schedules the task and dynamically replaces the scheduling parameters with their configured values.

| Step | Action | Core Goal |
| --- | --- | --- |
| 1. Define parameters | In the node code, define one or more parameters using the `${param}` format. | Reserves a placeholder for a dynamic value. |
| 2. Configure parameters | In the Scheduling Configurations > Scheduling Parameters panel for the node, assign values to the variables in the code. | Associates the placeholder `${param}` with specific scheduling parameters, such as `$bizdate` or `$[yyyymmdd-1]`. |
| 3. Test | Use the Smoke Testing feature to simulate a specific data timestamp and verify parameter replacement and code execution. | Ensures that the configuration is correct in the development environment. |
| 4. Publish and verify | Submit the node to the production environment, and then confirm the final parameter configuration in Operation Center. | Ensures that the parameters of the production task meet expectations. |

Steps

1. Define parameters

  1. Double-click the target node, such as an ODPS SQL node, to open the node editor.

  2. Define parameters in the code. In the code of an ODPS SQL node or another SQL node, you can use the ${param} syntax to define a parameter name. DataWorks recommends that you use meaningful parameter names for easier reference and management.

    Scheduling parameter call formats:

    | Format type | Call syntax | Scope | Notes |
    | --- | --- | --- | --- |
    | General format | `${ParameterName}` | Most node types, such as ODPS SQL and synchronization nodes. | This is the most common format. |
    | Special format | Varies by node; not in the `${...}` format. | PyODPS and Shell nodes. | For more information, see Examples of scheduling parameter configurations for different node types. |

    -- Example: Define a variable named pt_date for partition filtering
    SELECT * FROM my_table WHERE ds = '${pt_date}'; 
  3. On the right side of the page, click Scheduling Configurations to go to the Scheduling Parameters section.

  4. Configure the scheduling parameters as described in the next section.
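The placeholder mechanism described above can be sketched in a few lines. This is an illustrative model only: DataWorks performs the substitution server-side at scheduling time, and the `render_sql` helper and the one-day offset for `$bizdate` are assumptions for demonstration.

```python
from datetime import date, timedelta

def render_sql(template: str, params: dict) -> str:
    """Replace each ${name} placeholder with its assigned value.
    Illustrative only: DataWorks does this substitution at run time."""
    for name, value in params.items():
        template = template.replace("${" + name + "}", value)
    return template

# $bizdate resolves to the data timestamp, which is one day
# before the scheduled run date.
scheduled_run = date(2025, 10, 17)
bizdate = (scheduled_run - timedelta(days=1)).strftime("%Y%m%d")

sql = "SELECT * FROM my_table WHERE ds = '${pt_date}';"
print(render_sql(sql, {"pt_date": bizdate}))
# SELECT * FROM my_table WHERE ds = '20251016';
```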

2. Configure parameters

You can set scheduling parameters in two ways: Visual Definition and Define By Expression. You can switch between these modes by clicking Define By Expression in the upper-right corner of the parameter list. The default mode is Visual Definition.

  1. Configure parameters

    Visual definition

    Click Scheduling Configurations to the right of the node to open the scheduling parameter configuration interface.

    • Add parameters

      In the Scheduling Parameters section, you can add parameters in either of the following ways.

      1. Click Add Parameter and manually enter the parameter name and value. The parameter name must match the variable name defined in the code.

      2. Click Load Parameters From Code. DataWorks automatically parses variables in the code, such as ${pt_date}, and adds them as parameters. Then, enter a value for each parameter.

    • Assign values to parameters

      You can set built-in system variables, custom time variables, and constants.

      • Click the input box. The drop-down list shows some common parameter expressions that you can select directly. You can also manually enter a custom expression or a built-in system variable.

      • Enter values as needed. For more information about the supported range of parameter values, see Supported formats for scheduling parameters.
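The `$[...]` custom time variables mentioned above combine a date format with an optional day offset. The following is a minimal sketch of that behavior, assuming a simple format-to-strftime mapping; the `custom_time` helper is hypothetical, not a DataWorks API.

```python
from datetime import datetime, timedelta

# Map documented format names to strftime codes (assumed subset).
FORMATS = {
    "yyyymmdd": "%Y%m%d",
    "hh24miss": "%H%M%S",
    "yyyymmddhh24miss": "%Y%m%d%H%M%S",
}

def custom_time(cyctime: datetime, fmt: str, offset_days: int = 0) -> str:
    """Resolve a $[...]-style expression: format the scheduled runtime
    shifted by an optional day offset, e.g. $[yyyymmdd-1]."""
    return (cyctime + timedelta(days=offset_days)).strftime(FORMATS[fmt])

run = datetime(2025, 10, 17, 16, 0, 0)
print(custom_time(run, "yyyymmdd", -1))  # 20251016, like $[yyyymmdd-1]
print(custom_time(run, "hh24miss"))      # 160000, like $[hh24miss]
```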

    Define by expression

    Click Define By Expression to configure parameters using expressions.

    • When you use an expression to define multiple parameters, you must separate them with spaces.

    • When you use the Define By Expression method to add, delete, or modify scheduling parameters, DataWorks validates the expression syntax. You cannot configure the scheduling parameter if the syntax is invalid.

      For example, DataWorks enforces syntax rules such as not allowing spaces on either side of the equal sign.
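The two rules above (space-separated pairs, no spaces around the equal sign) can be captured by a simple check. The following validator is hypothetical and only approximates the checks DataWorks runs.

```python
import re

# name=value with no surrounding spaces; the value is any non-space token.
PAIR = re.compile(r"^[A-Za-z_]\w*=\S+$")

def validate_expression(expr: str) -> bool:
    """Return True if every space-separated token is a well-formed pair."""
    pairs = expr.split()
    return bool(pairs) and all(PAIR.match(p) for p in pairs)

print(validate_expression("var1=$bizdate var2=$[yyyymmdd-1]"))  # True
print(validate_expression("var1 = $bizdate"))                   # False
```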

  2. Parameter preview

    After you define the parameters, click Scheduling Parameter Preview to preview the parameters for the next N instances that run after a specified data timestamp. This helps you verify that the parameter definitions are configured as expected. You can adjust the data timestamp and the number of instances for the preview.
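Conceptually, the preview resolves each parameter for the next N instances starting from the chosen data timestamp. The following is a rough model for a daily task, under the assumption that each run happens one day after its data timestamp.

```python
from datetime import datetime, timedelta

def preview_instances(first_data_ts: datetime, n: int):
    """Resolve $bizdate and $cyctime for the next n daily instances.
    Assumption for illustration: each run happens one day after its
    data timestamp, at the same clock time."""
    rows = []
    for i in range(n):
        data_ts = first_data_ts + timedelta(days=i)
        run_ts = data_ts + timedelta(days=1)
        rows.append({
            "bizdate": data_ts.strftime("%Y%m%d"),       # $bizdate
            "cyctime": run_ts.strftime("%Y%m%d%H%M%S"),  # $cyctime
        })
    return rows

for row in preview_instances(datetime(2025, 10, 16), 2):
    print(row)
```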

Note

Some nodes, such as offline synchronization nodes, have a built-in ${bizdate} parameter. This parameter is automatically assigned the value $bizdate. You can replace the bizdate parameter name in the code with a custom one. The ${bizdate} parameter itself has no special meaning and is the same as any other custom parameter.

3. Smoke testing

After you assign values to the scheduling parameters, use the smoke testing feature to simulate the scheduling scenario for the target node with a configured data timestamp, and verify that code execution and parameter replacement work as expected. If they do not, adjust the settings to prevent issues with normal task scheduling.

Smoke testing generates instances and incurs instance fees. For more information about instance fees, see Billing of serverless resource groups.
  1. Submit the node code.

    1. Configure the schedule time and scheduling dependencies.

    2. Click the save icon to save the code and configuration. Then, click the submit icon. You can use the smoke testing feature in the development environment only after the latest code for the node is submitted to Operation Center.

      Note

      If you find that the smoke test is not running the latest code or parameters, you must submit the node again.

  2. Run a smoke test.

    Click the smoke testing icon in the toolbar. In the Smoke Testing dialog box, select a data timestamp and click OK to run the smoke test.

  3. View smoke test logs.

    1. In the Smoke Test Records window, find the latest record and click View Log.

    2. In the log, check the parameter output to confirm that it meets your expectations.

      Note

      If you accidentally close the window, you can click the smoke test records icon in the toolbar to reopen it.

Important

The Run and Advanced Run features require you to manually assign constants to variables in the code. Therefore, you cannot use these features to verify whether the configured scheduling parameters work as expected.

4. Publish and verify

After verification in the development environment, you can submit and publish the task to Operation Center for production and automatic scheduling. After you publish the task, you must check the scheduling parameters in the production environment to prevent runtime errors.

Note

If the scheduling parameter configuration of the auto triggered task is not as expected, or if you cannot find the target task in Operation Center, you must confirm that the task was published successfully. For more information about how to publish tasks, see Publish a task.

  1. Check parameter definitions.

    Go to Operation Center, switch to the destination region and workspace, and navigate to the Recurring Task O&M > Recurring Tasks page. In the task list, click the task name and verify that the execution parameters in the Properties panel are correct.

  2. Smoke testing in Operation Center.

    In Operation Center, you can also use smoke testing to confirm whether a submitted and published task performs parameter replacement and code execution as expected in the production environment. For more information, see Run a test and view the test instance.

    Important

    Smoke testing in Operation Center executes against production data. Proceed with caution to avoid contaminating the production database.

  3. Observe actual scheduling results.

    After the task is automatically scheduled, you can further verify that the parameters were replaced as required by checking them on the Recurring Instance page.

Complete configuration example

This topic uses an ODPS SQL node as an example to show how to use the smoke testing feature in the development environment to test whether the configured scheduling parameters work as expected. It also shows how to view the scheduling parameter configuration of the task in Operation Center after the node is published.

Note

For more information about how to configure scheduling parameters for different types of nodes, see Examples of scheduling parameter configurations for different node types.

  1. Edit the node code and configure scheduling parameters.

    The following figure shows the code and scheduling parameter configuration of the ODPS SQL node.

    1. Define variables in the code.

      -- Assign built-in system parameters
      SELECT '${var1}';
      SELECT '${var2}';
      -- Assign custom parameters
      SELECT '${var3}';
      SELECT '${var4}';
      -- Assign a constant
      SELECT '${var5}';
    2. Assign values to the variables.

      In the Scheduling Configurations > Scheduling Parameters section, you can assign values to the variables as shown in Area 2. For more information about value formats, see Supported formats for scheduling parameters.

      • var1=$bizdate: the data timestamp in yyyymmdd format.

      • var2=$cyctime: the scheduled runtime of the task in yyyymmddhh24miss format.

      • var3=${yyyymmdd}: the data timestamp in yyyymmdd format.

      • var4=$[yyyymmddhh24miss]: the scheduled runtime of the task in yyyymmddhh24miss format.

      • var5=Hangzhou: sets the value of var5 to the constant Hangzhou.
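For a given scheduled runtime, the five assignments above resolve deterministically. The following sketch shows the expected substitutions; the `resolve` helper is illustrative, and the one-day gap between data timestamp and runtime follows the definitions above.

```python
from datetime import datetime, timedelta

def resolve(cyctime: datetime) -> dict:
    """Resolve the five example assignments for one scheduled run."""
    bizdate = cyctime - timedelta(days=1)  # data timestamp: one day earlier
    return {
        "var1": bizdate.strftime("%Y%m%d"),        # $bizdate
        "var2": cyctime.strftime("%Y%m%d%H%M%S"),  # $cyctime
        "var3": bizdate.strftime("%Y%m%d"),        # ${yyyymmdd}
        "var4": cyctime.strftime("%Y%m%d%H%M%S"),  # $[yyyymmddhh24miss]
        "var5": "Hangzhou",                        # constant
    }

# The 16:00 instance on 2025-10-17:
print(resolve(datetime(2025, 10, 17, 16, 0, 0)))
```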

    3. Optional: Configure the schedule time.

      Configure the ODPS SQL node to be scheduled hourly (as shown in Area 3).

      Note

      You can configure the time period as needed. This example adds a time period with the following settings:

      • Start time is 16:00.

      • End time is 23:59.

      • Interval is 1 hour.

      For more information about time period configuration, see Time property configuration instructions.

    4. Set scheduling dependencies.

      Configure scheduling dependencies for the development node. For more information, see Configure scheduling dependencies. In this example, the root node is used as the upstream dependency for this node.

  2. In the toolbar at the top of the node editor, click the save icon and then the submit icon to save and submit the configuration of the ODPS SQL node.

  3. Run a smoke test in the development environment.

    1. Click the smoke testing icon. In the Smoke Testing dialog box, you can configure the business time to simulate the scheduling period for the node.

      The business time is configured as follows:

      • Data Timestamp: 2025-10-16

      • Start Time: 16:00

      • End Time: 17:00

      The ODPS SQL task is an hourly scheduled task. Two instances are generated for the task at 16:00 and 17:00 on 2025-10-17.

      Note

      Because the data timestamp is one day before the runtime, the actual runtime of the task is 2025-10-17.

      The expected values for the 16:00 node are as follows:

      • var1=20251016.

      • var2=20251017160000.

      • var3=20251016.

      • var4=20251017160000.

      The expected values for the 17:00 node are as follows:

      • var1=20251016.

      • var2=20251017170000.

      • var3=20251016.

      • var4=20251017170000.

    2. Click OK. The node is scheduled to run at the specified time.

    3. After the run ends, click the smoke test logs icon to view the smoke test logs.

      The two instances that are generated by the node run successfully, and the node's execution result meets expectations.

  4. If your workspace is in standard mode, you need to publish the node to the production environment. On the ODPS SQL node editor page, click Publish in the upper-right corner of the top menu bar. For more information about publishing a node, see Publish a node.

  5. Go to Operation Center to confirm the scheduling parameter configuration of the node.

    1. On the top menu bar of DataStudio, click Operation Center in the upper-right corner to open the Operation Center page.

    2. On the Recurring Task O&M > Recurring Tasks page, you can search for the target node.

      Note

      You can search for the node on the Recurring Tasks page only after it is successfully published.

    3. Click the target node name and view the Execution Parameters on the Properties tab.

      In this example, the execution parameters of the node are var1=$bizdate var2=$cyctime var3=${yyyymmdd} var4=$[yyyymmddhh24miss], which is as expected.

    4. After a scheduled instance is generated, click the Recurring Instance menu, search for the task name, and click the task instance name. On the Properties tab, you can view the replaced parameters under Execution Parameters.

      In this example, the execution parameters of the node are var1=20251016 var2=20251017160000 var3=20251016 var4=20251017160000, which is as expected.

References