
DataWorks:Configure and use scheduling parameters

Last Updated:Nov 14, 2025

In data workflows, task code, such as SQL, often needs to change dynamically to process data partitions for different dates based on the scheduling time. To avoid changing the code manually, you can use scheduling parameters. You can set placeholders in your code. When a task is scheduled, the system automatically replaces these placeholders with dynamic values, such as the data timestamp and scheduled runtime. This process automates your workflow and enables it to run with parameters.
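Conceptually, the replacement works like simple template substitution. The following Python sketch is a hypothetical illustration of the idea, not DataWorks internals: it shows how a scheduler might swap ${param} placeholders for resolved values before running the SQL.

```python
import re

def render(code: str, params: dict) -> str:
    """Replace each ${name} placeholder in task code with its
    resolved value, as a scheduler would before execution."""
    return re.sub(r"\$\{(\w+)\}", lambda m: params[m.group(1)], code)

sql = "SELECT * FROM my_table WHERE ds = '${pt_date}';"
print(render(sql, {"pt_date": "20251016"}))
# SELECT * FROM my_table WHERE ds = '20251016';
```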

Core configuration flow

To use scheduling parameters, you need to define them and assign their values in the Scheduling Settings. After you test the code in Data Development and confirm that it is correct, submit the code containing the scheduling parameters to the Operation Center. The system then automatically schedules the task and dynamically replaces the scheduling parameter values based on the assignment logic.

| Step | Action | Core objective |
| --- | --- | --- |
| 1. Define parameters | In the node code, use the ${param} format to define one or more parameters. | Reserve a placeholder for a dynamic value. |
| 2. Configure parameters | On the Scheduling Settings > Scheduling Parameters tab for the node, assign values to the variables in the code. | Associate the ${param} placeholder with specific scheduling parameters, such as $bizdate and $[yyyymmdd-1]. |
| 3. Test | Use the Smoke Testing feature to simulate a specific data timestamp and verify that the parameter replacement and code execution are correct. | Ensure that the configuration is correct in the development environment. |
| 4. Publish and verify | In the Operation Center, submit the node to the production environment and confirm the final parameter settings. | Ensure that the parameters for the online task are as expected. |
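The overview distinguishes the data-timestamp parameter $bizdate from cyctime-based expressions such as $[yyyymmdd-1]. Assuming the day-before semantics described later in this topic, both resolve to the same yyyymmdd string for a given run; a minimal Python sketch of that arithmetic:

```python
from datetime import datetime, timedelta

def bizdate(scheduled: datetime) -> str:
    """$bizdate: the data timestamp, i.e. the day before the
    scheduled run date, in yyyymmdd format."""
    return (scheduled - timedelta(days=1)).strftime("%Y%m%d")

def yyyymmdd_minus(scheduled: datetime, days: int) -> str:
    """$[yyyymmdd-N]: the scheduled runtime shifted back N days,
    in yyyymmdd format."""
    return (scheduled - timedelta(days=days)).strftime("%Y%m%d")

run = datetime(2025, 10, 17, 16, 0, 0)
print(bizdate(run))            # 20251016
print(yyyymmdd_minus(run, 1))  # 20251016
```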

Steps

1. Define parameters


  1. Double-click the target node, such as a MaxCompute SQL node, to go to the node editor.

  2. Define parameters in the code. In a MaxCompute SQL node, use the ${param} syntax to define a parameter name. We recommend that you use meaningful parameter names for easy reference and management.

    Scheduling parameter call format:

    | Format type | Call syntax | Scope | Notes |
    | --- | --- | --- | --- |
    | General format | ${ParameterName} | Most node types, such as MaxCompute SQL nodes and synchronization nodes. | This is the most common format. |
    | Special format | Varies by node and does not use the ${...} format. | PyODPS and Shell nodes. | For more information, see Examples of scheduling parameter settings for different node types. |

    -- Example: Define a variable named pt_date for partition filtering
    SELECT * FROM my_table WHERE ds = '${pt_date}'; 
  3. In the right pane, click Scheduling Settings to configure the Scheduling Parameters.

  4. Configure the scheduling parameters. The configuration methods are described in the following sections.

2. Configure parameters

You can set scheduling parameters using one of two methods: Define With Table and Define With Expression. You can switch between these methods by clicking Define With Expression in the upper-right corner of the parameter list. The default method is Define With Table.

  1. Configure parameters

    Define with table

    In the right pane of the node, click Scheduling Settings to open the scheduling parameter configuration interface.


    • Add parameters

      In the Scheduling Parameters section, you can add parameters in the following ways.

      1. Click Add Parameter and manually enter the parameter name and value. The parameter name must match the variable name defined in the code.

      2. Click Load Parameters From Code. DataWorks automatically parses the variables in the code, such as ${pt_date}, and backfills the parameters. You only need to enter the parameter values.

    • Assign parameter values

      You can assign values from system built-in parameters, custom time parameters, workspace parameters, output parameters of ancestor nodes, or constants.

      • Click the input box. The drop-down list displays common parameter expressions that you can select. You can also manually enter custom expressions or system built-in variables.

      • You can enter values as needed. For more information about the supported range of parameter values, see Supported formats for scheduling parameters.

      • You can also quickly bind context parameters when you assign values to scheduling parameters. Click the icon next to a parameter. In the Bind Output Parameters Of Ancestor Nodes dialog box, select a node that has configured output parameters to quickly bind the dependency. For more information about output parameters, see Configure and use node context parameters.
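Load Parameters From Code amounts to scanning the node code for ${...} tokens. The following Python sketch illustrates that parsing step; it is an illustration of the idea, not the actual DataWorks parser.

```python
import re

def load_parameters_from_code(code: str) -> list:
    """Collect the distinct ${...} variable names found in node code,
    in order of first appearance, the way Load Parameters From Code
    backfills the parameter list."""
    seen = []
    for name in re.findall(r"\$\{(\w+)\}", code):
        if name not in seen:
            seen.append(name)
    return seen

sql = """
SELECT * FROM my_table
WHERE ds = '${pt_date}' AND region = '${region}' AND ds2 = '${pt_date}';
"""
print(load_parameters_from_code(sql))  # ['pt_date', 'region']
```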

    Define with expression

    Click Define With Expression to configure parameters using expressions.


    • When you define parameters using expressions, separate multiple parameters with spaces.

    • When you add, delete, or modify scheduling parameters using the Define With Expression method, DataWorks validates the syntax of the expression. If the syntax is invalid, you cannot save the scheduling parameters.

      For example, DataWorks checks for syntax rules, such as no spaces are allowed on either side of the equal sign (=).
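That validation can be pictured as splitting the expression on spaces and requiring each token to be a name=value pair with no whitespace around the equal sign. A hypothetical Python sketch follows; the actual rules that DataWorks enforces may be broader.

```python
import re

# One token per parameter: name=value, with no spaces around "=".
_PARAM = re.compile(r"^\w+=\S+$")

def validate_expression(expr: str) -> bool:
    """Check that an expression defines parameters as name=value
    tokens separated by spaces, so 'var1 = $bizdate' is rejected
    because of the spaces around the equal sign."""
    tokens = expr.split()
    return bool(tokens) and all(_PARAM.match(t) for t in tokens)

print(validate_expression("var1=$bizdate var2=$cyctime"))  # True
print(validate_expression("var1 = $bizdate"))              # False
```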

  2. Preview parameters

    After you define the parameters, click Preview Scheduling Parameters to preview the parameters for a specified number of instances at a given data timestamp. This helps you verify that the parameter definitions meet your expectations. You can adjust the data timestamp and the number of instances for the preview.


Note

Some nodes, such as offline synchronization nodes, have a built-in ${bizdate} parameter. This parameter name is automatically assigned the value $bizdate. You can replace the bizdate parameter name in the code with a custom parameter name. The ${bizdate} parameter itself has no special meaning and is the same as other custom parameters.


3. Smoke testing

After you assign values to the scheduling parameters, you can run a smoke test during the publishing process. Configure a data timestamp to simulate the scheduling scenario for the target task. This verifies that the parameter replacement and code execution are as expected in that scenario. If they are not, you can make adjustments promptly to avoid affecting the normal scheduling of the task.

  1. Publish the node code.


    1. Configure the scheduling time and scheduling dependencies.

    2. Click Save to save the code and configuration. Then, click the Publish button in the toolbar. You can start a smoke test during the publishing process.

      Note

      If the code or parameters used in the smoke test are not the latest version, click Republish To Production.

  2. Run the smoke test.

    Click Start Smoke Test. In the Smoke Test dialog box, select a data timestamp and click OK.

  3. View the smoke test log.

    1. Click the Smoke Test Records button in the menu on the left. In the list of smoke test records, click Log.


    2. In the log, check the parameter output to confirm that it is as expected.

4. Publish and verify

After verification in the development environment, submit and publish the task to Operation Center for production scheduling. After the task is published, check the scheduling parameters in the production environment to prevent runtime errors.

Note

If the scheduling parameter settings for an auto triggered task are not as expected, or if you cannot find the target task in Operation Center, confirm that the task was published successfully. For more information about how to publish a task, see Publish a task.

  1. Check the parameter definitions.

    Go to the Operation Center. Switch to the destination region and workspace. Go to the Auto Triggered Task O&M > Auto Triggered Tasks page. In the task list, click a task name to view the Properties panel and verify the runtime parameters.


  2. Run a smoke test in Operation Center.

    You can also run a smoke test in Operation Center to confirm that a published task performs parameter replacement and code execution as expected in the production environment. For more information, see Run a test and view the test instance.

    Important

    A smoke test runs on real data in the production environment. Run it with caution to avoid corrupting data in the production database.


  3. Observe the actual scheduling results.

    After the task is automatically scheduled, you can check the parameters in the recurring instances to further verify that they are replaced correctly.


Complete configuration example

This topic uses a MaxCompute SQL node as an example. It shows how to use the smoke testing feature in the development environment to test whether the configured scheduling parameters are as expected. It also shows how to view the scheduling parameter settings for the task in Operation Center after the task is published.

Note

For more information about scheduling parameter settings for different node types, see Examples of scheduling parameter settings for different node types.

  1. Edit the node code and configure the scheduling parameters.

    The following figure shows the code and scheduling parameter settings for the MaxCompute SQL node.

    1. Define variables in the code.

      -- Variables to be assigned system built-in parameters
      SELECT '${var1}';
      SELECT '${var2}';
      -- Variables to be assigned custom parameters
      SELECT '${var3}';
      SELECT '${var4}';
      -- Variable to be assigned a constant
      SELECT '${var5}';
    2. Assign values to the variables.

      In the Scheduling Settings > Scheduling Parameters section, assign values to the variables as shown in Area 2. For more information about value assignment formats, see Supported formats for scheduling parameters.

      • var1=$bizdate, which is the data timestamp in yyyymmdd format.

      • var2=$cyctime, which is the scheduled runtime of the task in yyyymmddhh24miss format.

      • var3=${yyyymmdd}, which is the data timestamp in yyyymmdd format.

      • var4=$[yyyymmddhh24miss], which is the scheduled runtime of the task in yyyymmddhh24miss format.

      • var5=Hangzhou, which sets the value of var5 to the constant Hangzhou.
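Under the semantics above, and assuming an instance scheduled for 2025-10-17 16:00:00, the five variables resolve as follows. This is a Python sketch of the value-assignment logic, not DataWorks code.

```python
from datetime import datetime, timedelta

def resolve(scheduled: datetime) -> dict:
    """Resolve the five example assignments for one scheduled run:
    $bizdate and ${yyyymmdd} track the data timestamp (one day before
    the run date); $cyctime and $[yyyymmddhh24miss] track the
    scheduled runtime itself."""
    data_ts = (scheduled - timedelta(days=1)).strftime("%Y%m%d")
    cyc = scheduled.strftime("%Y%m%d%H%M%S")
    return {
        "var1": data_ts,     # $bizdate
        "var2": cyc,         # $cyctime
        "var3": data_ts,     # ${yyyymmdd}
        "var4": cyc,         # $[yyyymmddhh24miss]
        "var5": "Hangzhou",  # constant
    }

print(resolve(datetime(2025, 10, 17, 16, 0, 0)))
```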

    3. Optional: Configure the scheduling time.

      Configure the scheduling period for the MaxCompute SQL node to be hourly, as shown in Area 3.

      Note

      You can configure the time period as needed. This topic uses hourly scheduling as an example.

      • Scheduling start time is 16:00

      • Scheduling end time is 23:59

      • Scheduling interval is 1 hour.

      For more information about time period settings, see Time property configuration.

    4. Set scheduling dependencies.

      Configure scheduling dependencies for the node. For more information, see Configure scheduling dependencies. This example uses the root node as the ancestor node for this node.

  2. In the toolbar at the top of the node editor page, click the Save button and publish the settings for the MaxCompute SQL node.

  3. Start a smoke test during the publishing process.

    1. Click the Start Smoke Test button. In the Smoke Test dialog box, configure the business time to simulate the node's scheduling period.

      The business time is configured as follows:

      • Data timestamp: 2025-10-16

      • Start time: 16:00

      • End time: 17:00

      If a MaxCompute SQL node is scheduled to run hourly, it generates two instances that run at 16:00 and 17:00 on 2025-10-17 for the data timestamp 2025-10-16.

      Note

      Because the data timestamp is the day before the run date, the actual run date of the task is 2025-10-17.

      The expected values for the 16:00 instance are as follows:

      • var1=20251016.

      • var2=20251017160000.

      • var3=20251016.

      • var4=20251017160000.

      The expected values for the 17:00 instance are as follows:

      • var1=20251016.

      • var2=20251017170000.

      • var3=20251016.

      • var4=20251017170000.
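These expected values follow from the data timestamp staying fixed at 2025-10-16 while the cyctime advances with each hourly instance. A quick Python check under that assumption:

```python
from datetime import datetime, timedelta

def instance_params(run_time: datetime) -> dict:
    """Parameter values for one hourly instance: var1/var3 come from
    the data timestamp (run date - 1 day), var2/var4 from the
    scheduled runtime."""
    data_ts = (run_time - timedelta(days=1)).strftime("%Y%m%d")
    cyc = run_time.strftime("%Y%m%d%H%M%S")
    return {"var1": data_ts, "var2": cyc, "var3": data_ts, "var4": cyc}

# The two instances generated for the smoke test.
for hour in (16, 17):
    print(instance_params(datetime(2025, 10, 17, hour, 0, 0)))
```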

    2. Click OK. The node starts running at the specified time.

    3. After the runtime ends, click the Smoke Test Records button in the menu on the left to view the smoke test log.

      The two instances that are generated by the node run successfully, and the node's run result is as expected.


  4. If the current workspace is in standard mode, you must publish the node to the production environment. On the MaxCompute SQL node editor page, click Publish in the upper-right corner of the menu bar. For more information about publishing a node, see Publish a node.

  5. Go to Operation Center to confirm the node's scheduling parameter settings.

    Parameter configuration in the production environment

    1. In the upper-right corner of the DataStudio menu bar, click Operation Center to open the Operation Center page.

    2. On the Auto Triggered Task O&M > Auto Triggered Tasks page, search for the target node.

      Note

      You can search for the node on the Auto Triggered Tasks page only after it is published successfully.

    3. Click the target node name. In the Properties panel, you can view the Runtime Parameters.

      In this example, the node's runtime parameters are var1=$bizdate var2=$cyctime var3=${yyyymmdd} var4=$[yyyymmddhh24miss], which is as expected.

    4. You can view the parameter replacement in an instance by viewing its runtime parameters. After a scheduled instance is generated, click the Recurring Instances menu, search for the task name, and click the task instance name. In the Properties panel, view the Runtime Parameters.

      In this example, the node's runtime parameters are var1=20251016 var2=20251017160000 var3=20251016 var4=20251017160000, which is as expected.
