
SchedulerX: Advanced parameters for job management

Last Updated: Mar 11, 2026

Advanced parameters control retry behavior, concurrency limits, execution history retention, and subtask distribution for SchedulerX jobs. Default values work for most workloads. Tune these parameters when:

  • Jobs fail due to transient errors such as worker restarts and need automatic recovery.

  • Distributed tasks require higher throughput or tighter concurrency control.

  • Worker resources constrain how many subtasks run in parallel.

SchedulerX organizes advanced parameters into two categories:

  • General parameters apply to all execution modes. They control job-level retry, concurrency, and history cleanup.

  • Distributed model parameters apply only to Visual MapReduce, MapReduce, and Shard run modes. They control per-worker concurrency, subtask retry and failover, and task distribution strategy.

General parameters

General parameters apply to every execution mode. Use them to configure automatic retry after job failures, limit concurrent execution, and manage execution history.

| Parameter | Description | Default value |
| --- | --- | --- |
| Task failure retry count | Number of automatic retries when a job fails. If a job is running on a worker and the worker restarts, the job fails. Set this parameter to rerun the job immediately after such failures. A value of 0 means no automatic retry. | 0 |
| Task failure retry interval | Seconds to wait between consecutive retries. | 30 |
| Task concurrency | Maximum number of instances of the same job that can run at the same time. Set to 1 to prevent concurrent execution. | 1 |
| Cleaning strategy | Cleanup policy for job execution history. | Keep last N entries |
| Retained Number | Number of job execution records to keep. | 300 |

Note

Retry applies to job-level failures such as a worker restart during execution.
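The retry semantics above can be sketched as a simple loop: run the job, and on failure retry up to the configured count, sleeping for the configured interval between attempts. This is an illustrative Python sketch of the behavior, not the SchedulerX SDK; the function and parameter names are hypothetical.

```python
import time

def run_with_retry(job, retry_count=0, retry_interval=30, sleep=time.sleep):
    """Run `job`; on failure, retry up to `retry_count` times,
    waiting `retry_interval` seconds between attempts.
    A retry_count of 0 means no automatic retry (the default)."""
    attempts = 0
    while True:
        try:
            return job()
        except Exception:
            if attempts >= retry_count:
                raise  # retries exhausted: surface the failure
            attempts += 1
            sleep(retry_interval)
```

With the defaults (`retry_count=0`), a single failure is final; setting `retry_count=3` and `retry_interval=30` gives the job up to three more chances, half a minute apart.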

Distributed model parameters

Distributed model parameters apply to Visual MapReduce, MapReduce, and Shard run execution modes. In these modes, a master node splits a job into subtasks and distributes them across worker nodes.

Understanding the task lifecycle helps clarify how these parameters interact:

  1. Distribution phase -- The master node generates subtasks and routes them to workers (controlled by distribution parameters).

  2. Execution phase -- Workers run assigned subtasks (controlled by concurrency and retry parameters).

  3. Failover phase -- If a worker stops, the system can redistribute its subtasks to other workers (controlled by the failover parameter).
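The three phases above can be modeled with a toy round-robin distributor and a failover step that reassigns a stopped worker's subtasks to the survivors. This is a minimal sketch of the lifecycle, with hypothetical function names; it is not how SchedulerX is implemented internally.

```python
def distribute(subtasks, workers):
    """Distribution phase (push model): route subtasks to workers
    round-robin, so each worker gets roughly the same number."""
    assignment = {w: [] for w in workers}
    for i, task in enumerate(subtasks):
        assignment[workers[i % len(workers)]].append(task)
    return assignment

def failover(assignment, dead_worker):
    """Failover phase: redistribute the stopped worker's subtasks
    across the remaining workers, again round-robin."""
    orphaned = assignment.pop(dead_worker)
    survivors = list(assignment)
    for i, task in enumerate(orphaned):
        assignment[survivors[i % len(survivors)]].append(task)
    return assignment
```

Note that a subtask interrupted mid-execution may run again on its new worker, which is why the failover parameter below calls for idempotent task logic.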

Subtask concurrency and retry

These parameters control how many subtasks run in parallel on each worker and how failed subtasks are retried.

| Parameter | Description | Default value |
| --- | --- | --- |
| Number of single-machine concurrent subtasks | Number of subtasks that run concurrently on a single worker. Increase this value to speed up execution when workers have spare capacity. Decrease it if downstream systems or databases cannot handle the load. | 5 |
| Number of failed retries of subtasks | Number of automatic retries when a subtask fails. | 0 |
| Sub-task failure retry interval | Seconds to wait between consecutive subtask retries. | 0 |
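Per-worker subtask concurrency is a bounded worker pool: no more than the configured number of subtasks execute at once on one machine. A minimal Python sketch of that behavior (illustrative only; `run_subtasks` and `worker_fn` are hypothetical names, not SchedulerX APIs):

```python
from concurrent.futures import ThreadPoolExecutor

def run_subtasks(subtasks, worker_fn, concurrency=5):
    """Run subtasks on this worker with at most `concurrency`
    executing at a time, mirroring 'Number of single-machine
    concurrent subtasks'. Results come back in submission order."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        return list(pool.map(worker_fn, subtasks))
```

Raising `concurrency` trades worker CPU and downstream load for throughput; lowering it protects databases or APIs that the subtasks call.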

Failover and master node behavior

These parameters control how the system handles worker failures and whether the master node also runs subtasks.

| Parameter | Description | Default value |
| --- | --- | --- |
| Subtask Failover Strategy | When enabled, redistributes a failed subtask to a different worker after the original worker stops. Because a subtask may run more than once during failover, implement idempotence in your task logic to handle duplicate executions. Requires agent V1.8.13 or later. | -- |
| The master node participates in the execution | When enabled, the master node runs subtasks in addition to coordinating distribution. At least two workers must be available. Disable this option when the job generates a very large number of subtasks so that the master node can focus on coordination. Requires agent V1.8.13 or later. | -- |
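The idempotence requirement for failover can be met by recording completed subtask IDs in a durable store and skipping work that has already been done. A minimal sketch, assuming a dict stands in for that store (in practice you would use a database unique key or similar); the names here are hypothetical:

```python
processed = {}  # stand-in for a durable store keyed by subtask ID

def process_idempotent(subtask_id, do_work):
    """Run `do_work` at most once per subtask ID, so a failover
    re-execution of the same subtask is harmless."""
    if subtask_id in processed:
        return processed[subtask_id]   # duplicate execution: skip
    result = do_work()
    processed[subtask_id] = result     # record completion before ack
    return result
```

With this pattern, enabling the failover strategy cannot double-apply a subtask's side effects even when it runs on two workers in succession.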

Task distribution

SchedulerX supports two subtask distribution methods: push and pull. The method you choose determines how subtasks flow from the master node to worker nodes, and affects which additional parameters are available.

Distribution method and policy

| Parameter | Description | Default value |
| --- | --- | --- |
| Subtask distribution method | Controls how subtasks reach worker nodes. Push model: The master node distributes subtasks evenly to workers. Pull model: Each worker pulls subtasks from the master node. In the pull model, all pending subtasks are cached on the master node, which increases memory pressure. Do not exceed 10,000 subtasks per batch in pull mode to avoid memory bottlenecks. | Push model |
| Distribution policy | Task distribution policy. Available only when the distribution method is Push model. Polling Scheme: Distributes an equal number of subtasks to each worker. Best when each worker processes subtasks in roughly the same amount of time. WorkerLoad optimal strategy: The master node detects worker loads and routes subtasks to less-loaded workers. Best when processing time varies significantly between workers. Applies to Visual MapReduce and MapReduce only. Requires agent V1.10.14 or later. | Polling Scheme |
| Distribution rate | Rate at which the master node distributes subtasks, measured in subtasks per second or per minute. Applies to Visual MapReduce and MapReduce only. | -- |
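The difference between the two push policies comes down to the routing decision per subtask: polling rotates through workers regardless of load, while load-optimal routing picks the least-loaded worker at that moment. A toy sketch of the load-based choice (illustrative only; the load map is a hypothetical in-flight subtask count per worker):

```python
def route_by_load(worker_loads):
    """WorkerLoad-style routing sketch: send the next subtask to the
    worker with the fewest in-flight subtasks, then bump its count."""
    target = min(worker_loads, key=worker_loads.get)
    worker_loads[target] += 1
    return target
```

When all workers process subtasks at about the same speed, the two policies converge and the simpler Polling Scheme is sufficient; load-based routing pays off when processing times are skewed.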

Pull model parameters

The following parameters are available only when Subtask distribution method is set to Pull model.

| Parameter | Description | Default value |
| --- | --- | --- |
| Number of subtasks pulled per time | Number of subtasks a worker node pulls from the master node in each request. | 5 |
| Subtask queue capacity | Size of the local queue that caches subtasks on each worker node. | 10 |
| Global concurrency of subtasks | Total number of subtasks that can run concurrently across all workers. Use this parameter to cap cluster-wide concurrency in pull mode. | 1000 |
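The first two pull-model parameters interact: each pull request fetches a batch of subtasks, which are buffered in a bounded local queue on the worker; when the queue is full, pulling blocks until subtasks drain. A minimal Python sketch of that worker-side loop, where `master_fetch(n)` is a hypothetical stand-in for the pull RPC (not a real SchedulerX call):

```python
import queue

def pull_batch(master_fetch, local_queue, batch_size=5):
    """Fetch up to `batch_size` subtasks from the master and buffer
    them in the worker's bounded local queue ('Subtask queue capacity').
    Returns the number of subtasks actually pulled."""
    batch = master_fetch(batch_size)
    for task in batch:
        local_queue.put(task)   # blocks when the local queue is full
    return len(batch)
```

Sizing the batch close to the queue capacity keeps workers busy without hoarding subtasks; the cluster-wide cap then comes from Global concurrency of subtasks rather than any single worker's queue.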