SchedulerX runs Java jobs inside your application processes. You implement a processor class, configure a schedule, and SchedulerX handles triggering, distribution, and lifecycle management across your worker nodes.
Execution modes
Choose an execution mode based on how many workers should run the job and whether you need parallel task processing.
| Mode | Workers | Processor class | Use case |
|---|---|---|---|
| Standalone operation | One random worker from the groupId group | JavaProcessor | Single-node jobs: report generation, database cleanup, scheduled emails |
| Broadcast run | All workers in the groupId group simultaneously; waits for every worker to finish | JavaProcessor | Operations that must run on every node: cache refresh, configuration sync, local file cleanup |
| Visual MapReduce | Distributed across workers. Requires Professional Edition. Maximum 1,000 tasks. | MapJobProcessor | Smaller batch jobs where you need to query detailed execution records, operational logs, and stacks of tasks by keyword |
| MapReduce | Distributed across workers. Only running information is queryable. | MapJobProcessor | Large-scale parallel processing. Recommended when the number of tasks is less than 1,000,000. |
| Shard run | Split across static and dynamic shards | MapJobProcessor | High-volume data processing that benefits from shard-based parallelism |
Set the processor class path
Set the processor class path to the fully qualified class name of your implementation:
```java
com.apache.armon.test.schedulerx.processor.MySimpleJob
```

SchedulerX resolves the processor class differently depending on whether you upload a JAR package:
| Deployment method | Behavior | When to republish |
|---|---|---|
| Without a JAR package | SchedulerX searches the classpath of your application process | Recompile and republish each time you modify the job |
| With a JAR package | SchedulerX dynamically loads the processor from the uploaded JAR | No republish needed after job changes |
Implement a JavaProcessor
Use JavaProcessor for standalone operation and broadcast run modes. Override the process method and optionally implement lifecycle hooks.
| Method | Required | Description |
|---|---|---|
| `process(JobContext context) throws Exception` | Yes | Main job logic. Returns a ProcessResult to indicate success or failure. |
| `preProcess(JobContext context) throws Exception` | No | Runs before process. Use this to initialize or reset state. |
| `postProcess(JobContext context)` | No | Runs after process completes. Use this for cleanup. |
| `kill(JobContext context)` | No | Called when a running job is manually stopped. Set a flag to interrupt your processing loop. |
If your job runs as a Spring Singleton bean, you must reset mutable state in preProcess. Otherwise, a flag set by a previous kill call prevents the next execution from running. See the terminable job example below.
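The stale-flag pitfall can be reproduced with plain Java, independent of SchedulerX. The sketch below (class and method names are hypothetical stand-ins, not SchedulerX APIs) mimics a singleton processor instance reused across runs:

```java
public class StopFlagDemo {
    // Mimics a Spring singleton processor: one instance, reused across runs.
    private volatile boolean stop = false;

    // Stand-in for process(): counts loop iterations until stopped.
    int run() {
        int n = 0;
        while (!stop && n < 5) {
            n++;
        }
        return n;
    }

    void kill() { stop = true; }      // stand-in for kill()
    void reset() { stop = false; }    // stand-in for preProcess()

    public static void main(String[] args) {
        StopFlagDemo job = new StopFlagDemo();
        System.out.println(job.run()); // 5: first run completes
        job.kill();
        System.out.println(job.run()); // 0: the stale flag blocks the next run
        job.reset();
        System.out.println(job.run()); // 5: resetting in preProcess restores it
    }
}
```

Without the `reset()` call between runs, the second invocation exits immediately, which is exactly what happens to a Singleton bean that never clears its flag.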
Basic job
A minimal JavaProcessor that logs a message and returns success:
```java
@Component
public class MyProcessor1 extends JavaProcessor {
    @Override
    public ProcessResult process(JobContext context) throws Exception {
        System.out.println("Hello, schedulerx2.0!");
        return new ProcessResult(true);
    }
}
```

Terminable job
A JavaProcessor that supports graceful shutdown through the kill method. When SchedulerX calls kill, the stop flag breaks the processing loop.
```java
@Component
public class MyProcessor2 extends JavaProcessor {
    private volatile boolean stop = false;

    @Override
    public ProcessResult process(JobContext context) throws Exception {
        int n = 10000;
        while (!stop && n >= 0) {
            // TODO: process one unit of work here.
            n--;
        }
        return new ProcessResult(true);
    }

    @Override
    public void kill(JobContext context) {
        stop = true;
    }

    @Override
    public void preProcess(JobContext context) {
        // Reset the flag for Singleton beans -- without this,
        // a previous kill signal would block the next run.
        stop = false;
    }
}
```

Implement a MapJobProcessor
Use MapJobProcessor for Visual MapReduce, MapReduce, and shard run modes. In addition to process, implement the map method to distribute tasks across workers.
| Method | Required | Description |
|---|---|---|
| `process(JobContext context) throws Exception` | Yes | Handles both root-level dispatch and task processing. Call `isRootTask(context)` to determine which role the current invocation plays. |
| `map(List<? extends Object> taskList, String taskName)` | Yes | Distributes a list of tasks to workers. Call this from process when handling the root task. |
| `postProcess(JobContext context)` | No | Runs after all tasks finish. Use this for aggregation or final cleanup. |
| `kill(JobContext context)` | No | Called when the job is stopped. |
Distributed batch processing with the Map model
The following MapJobProcessor scans a database table in parallel:
1. The root task queries the min and max IDs from the target table.
2. It splits the ID range into pages and calls `map(taskList, "Level1Dispatch")` to distribute them across workers.
3. Each worker receives a `PageTask` with a specific ID range to process.
4. After every page task finishes, SchedulerX calls `postProcess` for final aggregation or cleanup.
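The range-splitting arithmetic in step 2 can be sketched in isolation as plain Java (the bounds and page count below are hypothetical illustration values):

```java
import java.util.ArrayList;
import java.util.List;

public class RangeSplit {
    // Split [minId, maxId) into roughly `pageSize` sub-ranges of `step` IDs each,
    // mirroring the root-task logic in the processor below.
    static List<int[]> split(int minId, int maxId, int pageSize) {
        List<int[]> ranges = new ArrayList<>();
        // Guard against a zero step when the ID range is smaller than pageSize.
        int step = Math.max(1, (maxId - minId) / pageSize);
        for (int i = minId; i < maxId; i += step) {
            ranges.add(new int[]{i, Math.min(i + step, maxId)});
        }
        return ranges;
    }

    public static void main(String[] args) {
        List<int[]> ranges = split(0, 10_000, 100);
        System.out.println(ranges.size());    // 100 sub-ranges
        System.out.println(ranges.get(0)[1]); // first sub-range ends at 100
    }
}
```

Each resulting `[startId, endId)` pair becomes one `PageTask` handed to `map`.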
```java
import com.google.common.collect.Lists; // Guava, for Lists.newArrayList()

@Component
public class ScanSingleTableJobProcessor extends MapJobProcessor {
    private static final int PAGE_SIZE = 100;

    static class PageTask {
        private final int startId;
        private final int endId;

        public PageTask(int startId, int endId) {
            this.startId = startId;
            this.endId = endId;
        }

        public int getStartId() {
            return startId;
        }

        public int getEndId() {
            return endId;
        }
    }

    @Override
    public ProcessResult process(JobContext context) {
        String taskName = context.getTaskName();
        Object task = context.getTask();
        if (isRootTask(context)) {
            System.out.println("start root task");
            Pair<Integer, Integer> idPair = queryMinAndMaxId();
            int minId = idPair.getFirst();
            int maxId = idPair.getSecond();
            List<PageTask> taskList = Lists.newArrayList();
            // Split [minId, maxId) into about PAGE_SIZE sub-ranges of `step` IDs each.
            // Math.max guards against a zero step (and an infinite loop) when the
            // ID range is smaller than PAGE_SIZE.
            int step = Math.max(1, (maxId - minId) / PAGE_SIZE);
            for (int i = minId; i < maxId; i += step) {
                taskList.add(new PageTask(i, Math.min(i + step, maxId)));
            }
            return map(taskList, "Level1Dispatch"); // Distribute the tasks to workers.
        } else if (taskName.equals("Level1Dispatch")) {
            PageTask record = (PageTask) task;
            int startId = record.getStartId();
            int endId = record.getEndId();
            // TODO: process the rows whose IDs fall in [startId, endId).
            return new ProcessResult(true);
        }
        return new ProcessResult(true);
    }

    @Override
    public void postProcess(JobContext context) {
        // TODO: aggregate results or clean up here.
        System.out.println("all tasks are finished.");
    }

    private Pair<Integer, Integer> queryMinAndMaxId() {
        // TODO: select min(id), max(id) from the target table.
        return null;
    }
}
```

ProcessResult reference
Every process method must return a ProcessResult that tells SchedulerX whether the job succeeded.
| Return value | Meaning |
|---|---|
| `new ProcessResult(true)` | Job succeeded. |
| `new ProcessResult(false, errorMsg)` | Job failed. Throwing an exception has the same effect. |
| `new ProcessResult(true, result)` | Job succeeded with a result string. |
The result string must not exceed 1,000 bytes.
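Because of the 1,000-byte cap, it can be worth trimming the result string before returning it. The helper below is a sketch of one way to do that (the class and method names are hypothetical, not part of SchedulerX); it cuts on a UTF-8 character boundary so a multi-byte character is never split:

```java
import java.nio.charset.StandardCharsets;

public class ResultTruncate {
    // Trim a result string to at most maxBytes of UTF-8,
    // backing up so the cut never lands inside a multi-byte character.
    static String truncateToBytes(String s, int maxBytes) {
        byte[] bytes = s.getBytes(StandardCharsets.UTF_8);
        if (bytes.length <= maxBytes) {
            return s;
        }
        int end = maxBytes;
        // UTF-8 continuation bytes match the bit pattern 10xxxxxx.
        while (end > 0 && (bytes[end] & 0xC0) == 0x80) {
            end--;
        }
        return new String(bytes, 0, end, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        String big = "x".repeat(1500);
        String cut = truncateToBytes(big, 1000);
        System.out.println(cut.getBytes(StandardCharsets.UTF_8).length); // 1000
        System.out.println(truncateToBytes("ok", 1000)); // ok
    }
}
```

A processor could then return `new ProcessResult(true, truncateToBytes(result, 1000))` to stay inside the limit.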