
SchedulerX: Java jobs

Last Updated: Mar 11, 2026

SchedulerX runs Java jobs inside your application processes. You implement a processor class, configure a schedule, and SchedulerX handles triggering, distribution, and lifecycle management across your worker nodes.

Execution modes

Choose an execution mode based on how many workers should run the job and whether you need parallel task processing.

Mode | Workers | Processor class | Use case
Standalone operation | One random worker from the groupId group | JavaProcessor | Single-node jobs: report generation, database cleanup, scheduled emails
Broadcast run | All workers in the groupId group simultaneously; waits for every worker to finish | JavaProcessor | Operations that must run on every node: cache refresh, configuration sync, local file cleanup
Visual MapReduce | Distributed across workers. Requires Professional Edition. Maximum 1,000 tasks. | MapJobProcessor | Smaller batch jobs where you need to query detailed execution records, operational logs, and task stacks by keyword
MapReduce | Distributed across workers. Only running information is queryable. | MapJobProcessor | Large-scale parallel processing. Recommended when the number of tasks is less than 1,000,000.
Shard run | Split across static and dynamic shards | MapJobProcessor | High-volume data processing that benefits from shard-based parallelism

Set the processor class path

Set the processor class path to the fully qualified class name of your implementation:

com.apache.armon.test.schedulerx.processor.MySimpleJob

SchedulerX resolves the processor class differently depending on whether you upload a JAR package:

Deployment method | Behavior | When to republish
Without a JAR package | SchedulerX searches the classpath of your application process | Recompile and republish each time you modify the job
With a JAR package | SchedulerX dynamically loads the processor from the uploaded JAR | No republish needed after job changes

Implement a JavaProcessor

Use JavaProcessor for standalone operation and broadcast run modes. Override the process method and optionally implement lifecycle hooks.

Method | Required | Description
process(JobContext context) throws Exception | Yes | Main job logic. Returns a ProcessResult to indicate success or failure.
preProcess(JobContext context) throws Exception | No | Runs before process. Use this to initialize or reset state.
postProcess(JobContext context) | No | Runs after process completes. Use this for cleanup.
kill(JobContext context) | No | Called when a running job is manually stopped. Set a flag to interrupt your processing loop.
Important

If your job runs as a Spring Singleton bean, you must reset mutable state in preProcess. Otherwise, a flag set by a previous kill call prevents the next execution from running. See the terminable job example below.
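The stale-flag failure mode can be reproduced without the SchedulerX SDK. The sketch below uses a stand-in class (SimulatedProcessor is hypothetical, not a SchedulerX API): after a kill, the flag stays true on the singleton, so the next run does no work unless preProcess resets it.

```java
// Stand-in demonstrating the Singleton stale-flag pitfall; not the SchedulerX API.
public class SingletonFlagDemo {
    static class SimulatedProcessor {
        private volatile boolean stop = false;

        void preProcess() {
            stop = false; // Without this reset, a prior kill blocks the next run.
        }

        // Returns the number of loop iterations actually performed.
        int process() {
            int iterations = 0;
            while (!stop && iterations < 5) {
                iterations++;
            }
            return iterations;
        }

        void kill() {
            stop = true;
        }
    }

    public static void main(String[] args) {
        SimulatedProcessor singleton = new SimulatedProcessor(); // One bean, reused across runs.

        singleton.kill();                        // A previous run was stopped.
        System.out.println(singleton.process()); // 0 iterations: flag still set.

        singleton.preProcess();                  // Reset restores normal behavior.
        System.out.println(singleton.process()); // 5 iterations.
    }
}
```

The same reasoning applies to any mutable field on the bean, not just a stop flag.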

Basic job

A minimal JavaProcessor that logs a message and returns success:

@Component
public class MyProcessor1 extends JavaProcessor {

    @Override
    public ProcessResult process(JobContext context) throws Exception {
        System.out.println("Hello, schedulerx2.0!");
        return new ProcessResult(true);
    }
}

Terminable job

A JavaProcessor that supports graceful shutdown through the kill method. When SchedulerX calls kill, the stop flag breaks the processing loop.

@Component
public class MyProcessor2 extends JavaProcessor {
    private volatile boolean stop = false;

    @Override
    public ProcessResult process(JobContext context) throws Exception {
        int N = 10000;
        while (!stop && N >= 0) {
            //TODO
            N--;
        }
        return new ProcessResult(true);
    }

    @Override
    public void kill(JobContext context) {
        stop = true;
    }

    @Override
    public void preProcess(JobContext context) {
        // Reset the flag for Singleton beans -- without this,
        // a previous kill signal would block the next run.
        stop = false;
    }
}

Implement a MapJobProcessor

Use MapJobProcessor for Visual MapReduce, MapReduce, and shard run modes. In addition to process, implement the map method to distribute tasks across workers.

Method | Required | Description
process(JobContext context) throws Exception | Yes | Handles both root-level dispatch and task processing. Call isRootTask(context) to determine which role the current invocation plays.
map(List<? extends Object> taskList, String taskName) | Yes | Distributes a list of tasks to workers. Call this from process when handling the root task.
postProcess(JobContext context) | No | Runs after all tasks finish. Use this for aggregation or final cleanup.
kill(JobContext context) | No | Called when the job is stopped.

Distributed batch processing with the Map model

The following MapJobProcessor scans a database table in parallel:

  1. The root task queries the min and max IDs from the target table.

  2. It splits the ID range into pages and calls map(taskList, "Level1Dispatch") to distribute them across workers.

  3. Each worker receives a PageTask with a specific ID range to process.

  4. After every page task finishes, SchedulerX calls postProcess for final aggregation or cleanup.

@Component
public class ScanSingleTableJobProcessor extends MapJobProcessor {
    private static final int pageSize = 100;

    static class PageTask {
        private int startId;
        private int endId;

        public PageTask(int startId, int endId) {
            this.startId = startId;
            this.endId = endId;
        }

        public int getStartId() {
            return startId;
        }

        public int getEndId() {
            return endId;
        }
    }

    @Override
    public ProcessResult process(JobContext context) {
        String taskName = context.getTaskName();
        Object task = context.getTask();

        if (isRootTask(context)) {
            System.out.println("start root task");
            Pair<Integer, Integer> idPair = queryMinAndMaxId();
            int minId = idPair.getFirst();
            int maxId = idPair.getSecond();

            List<PageTask> taskList = Lists.newArrayList();
            // Split the ID range into pages of at most pageSize IDs each.
            // Stepping by pageSize avoids the zero-step infinite loop that
            // dividing the range by pageSize would cause for small ranges.
            for (int i = minId; i < maxId; i += pageSize) {
                taskList.add(new PageTask(i, Math.min(i + pageSize, maxId)));
            }
            return map(taskList, "Level1Dispatch"); // Distribute the page tasks to workers.

        } else if (taskName.equals("Level1Dispatch")) {
            PageTask record = (PageTask) task;
            long startId = record.getStartId();
            long endId = record.getEndId();
            //TODO
            return new ProcessResult(true);
        }

        return new ProcessResult(true);
    }

    @Override
    public void postProcess(JobContext context) {
        //TODO
        System.out.println("all tasks are finished.");
    }

    private Pair<Integer, Integer> queryMinAndMaxId() {
        //TODO select min(id),max(id) from xxx
        return null;
    }
}
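The page-splitting step in the root task can be exercised on its own. The sketch below is plain Java with no SchedulerX dependency; splitIntoPages and the Range record are hypothetical names introduced here, not SDK APIs. It shows the half-open ID ranges produced for a table whose IDs span 0 to 250 with a page size of 100.

```java
import java.util.ArrayList;
import java.util.List;

public class PageSplitDemo {
    // Half-open ID range [startId, endId) handed to one page task.
    record Range(int startId, int endId) {}

    // Split [minId, maxId) into pages of at most pageSize IDs each.
    static List<Range> splitIntoPages(int minId, int maxId, int pageSize) {
        List<Range> pages = new ArrayList<>();
        for (int i = minId; i < maxId; i += pageSize) {
            pages.add(new Range(i, Math.min(i + pageSize, maxId)));
        }
        return pages;
    }

    public static void main(String[] args) {
        // 250 IDs with a page size of 100 -> [0,100), [100,200), [200,250)
        System.out.println(splitIntoPages(0, 250, 100));
    }
}
```

The last page is shorter when the range is not an exact multiple of the page size, and a range smaller than one page still yields exactly one task.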

ProcessResult reference

Every process method must return a ProcessResult that tells SchedulerX whether the job succeeded.

Return value | Meaning
new ProcessResult(true) | Job succeeded.
new ProcessResult(false, errorMsg) | Job failed. Throwing an exception has the same effect.
new ProcessResult(true, result) | Job succeeded with a result string.
Important

The result string must not exceed 1,000 bytes.
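A typical pattern is to build the ProcessResult from a try/catch so that failures carry a message. The sketch below is illustrative only: the minimal ProcessResult class is a stand-in mirroring the constructors documented above (included so the example is self-contained), and runJob is a hypothetical job body.

```java
public class ResultDemo {
    // Stand-in mirroring the ProcessResult constructors documented above.
    static class ProcessResult {
        final boolean status;
        final String result;

        ProcessResult(boolean status) {
            this(status, null);
        }

        ProcessResult(boolean status, String result) {
            this.status = status;
            this.result = result;
        }
    }

    // Hypothetical job body: succeed with a short result string, or fail
    // with the error message (keep either under 1,000 bytes).
    static ProcessResult runJob(String input) {
        try {
            int count = Integer.parseInt(input); // Stand-in for real work.
            return new ProcessResult(true, "processed " + count + " rows");
        } catch (Exception e) {
            return new ProcessResult(false, String.valueOf(e));
        }
    }
}
```

Returning new ProcessResult(false, errorMsg) and throwing the exception are equivalent from SchedulerX's point of view; catching it yourself lets you control the message text.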