
Microservices Engine: Scale a SchedulerX application to 100,000+ periodic jobs

Last Updated: Mar 11, 2026

Each SchedulerX application supports up to 1,000 jobs by default. When your workload requires tens of thousands of individually scheduled periodic jobs -- each with its own trigger time -- this limit blocks deployment. SchedulerX provides auto scaling and a shared container pool to remove the 1,000-job cap without overloading the agent.

When you need individual periodic jobs

Each periodic job in SchedulerX runs on its own schedule. MapReduce distributed jobs cannot replace individual periodic jobs because all tasks within a MapReduce job execute at the same time. The following scenarios typically require 10,000 to 100,000+ standalone jobs:

Scenario | Why individual jobs are needed | Scale
IoT device control | Each Internet of Things (IoT) switch turns a device on or off at a different time. One standalone job per switch. | 10,000--100,000+ jobs
Alert rule evaluation | A monitoring system evaluates alert rules once per minute. Complex rules take longer and can block the next trigger cycle. One standalone job per rule prevents slow rules from delaying others. | 10,000--100,000+ jobs
Enterprise scheduling platform | A team builds an internal scheduling platform on top of SchedulerX and exposes a PoP API for job creation. As adoption grows, the total job count reaches 100,000+. | 100,000+ jobs

How auto scaling and the shared container pool work

Two mechanisms work together to support high job counts:

Auto scaling -- When the number of jobs for an application reaches the upper limit of 1,000, SchedulerX automatically creates a sub-application. This distributes jobs across multiple logical applications while keeping them managed together. You do not need to manage sub-applications manually.

Shared container pool -- By default, each job trigger allocates its own container pool for execution. At 10,000+ jobs, this rapidly exhausts agent memory and CPU, causing the agent to become unresponsive. Enabling the shared container pool allows all jobs to share a single container pool, so resource consumption stays bounded regardless of job count.

Without shared pool (default):          With shared pool:
Job 1 trigger → Container pool 1       Job 1 trigger ─┐
Job 2 trigger → Container pool 2       Job 2 trigger ──┤
Job 3 trigger → Container pool 3       Job 3 trigger ──┼→ Shared container pool (size: 128)
  ...               ...                  ...            │
Job N trigger → Container pool N       Job N trigger ─┘
  → Agent memory exhausted               → Bounded resource usage

Enable auto scaling and the shared container pool

This procedure uses a Spring Boot application as an example. For other application types, see Quick start > Connect an agent to SchedulerX.

Step 1: Request auto scaling

Auto scaling is not self-service. Contact SchedulerX technical support to enable it for your application.

After auto scaling is enabled, SchedulerX creates a sub-application each time the job count for an application reaches 1,000.

Step 2: Verify the agent version

Open your pom.xml file and confirm the SchedulerX agent dependency version is 1.2.1 or later.

Important

Agent versions earlier than 1.2.1 do not support the shared container pool. If your version is older, update the version number in your dependency declaration before proceeding.
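For reference, the dependency declaration in pom.xml typically looks like the following. The groupId and artifactId shown here are illustrative assumptions -- confirm the exact coordinates against the existing dependency in your project before changing only the version.

```xml
<!-- SchedulerX agent for Spring Boot; the version must be 1.2.1 or later.
     Coordinates are illustrative: keep whatever groupId/artifactId your
     project already declares and update only the version. -->
<dependency>
    <groupId>com.aliyun.schedulerx</groupId>
    <artifactId>schedulerx2-spring-boot-starter</artifactId>
    <version>1.2.1</version>
</dependency>
```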

Step 3: Enable the shared container pool

Add the following properties to your Spring Boot configuration file (for example, application.properties):

# Enable the shared container pool so all jobs reuse a single pool
spring.schedulerx2.shareContainerPool=true

# Set the size of the container pool
spring.schedulerx2.sharePoolSize=128

Property | Description
spring.schedulerx2.shareContainerPool | When set to true, all jobs share one container pool instead of each job creating its own pool.
spring.schedulerx2.sharePoolSize | Specifies the size of the shared container pool. The example value is 128.

Important

Without the shared container pool enabled, each job trigger creates its own container pool. At 10,000+ jobs, this rapidly exhausts agent memory and CPU, causing the agent to become unresponsive.

Production best practices

Tune the pool size

Start with a pool size of 128 as shown in the configuration example and monitor agent CPU and memory usage. Adjust based on your job characteristics and resource consumption.
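A rough starting point can be estimated with Little's law: expected concurrent executions ≈ total trigger rate × average job duration, plus headroom. The numbers below are illustrative assumptions, not SchedulerX defaults; the helper is a back-of-envelope sketch, not part of the SDK.

```java
/**
 * Back-of-envelope pool sizing via Little's law:
 * concurrent executions ≈ trigger rate × average job duration.
 * All inputs are example assumptions; measure your own workload.
 */
public class PoolSizing {

    /**
     * @param jobs                total number of periodic jobs
     * @param triggersPerJobPerSec trigger frequency per job (e.g. 1/60 for once a minute)
     * @param avgDurationSec      average job runtime in seconds
     * @param headroom            safety multiplier (e.g. 1.5)
     * @return suggested shared pool size
     */
    static long estimatePoolSize(int jobs, double triggersPerJobPerSec,
                                 double avgDurationSec, double headroom) {
        double concurrent = jobs * triggersPerJobPerSec * avgDurationSec;
        return Math.round(concurrent * headroom);
    }

    public static void main(String[] args) {
        // 10,000 jobs firing once per minute, 0.5 s average runtime, 1.5x headroom
        long size = estimatePoolSize(10_000, 1.0 / 60, 0.5, 1.5);
        System.out.println(size); // 125
    }
}
```

Under these assumptions the estimate lands near the example value of 128, which suggests that value is a reasonable default for minute-level jobs with sub-second runtimes.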

Stagger job schedules

Avoid clustering trigger times around the same second. When thousands of jobs share the same CRON expression (for example, 0 * * * * ? for every minute at second 0), they all fire simultaneously and create burst load on the agent.

Distribute CRON expressions across different seconds:

# Instead of all jobs at second 0:
0 * * * * ?    # All 10,000 jobs fire at :00

# Spread across 60 seconds:
0 * * * * ?    # Jobs 1-167 fire at :00
1 * * * * ?    # Jobs 168-334 fire at :01
2 * * * * ?    # Jobs 335-501 fire at :02
...
59 * * * * ?   # Jobs 9854-10000 fire at :59

This distributes the load more evenly across time and reduces peak concurrent load on the agent.
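One way to stagger schedules without maintaining the mapping by hand is to derive a stable second offset from each job's identifier, as in this minimal sketch. The job IDs and helper names are examples, not SchedulerX APIs.

```java
/**
 * Staggers once-per-minute jobs across the 60 seconds of a minute by
 * hashing each job's identifier to a stable second offset. Illustrative
 * sketch only; the helper names are not part of the SchedulerX SDK.
 */
public class CronStagger {

    /** Maps a job identifier to a deterministic second in [0, 59]. */
    static int staggerSecond(String jobId) {
        // floorMod guards against negative hashCode values
        return Math.floorMod(jobId.hashCode(), 60);
    }

    /** Builds a once-per-minute CRON expression firing at the staggered second. */
    static String staggeredEveryMinuteCron(String jobId) {
        return staggerSecond(jobId) + " * * * * ?";
    }

    public static void main(String[] args) {
        // Different job IDs usually land on different seconds,
        // spreading trigger load across the minute.
        System.out.println(staggeredEveryMinuteCron("alert-rule-00042"));
        System.out.println(staggeredEveryMinuteCron("alert-rule-00043"));
    }
}
```

Because the offset is derived from the job ID, re-creating or migrating a job always produces the same schedule, with no central counter to coordinate.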

Monitor sub-application creation

Check the SchedulerX console periodically to confirm that sub-applications are created as expected when the job count grows.

Handle job execution failures

At high job counts, individual job failures are expected. Design your jobs to be idempotent so that retries do not produce duplicate side effects. Configure appropriate timeout values to prevent slow jobs from blocking the shared container pool.
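The idempotency pattern above can be sketched as follows: record a business key before producing side effects, so a retried trigger becomes a no-op. The in-memory set stands in for a durable store (a database unique constraint, Redis SETNX, and so on); the interface is illustrative, not the SchedulerX SDK.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Idempotency sketch: the first execution for a business key wins;
 * retries and duplicate triggers for the same key are skipped.
 * Replace the in-memory set with a durable store in production.
 */
public class IdempotentJob {

    private final Set<String> processed = ConcurrentHashMap.newKeySet();

    /** Returns true if the side effect ran, false if this key was a retry. */
    public boolean runOnce(String businessKey, Runnable sideEffect) {
        // Set.add is atomic: only the first caller for a given key proceeds.
        if (!processed.add(businessKey)) {
            return false; // duplicate trigger or retry: skip
        }
        sideEffect.run();
        return true;
    }

    public static void main(String[] args) {
        IdempotentJob job = new IdempotentJob();
        int[] count = {0};
        job.runOnce("order-123:2026-03-11T10:00", () -> count[0]++);
        job.runOnce("order-123:2026-03-11T10:00", () -> count[0]++); // retry, skipped
        System.out.println(count[0]); // 1
    }
}
```

Keying on the job ID plus the scheduled trigger time (as in the example) lets a retry of one trigger be suppressed while the next cycle still runs.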

Quotas and limits

Item | Value | Notes
Jobs per application (before auto scaling) | 1,000 | Use auto scaling to exceed this limit
Minimum agent version for shared container pool | 1.2.1 | Earlier versions do not support this feature
Shared container pool size (example configuration) | 128 | Configurable via the sharePoolSize property

To enable auto scaling and increase the total job count beyond 1,000, contact SchedulerX technical support.
