By default, an application that is created in SchedulerX supports up to 1,000 jobs. However, your application may require more jobs in specific scenarios, and different periodic jobs may need to be scheduled at different times in production environments. This topic describes how to configure an application to support tens of thousands or even hundreds of thousands of periodic jobs, and how to schedule these jobs at different times.
If you want to schedule each periodic job at a different time, you cannot use MapReduce distributed jobs, because all tasks of a MapReduce distributed job are scheduled at the same time.
- Configure IoT switches
You can configure Internet of Things (IoT) switches to periodically turn IoT devices on or off. Each IoT switch is turned on or off at a different time. In this scenario, you can create a periodic standalone job for each IoT switch. However, this requires you to create tens of thousands or hundreds of thousands of standalone jobs.
- Configure business monitoring
To configure business monitoring, you must create alert rules. Typically, the system matches monitoring data against the alert rules at an interval of 1 minute and generates an alert each time the data matches an alert rule. In most cases, a single MapReduce distributed job can handle this workload. However, alert rules vary in complexity, and the system requires more time to evaluate a complex rule. As a result, the job may need to wait for a slow task to complete before it can be triggered again. To avoid this issue, create a periodic standalone job for each alert rule. If you monitor large-scale business, you may need to create tens of thousands or even hundreds of thousands of standalone jobs.
- Use SchedulerX as a base
You develop a job scheduling platform for your enterprise based on SchedulerX and provide a POP API that is used to create jobs. In this scenario, the employees of the enterprise may create tens of thousands or even hundreds of thousands of jobs.
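The business monitoring scenario above can be made concrete with a little arithmetic. The sketch below is plain Java, not the SchedulerX API, and the rule durations are made-up illustrative values: a single shared job must evaluate every rule before it can fire again, so its minimum period is the sum of all rule durations, while one standalone job per rule only needs a period that covers its own rule.

```java
import java.util.Arrays;

public class AlertRulePeriods {
    // One shared periodic job must finish every rule before it can fire again,
    // so its minimum period is the sum of all rule durations.
    static long sharedJobPeriodMillis(long[] ruleDurationsMillis) {
        return Arrays.stream(ruleDurationsMillis).sum();
    }

    // With one standalone job per rule, each job's period only has to cover
    // its own rule, so a slow rule no longer delays the others.
    static long slowestStandaloneJobMillis(long[] ruleDurationsMillis) {
        return Arrays.stream(ruleDurationsMillis).max().orElse(0);
    }

    public static void main(String[] args) {
        // Hypothetical workload: 999 simple rules at 10 ms each,
        // plus one complex rule that takes 55 seconds.
        long[] durations = new long[1000];
        Arrays.fill(durations, 10);
        durations[999] = 55_000;

        // The shared job cannot keep a 1-minute schedule:
        // 999 * 10 + 55_000 = 64_990 ms > 60_000 ms.
        System.out.println(sharedJobPeriodMillis(durations));
        // Per-rule jobs can: even the slowest rule fits in 60 seconds.
        System.out.println(slowestStandaloneJobMillis(durations));
    }
}
```

This is why the monitoring scenario favors one standalone job per alert rule despite the large number of jobs it creates.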
To configure an application to support hundreds of thousands of periodic jobs, perform the following steps:
- Contact SchedulerX technical support to enable auto scaling for the application. After auto scaling is enabled, the system automatically creates a sub-application each time the number of jobs in the application reaches the upper limit of 1,000.
- Use agent version 1.2.1 or later to connect to SchedulerX and enable the shared container pool feature. In this topic, a Spring Boot application is used as an example. For more information about how to connect a Spring Boot application to SchedulerX, see Connect a Spring Boot application to SchedulerX. For information about how to connect other types of applications, see the relevant topics.
- Add the agent dependency to the pom.xml file of the application. The agent version must be 1.2.1 or later, because earlier versions do not support the shared container pool feature. If your application uses an earlier agent version, update the agent to 1.2.1 or later.
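A dependency declaration along the following lines can be added to pom.xml. The coordinates below are illustrative; check the SchedulerX documentation for the exact groupId, artifactId, and latest version for your region:

```xml
<!-- SchedulerX agent (Spring Boot starter); version 1.2.1 is the minimum
     that supports the shared container pool feature. -->
<dependency>
    <groupId>com.aliyun.schedulerx</groupId>
    <artifactId>schedulerx2-spring-boot-starter</artifactId>
    <version>1.2.1</version>
</dependency>
```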
- Add the shared container pool configurations to the configuration file of the application.
```
# Enable all jobs to share the container pool.
spring.schedulerx2.shareContainerPool=true
# Specify the size of the container pool.
spring.schedulerx2.sharePoolSize=128
```
Note: If the shared container pool feature is disabled, the system creates a container pool each time it triggers a job. As a result, the agent may be overloaded and stop working as expected.
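The benefit of the shared container pool can be illustrated with a plain Java thread pool. This is an analogy, not the agent's implementation: many jobs reuse one fixed-size pool, so the number of threads stays bounded no matter how many jobs are triggered, whereas allocating a pool per job would multiply thread usage by the job count.

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class SharedPoolDemo {
    // Run jobCount trivial jobs on one shared fixed-size pool and report
    // the largest number of threads the pool ever used.
    static int runShared(int jobCount, int poolSize) throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                poolSize, poolSize, 0L, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<>());
        AtomicInteger done = new AtomicInteger();
        for (int i = 0; i < jobCount; i++) {
            pool.execute(done::incrementAndGet);
        }
        pool.shutdown();
        pool.awaitTermination(30, TimeUnit.SECONDS);
        if (done.get() != jobCount) {
            throw new IllegalStateException("not all jobs completed");
        }
        return pool.getLargestPoolSize();
    }

    public static void main(String[] args) throws InterruptedException {
        // 10,000 jobs share one 128-thread pool: thread usage stays bounded,
        // while a pool-per-job design would create far more threads.
        int largest = runShared(10_000, 128);
        System.out.println(largest <= 128);
    }
}
```

The `sharePoolSize` setting above plays the role of `poolSize` here: a value such as 128 caps concurrent job execution on one agent while still serving a very large number of jobs.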