In job scheduling, you can configure two application-level throttling policies: enable throttling for an application and specify a queue size, or configure a queue that supports job preemption based on priorities. These policies help ensure the stability of the scheduling system and the timeliness of mission-critical jobs. This topic describes how to manage application-level resources efficiently and schedule jobs based on priorities.
Scenarios
During peak periods of job scheduling, the system handles heavy workloads. For example, if a large number of daily jobs start at the same time, the backend system may fail to handle the concurrent workloads without effective scheduling measures, which may cause a system crash. To resolve this issue, a queue mechanism is introduced to limit the number of jobs that can run at the same time in an application. The queue mechanism gradually schedules the queued jobs to ensure efficient use of system resources and stable application operation.
Configure policies for application-level throttling
Enable throttling for an application and specify a queue size
When you create or edit an application group, turn on Flow Control in the Advanced Configuration section. By default, Flow Control is turned off. For information about how to create an application group, see Create an application group.
After you turn on Flow Control, you can configure the Number of concurrent task instances parameter. This parameter specifies the maximum number of jobs that can run at the same time for the application. If the number of jobs exceeds the value of this parameter, excess jobs wait in the queue and are not discarded.
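The behavior is similar to placing a FIFO queue in front of a fixed number of workers. The following Java sketch is for illustration only and is not the SchedulerX implementation; the ThrottledJobQueue class and its parameters are hypothetical. It shows how a concurrency limit lets excess jobs wait in an unbounded queue instead of being discarded.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Illustrative sketch only: a FIFO queue in front of a fixed number of worker
// threads, mimicking the effect of the Number of concurrent task instances parameter.
public class ThrottledJobQueue {
    private final ExecutorService workers;

    public ThrottledJobQueue(int maxConcurrentInstances) {
        // A fixed-size pool backed by an unbounded FIFO queue: jobs beyond the
        // limit wait in the queue instead of being rejected or discarded.
        this.workers = Executors.newFixedThreadPool(maxConcurrentInstances);
    }

    public void submit(String jobName, Runnable job) {
        workers.submit(() -> {
            System.out.println(jobName + " started");
            job.run();
            System.out.println(jobName + " finished");
        });
    }

    public void shutdown() {
        // Jobs that are already queued still run to completion after shutdown is called.
        workers.shutdown();
    }
}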

Create three jobs in the application group and click Run once in the Operation column of each job.

In the left-side navigation pane, click Instances. On the page that appears, click the Task instance List tab. On this tab, you can see that the hello_jobA job, which was triggered first, is running, and that the hello_jobB and hello_jobC jobs are waiting in the queue.
After the hello_jobA job is complete, the hello_jobB job is dequeued and starts to run.
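Assuming the hypothetical ThrottledJobQueue sketch shown earlier, the following driver reproduces the observed order: with the concurrency limit set to 1, hello_jobA runs first while hello_jobB and hello_jobC wait in the queue and run one after another.

public class FifoDemo {
    public static void main(String[] args) {
        ThrottledJobQueue queue = new ThrottledJobQueue(1); // concurrency limit = 1
        for (String name : new String[] {"hello_jobA", "hello_jobB", "hello_jobC"}) {
            queue.submit(name, () -> {
                try {
                    Thread.sleep(1000); // simulate job work
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
        queue.shutdown(); // queued jobs are not discarded; they finish in FIFO order
    }
}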
Configure a queue that supports job preemption based on priorities
The following figure shows the priority-based queue mechanism of Yet Another Resource Negotiator (YARN). The queue mechanism isolates resources for jobs with different priorities.

The following procedure shows how to combine application-level throttling with job priorities in SchedulerX to configure a queue that supports priority-based job preemption.
Each job can have a priority. In an application, if multiple jobs are submitted for scheduling at the same time, the jobs with higher priorities are scheduled first.
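A minimal sketch of such a queue, assuming a single worker in front of a priority queue, is shown below. It is for illustration only and is not the SchedulerX implementation; the PriorityJobQueue class and the numeric priority values are hypothetical. Whenever the worker becomes idle, it takes the waiting job with the highest priority, so a high-priority job that is submitted later can move ahead of a low-priority job that is still waiting.

import java.util.concurrent.PriorityBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Illustrative sketch only: a single worker thread in front of a priority queue.
public class PriorityJobQueue {

    // A job with a numeric priority. A larger value means a higher priority.
    static class PrioritizedJob implements Runnable, Comparable<PrioritizedJob> {
        final String name;
        final int priority;
        final Runnable body;

        PrioritizedJob(String name, int priority, Runnable body) {
            this.name = name;
            this.priority = priority;
            this.body = body;
        }

        @Override
        public void run() {
            System.out.println("running " + name + " (priority " + priority + ")");
            body.run();
        }

        @Override
        public int compareTo(PrioritizedJob other) {
            // Order the queue so that the job with the largest priority value is taken first.
            return Integer.compare(other.priority, this.priority);
        }
    }

    // One worker thread (concurrency limit = 1) draining a PriorityBlockingQueue.
    private final ThreadPoolExecutor worker = new ThreadPoolExecutor(
            1, 1, 0L, TimeUnit.MILLISECONDS, new PriorityBlockingQueue<>());

    public void trigger(String name, int priority, Runnable body) {
        // execute() is used instead of submit() so that the queued elements stay
        // comparable PrioritizedJob instances rather than FutureTask wrappers.
        worker.execute(new PrioritizedJob(name, priority, body));
    }

    public void shutdown() {
        worker.shutdown();
    }
}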

Enable throttling for the application named dts-all.hxm, set the Number of concurrent task instances parameter to 1 to make the behavior easy to observe, and then create three jobs that are assigned high, medium, and low priorities. Trigger the jobs once each in the following order: medium priority, low priority, and high priority.

In the left-side navigation pane, click Instances. On the page that appears, click the Task instance List tab. On this tab, you can see that no job is in the queue when the medium-priority job is triggered. Therefore, the medium-priority job is executed first.
After the medium-priority job is complete, a slot in the queue becomes available. The high-priority job preempts the slot ahead of the low-priority job, even though the low-priority job was triggered earlier. As a result, the high-priority job is executed before the low-priority job.
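Using the hypothetical PriorityJobQueue sketch shown earlier, triggering the jobs in the order medium, low, high reproduces this behavior: the medium-priority job runs immediately, and when it is complete, the high-priority job is taken from the queue before the low-priority job.

public class PreemptionDemo {
    public static void main(String[] args) {
        PriorityJobQueue queue = new PriorityJobQueue();
        Runnable work = () -> {
            try {
                Thread.sleep(1000); // simulate job work
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        };
        queue.trigger("medium-priority-job", 2, work); // runs immediately, queue is empty
        queue.trigger("low-priority-job", 1, work);    // waits in the queue
        queue.trigger("high-priority-job", 3, work);   // waits, but is taken first
        queue.shutdown();
        // Expected execution order: medium-priority-job, high-priority-job, low-priority-job
    }
}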
FAQ
Can I specify high priorities for my jobs to ensure that my jobs are preferentially scheduled before the jobs of other users?
No. Job priority settings take effect only within a single application and do not affect the jobs of other applications.
Can I use the queue mechanism and throttling for minute-level jobs?
The queue mechanism and throttling are intended for scenarios in which job scheduling bursts occur. If jobs are scheduled at a high rate every minute, we recommend that you do not use the queue mechanism and throttling, because jobs may keep accumulating in the queue. For minute-level jobs, perform client-side throttling or scale out the processing capacity of the client instead.
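For example, client-side throttling for minute-level jobs can be as simple as a counting semaphore around the job logic. The following sketch is illustrative only; the ClientSideThrottle class and the permit count are hypothetical and are not a SchedulerX API.

import java.util.concurrent.Semaphore;

// Illustrative sketch only: cap the number of job instances that run on this
// client at the same time, instead of queueing them in the scheduling system.
public class ClientSideThrottle {
    // Allow at most 4 concurrent executions on this client (hypothetical value).
    private static final Semaphore PERMITS = new Semaphore(4);

    public static void runThrottled(Runnable job) throws InterruptedException {
        PERMITS.acquire(); // block until a permit is available
        try {
            job.run();
        } finally {
            PERMITS.release(); // always return the permit
        }
    }
}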