Queue triggers
- You can create a single queue trigger for a queue. This also applies to queues that are shared with multiple folders.
- Trigger time zones: The time zone set on a trigger is not restricted by the time zone of the tenant. However, if you use non-working days calendars, you cannot set a different time zone. Queue triggers are launched according to the time zone defined at the trigger level and based on queue item processing, and they are disabled based on the trigger time zone.
Queue triggers instantly start a process upon the trigger creation or whenever you add a new item to a queue. The trigger runs in the environment associated with the selected process.
When a large number of queue items are added in a short time (for example, using AddQueueItem or BulkAddQueueItems), the process may not start immediately. To handle such situations, there is a recheck mechanism implemented to ensure the process is triggered once resources are available.
The implementation of queue triggers is optimized for consuming processes that loop internally to process all available queue items before exiting. If a process does not follow this pattern, the resulting behavior may be suboptimal and might not meet the desired business requirements.
These options help you parameterize the rules for process triggering:
| Option | Description |
|---|---|
| Minimum number of items to trigger the first job | The item-processing job is only started after the targeted queue has at least this number of new items. Deferred queue items are not counted. |
| Maximum number of pending and running jobs allowed simultaneously | The maximum number of allowed pending and running jobs, counted together. For 2 or more jobs allowed simultaneously, the third option needs to be defined as described below. |
| Another job is triggered for each __ new item(s). | A new job is triggered for each number of new items added on top of the number of items defined for the first option. Only enabled if there are 2 or more jobs allowed simultaneously (defined using the option described above). |
| After completing jobs, reassess conditions and start new jobs if possible | If selected, the queue trigger is evaluated after each job completion, and new jobs are started if robots are available. This complements the automatic check that occurs every 30 minutes, and helps ensure that remaining queue items are processed with no lags when possible. |
To handle queue items that cannot be processed at the moment they are enqueued (including retried items), a check for unprocessed items runs by default once every 30 minutes. If the triggering condition is met, the trigger is launched again.
You can use the Queues - Unprocessed queue items check frequency (minutes) parameter to adjust the default 30-minute check interval.
This check ensures all items in the queue are processed in the following situations:
- Queue items are added to the queue much faster than they can be processed with the available resources.
- Queue items are added to a queue during non-working days, but they can only be processed during working hours.
- Queue item processing is postponed until a later time. Once that time has elapsed, the items become eligible and are picked up by the 30-minute check.
note
Due to the default 30-minute check, there's a risk of resource obstruction during non-business hours. To avoid this, make sure there are no unprocessed items at the end of the working day. If that is not a possibility, ensure the triggered process does not require human intervention.
Queue trigger processing algorithm
Variables
| Variable | Description |
|---|---|
| newItems | Number of new queue items available in the queue. |
| minItemsToTrigger | Minimum number of items required to trigger the first job. The first job starts only when there are at least this many new items. |
| maxConcurrentJobs | Maximum number of pending and running jobs allowed simultaneously. This is the ceiling on parallel jobs. |
| itemsPerJob | Number of additional items needed to trigger each subsequent job. When minItemsToTrigger is reached, one job starts. For every additional itemsPerJob items beyond minItemsToTrigger, one more job starts, up to maxConcurrentJobs. |
| pendingJobs | Number of jobs currently in Pending state. |
| runningJobs | Number of jobs in Resumed, Running, Stopping, or Terminating states. |
| enablePendingJobsStrategy | Boolean setting that determines whether running jobs are counted against the remaining capacity. |
The Triggers - Queue triggers - Enable pending jobs strategy setting determines how Orchestrator calculates remaining capacity—the number of additional jobs it is allowed to schedule:
- True – Remaining capacity = maxConcurrentJobs minus pendingJobs. Use this setting when running jobs are expected to have already claimed their queue items from New status.
- False – Remaining capacity = maxConcurrentJobs minus pendingJobs minus runningJobs. Use this setting when running jobs are expected to have not yet claimed their queue items from New status.
Orchestrator schedules the smaller of two values: the remaining capacity and the number of desired jobs based on queue item counts. The setting therefore controls how conservatively or aggressively new jobs are scheduled.
Formulas
1. Remaining capacity
if enablePendingJobsStrategy = true:
remainingCapacity = maxConcurrentJobs - pendingJobs
if enablePendingJobsStrategy = false:
remainingCapacity = maxConcurrentJobs - pendingJobs - runningJobs
2. Desired jobs (before capacity limit)
if newItems < minItemsToTrigger:
desiredJobs = 0
else:
desiredJobs = 1 + (newItems - minItemsToTrigger) / itemsPerJob [integer division]
3. Jobs to schedule
jobsToSchedule = min(desiredJobs, remainingCapacity)
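The three formulas above can be sketched in Python. This is a minimal illustration; the function and variable names are illustrative, not an Orchestrator API:

```python
def jobs_to_schedule(new_items, min_items_to_trigger, items_per_job,
                     max_concurrent_jobs, pending_jobs, running_jobs,
                     enable_pending_jobs_strategy):
    """Compute how many jobs a queue trigger evaluation would request."""
    # 1. Remaining capacity
    if enable_pending_jobs_strategy:
        remaining_capacity = max_concurrent_jobs - pending_jobs
    else:
        remaining_capacity = max_concurrent_jobs - pending_jobs - running_jobs

    # 2. Desired jobs, before the capacity limit (integer division)
    if new_items < min_items_to_trigger:
        desired_jobs = 0
    else:
        desired_jobs = 1 + (new_items - min_items_to_trigger) // items_per_job

    # 3. Jobs to schedule (clamped so the result is never negative)
    return max(0, min(desired_jobs, remaining_capacity))
```

For example, with a minimum of 31 items, one extra job per 10 items, and a cap of 3 concurrent jobs, 33 new items yield 1 job and 60 new items yield the full 3 jobs.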
Important notes
- This evaluation happens whenever a single queue item is added, including through bulk add.
- In order to ensure postponed (deferred) queue items are taken into account, every queue trigger has an associated schedule that rechecks the entire algorithm above. This happens by default every 30 minutes, but can be lowered to a minimum of 10 via the Queues - Unprocessed queue items check frequency (minutes) tenant setting.
note
Postponed items are only processed once their postpone time has elapsed. The built-in job that checks postponed items runs every 30 minutes. However, if a new item is added to the same queue after a postponed item has already become available, the queue trigger is retriggered immediately, and the postponed item may be picked up without waiting for the next scheduled check.
- The algorithm is designed to ensure a job starts once a threshold is reached, and that when the threshold is surpassed, additional jobs start to help process the increased backlog. It is not designed to distribute workload uniformly across machines, but to ensure that enough jobs are present.
- There is no hard link between the jobs started and the queue items they process. Job J is not necessarily assigned to queue items a, b, or c.
- Algorithm outcomes differ depending on whether queue items were added in bulk or individually, as this influences the number of evaluations performed.
- When using queue triggers, you may encounter the following alert:
The trigger could not create a job as the maximum number of jobs has been reached. This alert is informational and usually means a job was already running when Orchestrator tried to start another one. If you are comfortable with your current job capacity, you can safely ignore it.
Example
Scenario 1 - queue items added individually
For this scenario, the Enable pending jobs strategy parameter is set to False. For more information on how to update the value, see Tenant Settings.
Two jobs are used in this scenario:
- One adds 3 items per second for 20 seconds to the targeted queue (60 items total).
- One processes 1 item per second from the targeted queue.
The trigger is configured as follows:
- Minimum number of items to trigger the first job: 31
- Maximum number of pending and running jobs allowed simultaneously: 3
- Another job is triggered for each: 10 new item(s)
After the job that adds items to the queue starts:
- After 11 seconds (33 items), the first item-processing job is triggered.
- After another 4 seconds (12 items), the second item-processing job is triggered.
- After another 4 seconds (12 items), the third item-processing job is triggered.
By the time queue item addition ended, the first job had processed 9 items, the second 5 items, and the third 1 item—15 items in 20 seconds processed by three jobs.
The remaining 45 items (60 − 15) are processed by 3 jobs at 1 item per second each, completing in another 15 seconds. Total processing time: 35 seconds.
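The arithmetic in this scenario can be checked with a few lines of Python. This is an idealized model: each processing job handles exactly 1 item per second, and the job start times are taken from the timeline above:

```python
ADD_RATE = 3       # items added per second
ADD_DURATION = 20  # seconds of item addition
TOTAL_ITEMS = ADD_RATE * ADD_DURATION  # 60 items
JOB_STARTS = [11, 15, 19]              # job start times (seconds), from the timeline

# Items each job processes (1 item/second) before addition stops at t=20
processed_during_addition = [ADD_DURATION - start for start in JOB_STARTS]  # [9, 5, 1]
remaining = TOTAL_ITEMS - sum(processed_during_addition)                    # 45 items
# Three jobs drain the backlog in parallel at 1 item/second each
drain_time = remaining // len(JOB_STARTS)                                   # 15 seconds
total_time = ADD_DURATION + drain_time                                      # 35 seconds
```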
Scenario 2 - queue items added in bulk
For this scenario, the Enable pending jobs strategy parameter is set to False. For more information on how to update the value, see Tenant Settings.
If the 60 queue items from Scenario 1 are added with one bulk operation (when no job is running or pending), 3 jobs are created.
If at least one job finishes before the reevaluation schedule, further jobs are created.
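With the trigger settings from Scenario 1, the single bulk evaluation can be computed directly from the formulas above (variable names are illustrative):

```python
new_items = 60
min_items_to_trigger = 31
items_per_job = 10
max_concurrent_jobs = 3

# One evaluation covers the whole bulk operation (no jobs pending or running)
desired_jobs = 1 + (new_items - min_items_to_trigger) // items_per_job  # 1 + 29 // 10
jobs_to_schedule = min(desired_jobs, max_concurrent_jobs)
```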
Enable pending jobs strategy examples
These examples show how the Enable pending jobs strategy setting can cause over-scheduling when enabled and under-scheduling when disabled.
Trigger configuration
| Setting | Value |
|---|---|
| Minimum queue items to trigger | 1 |
| Maximum number of pending and running jobs | 1,000 |
| Another job triggered for each | 1 new item |
| After completing job, reassess | True |
| Enable pending jobs strategy | True (Part 1) |
Assumption: It takes 30 seconds for a job to move a queue item out of New status.
Part 1: Over-scheduling with Enable pending jobs strategy enabled
Step 1: 1,100 items are added to the queue in bulk, triggering 1,000 jobs.
Step 2: Only 200 robots are available. 200 jobs run and 800 remain pending.
| Jobs | Count |
|---|---|
| Running | 200 |
| Pending | 800 |
| Queue items | Status |
|---|---|
| 200 | In Process |
| 900 | New |
Step 3: The 200 running jobs complete, triggering 200 more jobs to run.
| Jobs | Count |
|---|---|
| Running | 200 |
| Pending | 600 |
| Queue items | Status |
|---|---|
| 200 | Successful |
| 200 | In Process |
| 700 | New |
Step 4: Because After completing job, reassess is enabled, the trigger runs again within seconds. With Enable pending jobs strategy enabled, Orchestrator assumes that all 200 running jobs have already moved their queue items out of New status—even though this actually takes 30 seconds.
Step 5: The trigger calculation at this point (with a minimum of 1 item and 1 item per job, the total desired jobs equal the number of New items; subtracting the jobs already counted gives the additional jobs needed):
remainingCapacity = maxConcurrentJobs - pendingJobs = 1000 - 600 = 400
desiredJobs = newItems - pendingJobs = 700 - 600 = 100
jobsToSchedule = min(100, 400) = 100
100 more jobs are scheduled. However, because the 200 running jobs have not yet moved their items out of New status (it takes 30 seconds), Orchestrator is treating 700 New items as uncovered when only ~500 truly need new jobs. The result is approximately 100 over-scheduled jobs.
Orchestrator does not track the relationship between individual jobs and individual queue items, so it cannot detect over-scheduling on its own. Identifying over-scheduling requires external state tracking.
Step 6: With Enable pending jobs strategy disabled, the same situation produces:
remainingCapacity = maxConcurrentJobs - pendingJobs - runningJobs = 1000 - 600 - 200 = 200
desiredJobs = newItems - pendingJobs - runningJobs = 700 - 600 - 200 = -100 → 0
jobsToSchedule = min(0, 200) = 0
No additional jobs are scheduled—which is closer to correct in this scenario.
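The two calculations in Steps 5 and 6 can be reproduced with a small helper. It mirrors the simplified per-item formulas used in this example, where the minimum is 1 and each new item requests one job; the function name is illustrative, not an Orchestrator API:

```python
def new_jobs(new_items, pending, running, max_jobs, pending_strategy_enabled):
    """Additional jobs scheduled in one trigger evaluation (min = 1, 1 item per job)."""
    if pending_strategy_enabled:
        remaining_capacity = max_jobs - pending
        desired = new_items - pending
    else:
        remaining_capacity = max_jobs - pending - running
        desired = new_items - pending - running
    return max(0, min(desired, remaining_capacity))

# Part 1, Step 5: strategy enabled -> about 100 jobs over-scheduled
enabled = new_jobs(700, 600, 200, 1000, True)    # min(100, 400) = 100
# Part 1, Step 6: strategy disabled -> no extra jobs
disabled = new_jobs(700, 600, 200, 1000, False)  # min(-100, 200) clamped to 0
```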
Part 2: Under-scheduling with Enable pending jobs strategy disabled
Step 1: In this scenario, jobs do not finish simultaneously. Of the initial 200 jobs, 100 complete after 60 seconds and the other 100 complete after 90 seconds.
Step 2: The first 100 jobs complete, triggering 100 more jobs.
| Jobs | Count |
|---|---|
| Running | 200 |
| Pending | 700 |
| Queue items | Status |
|---|---|
| 100 | Successful |
| 200 | In Process |
| 700 | New |
Step 3: Because After completing job, reassess is enabled, the trigger runs again within seconds.
Step 4: With Enable pending jobs strategy enabled:
remainingCapacity = maxConcurrentJobs - pendingJobs = 1000 - 700 = 300
desiredJobs = newItems - pendingJobs = 700 - 700 = 0
jobsToSchedule = min(0, 300) = 0
No additional jobs are scheduled. This is correct—there are 700 pending jobs for 700 New items.
Step 5: With Enable pending jobs strategy disabled:
remainingCapacity = maxConcurrentJobs - pendingJobs - runningJobs = 1000 - 700 - 200 = 100
desiredJobs = newItems - pendingJobs - runningJobs = 700 - 700 - 200 = -200 → 0
jobsToSchedule = min(0, 100) = 0
No additional jobs are scheduled here either. However, if there were fewer pending jobs, the formula would under-schedule: it assumes the 200 running jobs have not yet claimed their items from New status, even though 100 of them already have.
Step 6: Jobs not scheduled due to under-scheduling are eventually picked up when the Unprocessed queue items check runs on its periodic schedule. This is the purpose of that check.
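To make the under-scheduling case in Step 5 concrete, here is a hypothetical variation; the numbers below are mine, not from the scenario above. Suppose only 550 jobs are pending and 100 of the 200 running jobs have already claimed their items:

```python
new_items, pending, running, max_jobs = 700, 550, 200, 1000
already_claimed = 100  # hypothetical: running jobs that already moved an item out of New

# The disabled strategy assumes NO running job has claimed its item yet
remaining_capacity = max_jobs - pending - running          # 250
desired = new_items - pending - running                    # -50, clamped to 0
jobs_scheduled = max(0, min(desired, remaining_capacity))  # 0

# Items that actually still need a job: New items not covered by pending jobs
# or by running jobs that will still claim one
unclaimed_running = running - already_claimed              # 100
truly_uncovered = new_items - pending - unclaimed_running  # 50
under_scheduled = truly_uncovered - jobs_scheduled         # 50 items wait for the re-check
```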
Summary
| Setting | Assumption about running jobs | Consequence |
|---|---|---|
| Enabled (true) | Have already claimed their queue items | May over-schedule |
| Disabled (false) | Have not yet claimed their queue items | May under-schedule (mitigated by periodic re-checks) |
Execution target
You can configure several rules that determine where and how the associated processes are executed.
| Option | Description |
|---|---|
| Account | The process is executed under a specific account. Specifying only the account results in Orchestrator allocating the machine dynamically. Specifying both the account and the machine template means the job launches on that very account-machine pair. |
| Machine | The process is executed on one of the host machines attached to the selected machine template. Specifying only the machine template results in Orchestrator allocating the account dynamically. Specifying both the account and the machine template means the job launches on that very account-machine pair. Note: Make sure the required runtime licenses to execute the job are allocated to the associated machine template. |
| Hostname | After selecting a machine template, the Hostname option is displayed, allowing you to select the desired workstation/robot session to execute the process. All available sessions in the active folder are displayed, whether unconnected, disconnected, or connected. Note: Make sure the required runtime licenses to execute the job are allocated to the associated machine template. |
Queue triggers created using UiPath Activities
Queue triggers can also be created by RPA developers at design-time in Studio, using the When New Item Added to Queue activity of the UiPath.Core.Activities package.
Orchestrator identifies these types of triggers as package requirements, and the only way to add them in Orchestrator is from the Package Requirements page.
Any configuration set at design-time is reflected in Orchestrator and cannot be modified.
For example: when a queue item is added to my queue, I want to receive its metadata as a log message. The difference is that this kind of queue trigger starts the automation from inside the workflow, as opposed to Orchestrator queue triggers, which start the automation from outside the workflow.