Multiple MultiOutput Jobs

Overview

Your Outbound Messages are sent out by a single job, Perspectium MultiOutput Processing, which goes through your Outbound Messages table and sends the messages out queue by queue. This should cover most use cases.

However, if you are sending a high volume of messages to a single queue, or spreading your messages across a large number of queues, then you can take advantage of the following feature.

The core concept is the ability to pass an encoded query to the MultiOutput job to limit its scope. In other words, you can have multiple jobs, each responsible for its own unique subset of Outbound Messages.

Note: we do not recommend simply cloning the default Perspectium MultiOutput Processing job without making the following changes. Doing so can cause the same set of messages to be sent out multiple times.


First Steps


This feature was introduced in V3.22.0. If you are interested in using it, you are required to upgrade to this version.

We also recommend that you take a quick look at the Perspectium MultiOutput Processing job to familiarize yourself with it and contact support@perspectium.com to validate your work if necessary.


Strategies


There are two main strategies behind this process; the one you use will depend on your use case. The implementation details for each are covered in the following sections.

Bulk Processing on a Queue

The Bulk Processing on a Queue method is not recommended for dynamic shares, as messages will not arrive in order.

This refers to processing a high volume of messages on a specific queue. If you are Bulk Sharing millions of messages to a single queue, then this is the path you should lean towards.

The idea is to set up the Sharing to divide the work for a queue into smaller, distinct chunks, and to have multiple jobs each process a chunk. The primary way to do this is by querying off of the sys_id of the Outbound Message.

It is important to note that this queries off the sys_id of the Outbound Message itself, not the record that the Outbound Message represents. Additionally, while records are normally shared out in a way that preserves sequencing on a single queue, this method does not honor that sequencing. We would therefore recommend it only if you are Bulk Sharing a large set of data and are not concerned with the order in which records arrive.

Segregated Processing for a Group of Queues

This refers to creating multiple jobs that each handle certain queues. If you are Sharing data to a large number of queues, then this is the path you should lean towards.

The idea is to set up the Sharing to divide the work for your Outbound table into groupings based on the queue being written to. Since the queues are processed iteratively, this changes the processing from one job handling all queues to X jobs each handling their own subset of queues.

This will retain the sequencing of the data.


Implementations


Basic Steps

We are going to copy and modify the MultiOutput jobs. The default job runs every 30 seconds and does not pass in any encoded query. To set this up, create a copy of the default job, rename it appropriately, and pass in the encoded query.

This is the basic format:

Example MultiOutput Job
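As a sketch, the script inside such a scheduled job follows the same pattern as the full example later on this page; the encoded query below is a hypothetical placeholder you would replace with your own:

try {
    // Placeholder query - replace with the encoded query that scopes this job
    var encodedQuery = "u_target_queue=XXXX";

    var psp = new Perspectium();
    psp.processMultiOutput(encodedQuery);
}
catch(e) {
    var logger = new PerspectiumLogger();
    logger.logError("error = " + e, "Perspectium MultiOutput Processing - Copy");
}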

For a quick example, you can go to your Outbound table, create a filter, and choose the “Copy Query” option. This will give you the encoded query to use.

How to generate the encoded query

Bulk Processing

To create this, build a filter on your current Outbound Messages using a “sys_id starts with X” condition. Since sys_ids are hexadecimal, each one starts with one of 16 characters (0, 1, 2, 3, 4, 5, 6, 7, 8, 9, a, b, c, d, e, f). We want to make sure each of these values is captured exactly once across the jobs.

Here is an example where we break these into groups of 4.


Breaking the Job into 4 Groups

Example MultiOutput Job for sys_id grouping

You would then create the other 3 jobs similarly to the one above. You may also want to limit a job to a distinct queue by passing a target queue into the encoded query (see the example query after the script below).

Here is an example of the script:

try {
    // Limit this job to Outbound Messages whose sys_id starts with 0-3
    var encodedQuery = "sys_idSTARTSWITH0^ORsys_idSTARTSWITH1^ORsys_idSTARTSWITH2^ORsys_idSTARTSWITH3";
 
    var psp = new Perspectium();
    psp.processMultiOutput(encodedQuery);
}
catch(e) {
    var logger = new PerspectiumLogger();
    logger.logError("error = " + e, "Perspectium MultiOutput Processing");
}
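If you also want to limit one of these jobs to a distinct queue, as suggested above, the encoded query can combine both conditions. In this sketch, XXXX is a placeholder for the sys_id of your target queue; in ServiceNow encoded queries the ^OR conditions group with the condition that precedes them, so this should read as “target queue is XXXX AND sys_id starts with 0, 1, 2, or 3” (verify the grouping in the condition builder):

var encodedQuery = "u_target_queue=XXXX^sys_idSTARTSWITH0^ORsys_idSTARTSWITH1^ORsys_idSTARTSWITH2^ORsys_idSTARTSWITH3";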

Queue Grouping

To create this, build a filter on your current Outbound Messages for Target Queue is queue 1 OR Target Queue is queue 2 OR Target Queue is queue 3, and copy the encoded query.

You will then pass this encoded query into the job.

It should resemble the query below, containing the sys_ids of the selected target queues:

u_target_queue=XXXX^ORu_target_queue=YYYY^ORu_target_queue=ZZZZ
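If you would rather not hard-code the queue sys_ids, a job could look them up by name and build the encoded query itself. This is only a sketch: the u_psp_queues table and u_name field are assumptions, so verify the queue table and column names on your instance first.

try {
    // Assumed queue table/field names - confirm these on your instance
    var queueNames = "queue 1,queue 2,queue 3";
    var conditions = [];

    var gr = new GlideRecord("u_psp_queues");
    gr.addQuery("u_name", "IN", queueNames);
    gr.query();
    while (gr.next()) {
        // Build one "u_target_queue=<sys_id>" condition per queue
        conditions.push("u_target_queue=" + gr.getUniqueValue());
    }

    if (conditions.length > 0) {
        var psp = new Perspectium();
        psp.processMultiOutput(conditions.join("^OR"));
    }
}
catch(e) {
    var logger = new PerspectiumLogger();
    logger.logError("error = " + e, "Perspectium MultiOutput Processing - Queue Group 1");
}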

Default Queue

When processing this information, it is also important to note the role the default queue plays (it receives messages without a target queue). Messages here are primarily reporting data (counts, heartbeats, etc.).

If you are creating jobs for each target queue, we recommend you also have a job for just the default queue. You can create a job with the following encoded query to account for these.

u_target_queueISEMPTY
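The script for this job follows the same pattern as the others; a minimal sketch:

var psp = new Perspectium();
psp.processMultiOutput("u_target_queueISEMPTY"); // only messages with no target queue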

Warnings


This is an advanced capability of the Replicator, so we recommend running it through your test environments first.

It is important to understand that the purpose of this is to send the messages with multiple jobs without any overlap in the data being transmitted.

Original Job

The original job, Perspectium MultiOutput Processing, goes through every queue without any encoded query. If you go down this path, you should either modify or deactivate this job to make sure your jobs are each processing their own subset of data.

You may also want to place an “X” at the start of the job's name, so it reads XPerspectium MultiOutput Processing, to prevent it from being automatically restarted by “Start All Jobs”. You will also want to make sure you maintain these jobs after applying Perspectium Update Set updates.

Dot Walking

From an optimization standpoint, we also do not recommend “dot-walking” in the queries for this, i.e. do not pass in an encoded query like:

// Dot-walking from the Outbound Message into the queue's name field
var encodedQuery = "u_target_queue.u_nameLIKEdev18450";
var psp = new Perspectium();
psp.processMultiOutput(encodedQuery);

This will work; however, at higher volumes it will not be as efficient as passing in the sys_id of the target queue directly.
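For example, assuming the dev18450 queue's sys_id has been looked up beforehand (XXXX below is a hypothetical placeholder for it), the preferred form is:

// Query the reference field by sys_id directly instead of dot-walking
var encodedQuery = "u_target_queue=XXXX";
var psp = new Perspectium();
psp.processMultiOutput(encodedQuery);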

Overloaded Scheduler

A ServiceNow production instance will generally have 4 nodes, each of which can execute 8 jobs, for a total of 32 available workers. A Bulk Share is a job, and a single MultiOutput processing run is a job.

So you can create a job per queue; however, it is important to take into account the total available workers on your instance. For example, you should not create 16 individual MultiOutput processing jobs on a 4-node instance, because they could take 16 of the 32 available workers.

This feature allows you to ramp up your processing; just take the instance's environment into account so the jobs do not hog all the processing capacity.
