Helpful Tips

Here you will find some tips and tricks to help you use the Perspectium Replicator application in your instance. For any questions, please contact support@perspectium.com.

Duplicate entries in a table record's activity log

If you have a table such as incident that has an activity log showing sys_journal_field entries and you're seeing duplicate entries, try turning off “Run business rules” and “Refresh history set” on the incident table Replicator subscribe configuration.

Since you are already doing “Refresh history set” on the sys_journal_field table itself, running refresh history set on the incident table as well will cause the history set to refresh twice. Because of the way ServiceNow handles the timing of history set refreshes, this can cause two entries to occur.

Running business rules can also cause duplicate entries, since a business rule may lead the system to believe two different events are occurring and thus create two entries.

However, turning off “Run business rules” will disable the table's events business rule (for example, the “incident events” business rule for the incident table) that runs to create events in the event log.

If you want these events to occur, you can execute them in the Before subscribe script of the table's subscribe configuration, using the ServiceNow function that actually fires the event (gs.eventQueue()) together with the gr_before and repl_gr GlideRecord objects available in the Before subscribe script.

For example, to have the “incident.assigned.to.group” event fired when the assignment group is changed, you would have the following script in Before subscribe script:

if (gr_before.assignment_group != repl_gr.assignment_group) {
  // gr_before holds the record's values from before this update
  gs.eventQueue("incident.assigned.to.group", current, current.assignment_group, gr_before.assignment_group.getDisplayValue());
}

Note that for fields that are not stored in the table itself and are stored in the sys_journal_field table, such as the comments field in the incident table, you would want to have the script in the Before subscribe script of the sys_journal_field configuration as follows:

if (repl_gr.element == "comments") {
    // Verify the journal entry belongs to a record in the incident table
    var igr = new GlideRecord("incident");
    if (igr.get(repl_gr.element_id)) {
        gs.eventQueue("incident.commented", igr, gs.getUserID(), gs.getUserName());
    }
}

In this case, the incoming sys_journal_field record is checked to verify that it is a comments record and that it is for the incident table, by checking whether the sys_id referenced in the record exists in the incident table. If it does, the “incident.commented” event is fired using the incident record itself (igr) to ensure the event is properly created.

Replicating Class Name Changes between ServiceNow Instances

v3.2.9 patch1
For replication between ServiceNow instances, changes to a record's class name are supported, so subscribing instances will also update the record's class name. This is most useful with configuration items (cmdb_ci), where Discovery runs and changes the class name of configuration items (because they were created with the wrong class name) and you want these changes to replicate properly.

For example, say you have a network gear (cmdb_ci_netgear) item and you change its class to a different class such as IP Switch (cmdb_ci_ip_switch). Replicator will send out a cmdb_ci_ip_switch record, and the subscribing instance will notice the change in class and update the record appropriately.

The subscribing instance will need to be subscribed to all tables that the class name can be changed to. In the above example, if you were only subscribed to the cmdb_ci_netgear table, Replicator would “skip” the cmdb_ci_ip_switch update.

It is recommended you subscribe to global or the base table (such as cmdb_ci) in order for class name changes to replicate properly.

Triggering Replication from an Import Set or Script

There are occasions when Dynamic Share on a table is not triggered because the table modification is performed by a script or by an Import Set Transform Map in a way that stops subsequent business rules from running. In the Transform Map case, the Run Business Rules checkbox may have been unselected; in the script case, the setWorkflow(false) API may have been called.

In either case, you can trigger Replicator directly by inserting the following code snippet at the appropriate position in your script. Note that a Dynamic Share configuration for the table will have to be created.

var psp = new PerspectiumReplicator();
psp.shareRecord(GR, "table_name", "operation");
  • GR - the current GlideRecord
  • “table_name” - the table name of the GlideRecord
  • “operation” - the replication mode to trigger; options are “insert”, “update”, or “delete”
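
For example, a minimal sketch of an Import Set Transform Map onAfter script that triggers replication for each transformed record (assuming a Dynamic Share already exists for the incident table; the table name and operation here are illustrative):

var psp = new PerspectiumReplicator();
// "target" is the transformed GlideRecord provided by the transform context;
// "incident" and "update" are illustrative values for this sketch
psp.shareRecord(target, "incident", "update");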

Dot walking field values to be replicated

On occasion there is a need to access a field's value using ServiceNow's Dot-Walking feature (http://wiki.servicenow.com/index.php?title=Dot-Walking).

The following example leverages this feature by sending the display value of a record's sys_domain field (the domain's name), so it can be subscribed into another instance of ServiceNow.

Place the following line in the Before Share Script of your Bulk Shares or Dynamic Shares:

current.sys_domain = current.sys_domain.name;

This will modify the outgoing record's payload to have the domain's name in the sys_domain column.

If you are subscribing this into another ServiceNow instance, you would have to handle it either with a Transform Map or with a similar Before Subscribe Script that mirrors this by looking up the domain corresponding to the name.

var domainVal = repl_gr.sys_domain;
if (domainVal != null) {
	// Look up the domain record whose name matches the incoming value
	// (domains are stored in sys_user_group in this example)
	var dGR = new GlideRecord('sys_user_group');
	dGR.addQuery('name', domainVal);
	dGR.queryNoDomain();
	if (dGR.next()) {
		current.sys_domain = dGR.sys_id;
	}
}

Adjusting date time for local time zones before replication

Date/Time fields in ServiceNow are stored in the database in UTC timezone. They are adjusted for the individual user’s local timezone as defined by their profile at runtime in the UI. This allows anyone viewing the data to see date/time values in their local timezone to avoid confusion. When we replicate that data we just replicate it as is in UTC, and write it to the target without doing any kind of timezone offset since there isn’t one in the context of a machine integration. Typically reporting solutions can account for this and adjust based on your end user’s needs.

This is fairly standard across most enterprise applications.

If you want to explicitly convert all data to a specific timezone for replication, you can use a Before Share Script in bulk shares and dynamic shares to do this. We don’t recommend it, as it can cause issues if the reporting or viewing technology being used then adjusts the values again in its UI. You also need to consider the impact of Daylight Saving Time: something converted and replicated during Standard Time could be off by an hour compared to something converted during Daylight Saving Time.

The simple example script below shows converting sys_updated_on and opened_at to US/Eastern time during replication.

// Date/Time variables you want to update
var timesToUpdate = ["opened_at", "sys_updated_on"];
var curTimeZone = "America/New_York";

// Get the specified timezone
var tz = Packages.java.util.TimeZone.getTimeZone(curTimeZone);

// Edit specified variables with the offset
var time;
var timeZoneOffset;
for(var t in timesToUpdate){
	time = new GlideDateTime(current.getValue(timesToUpdate[t]));
	time.setTZ(tz);
	timeZoneOffset = time.getTZOffset();
	time.setNumericValue(time.getNumericValue() + timeZoneOffset);
	current.setValue(timesToUpdate[t], time);
}

You would place this in the Before Share Script section of any shares where you need it, and specify the fields you want to convert. See the documentation on Before Share Script for more information.

Ignore or Cancel Share

v3.6.0

In the Before Share Script of a Dynamic or Bulk share configuration, you can set the global variable ignore to the boolean value true to prevent the current record from being shared.

For example, the following script ignores the Dynamic sharing of an incident record when the priority field value is 1:

if (current.priority == 1) {
    ignore = true;
}

As another example, the following script will ignore sharing the record with a number value TKT0010001 during Bulk sharing of all ticket records:

if (current.number == "TKT0010001") {
    ignore = true;
}

Ignoring a share if only one field has changed

For cases where a table's records are updated frequently but the data doesn't actually change (such as a table that gets updated every single day by another integration or ServiceNow Discovery), you may not want the table's dynamic share (with “Interactive only” not selected) to run and share out any records.

For example, say the field that gets updated every day is u_last_discovered_date. The rest of the fields don't usually change, and you don't want to share these records out again since the subscribing side (such as a database) doesn't really need the latest u_last_discovered_date.

In these cases, you can run the following script to ignore sharing the record:

// List the fields changed in this update by querying the audit log (sys_audit)
// for entries recorded at the record's current checkpoint (sys_mod_count)
function listChangedFields(obj){
	var flds = [];
	var aud = new GlideRecord("sys_audit");
	aud.addQuery("documentkey", obj.sys_id);
	aud.addQuery("record_checkpoint", obj.sys_mod_count);
	aud.query();
	while (aud.next()){
		flds.push(aud.getValue("fieldname"));
	}
	return flds;
}

var changedFields = listChangedFields(current);
var ignoreFields = ["priority", "urgency"]; // If any changed field falls outside this list, the update will be sent

ignore = true;

var util = new ArrayUtil();
for (var i=0; i<changedFields.length; i++){
	if (!util.contains(ignoreFields, changedFields[i])) ignore = false;
}

Ignoring a share with multiple field changes

In v3.22.0, users can activate the checkbox “Select column updates to ignore” to ignore sharing records when only certain fields change. To begin, click the checkbox to see the related list, which will allow you to select the fields.

Next, select the fields that you want to be ignored when updated. For example, if the Description and Name fields are selected, the record will be ignored if those are the ONLY fields that have been updated; if any other fields have also been updated, the record will not be ignored.

Sharing on specific field changes

Users can also activate the checkbox “Select column updates to share on” to share a record only when one of any number of chosen fields is updated. To begin, click the checkbox to see the related list, which will allow you to select the fields.

Note that clicking either “Select column updates to share on” or “Select column updates to ignore” will hide the other checkbox. Only one option can be selected.

Next, select the fields that you want to trigger a share. For example, if the Assigned To and Description fields are selected, the record will ONLY be shared if one of those fields has been updated; if neither has been updated, the record will be ignored.

Sharing Out HTML Fields

For tables that have HTML fields, such as the Knowledge (kb_knowledge) table and its text field, use the encrypted_multibyte encryption mode to ensure the HTML fields are sent out properly.

Otherwise, HTML fields may be sent with extraneous characters in place of spaces.

By default, ServiceNow instances and the Replicator agent support the various encryption modes out of the box so there is no additional configuration required on the subscribing side.

Multiple MultiOutput Jobs

Overview

Your Outbound Messages are sent out by a single job, Perspectium MultiOutput Processing, which goes through your Outbound Messages table and sends the messages out per queue. This should cover most cases.

However, if you are sending a high volume of messages to a single queue, or spreading your messages across a large number of queues, you can take advantage of the following feature.

The core concept is the ability to pass an encoded query to the MultiOutput job to limit its scope. In other words, you can have multiple jobs, each responsible for its own unique subset of Outbound Messages.

Note: we do not recommend simply cloning the default Perspectium MultiOutput Processing job without making the following changes. Doing so can cause you to send the same set of messages out multiple times.


First Steps


This feature was introduced in v3.22.0. If you are interested in this feature, you are required to upgrade to this version.

We also recommend that you take a quick look at the Perspectium MultiOutput Processing job to familiarize yourself with it and contact support@perspectium.com to validate your work if necessary.


Strategies


There are two main strategies behind this process, and the one you use will depend on your use case. The implementation details for each are covered in the following section.

Bulk Processing on a Queue

This refers to processing a high volume of messages on a specific queue. If you are Bulk Sharing millions of messages to a single queue, this is the path you should lean towards.

The idea is to set up the Sharing to divide the work for a queue into smaller distinct chunks and have multiple jobs each process a chunk. The primary way to do this is by querying off the sys_id of the Outbound Message.

It is important to note that this queries off the sys_id of the outbound message itself, not of the record that the Outbound Message represents. Additionally, while records are normally shared out in a way that preserves sequencing on a single queue, this method does not honor that sequencing. We would therefore recommend it only if you are Bulk Sharing a large set of data and are not concerned about the order in which records arrive.

Segregated Processing for a Group of Queues

This refers to creating multiple jobs that each handle certain queues. If you are Sharing data to a large number of queues, this is the path you should lean towards.

The idea is to set up the Sharing to divide the work for your Outbound Table into groupings based on the queue the messages are writing to. Since the queues are processed iteratively, this changes the setup from one job processing all queues to X jobs each processing their own subset of queues.

This will retain the sequencing of the data.


Implementations


Basic Steps

We are going to copy and modify the MultiOutput jobs. The default job runs every 30 seconds and does not pass in any encoded query. To proceed, create a copy of the default job, rename it appropriately, and pass in the encoded query.

This is the basic format: a copy of the default job whose script passes an encoded query into processMultiOutput (see the example MultiOutput job script under “Bulk Processing” below).

To generate the encoded query, go to your Outbound Table, create a filter, and choose the “Copy Query” option. This will give you the encoded query to use.

Bulk Processing

To create this, build a filter on your current outbound messages using the “sys_id starts with” condition. Because sys_ids are hexadecimal, they start with one of 16 characters (0, 1, 2, 3, 4, 5, 6, 7, 8, 9, a, b, c, d, e, f), and we want to make sure we capture each of these distinctly.

Here is an example where we break these into 4 groups of 4 characters each; the example script below covers the first group.

You would then create the other three jobs similarly; a sketch of their encoded queries appears after the example script. You may also want to limit each job to a distinct queue by passing a target queue into the encoded query.

Here is an example of the script:

try {
    var encodedQuery="sys_idSTARTSWITH0^ORsys_idSTARTSWITH1^ORsys_idSTARTSWITH2^ORsys_idSTARTSWITH3";
 
    var psp = new Perspectium();
    psp.processMultiOutput(encodedQuery);
}
catch(e) {
    var logger = new PerspectiumLogger();
    logger.logError("error = " + e, "Perspectium MultiOutput Processing");
}
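
For reference, here is a sketch of the encoded queries for the remaining three groups; each job uses the same script as above with its own query:

var encodedQuery = "sys_idSTARTSWITH4^ORsys_idSTARTSWITH5^ORsys_idSTARTSWITH6^ORsys_idSTARTSWITH7"; // second job
var encodedQuery = "sys_idSTARTSWITH8^ORsys_idSTARTSWITH9^ORsys_idSTARTSWITHa^ORsys_idSTARTSWITHb"; // third job
var encodedQuery = "sys_idSTARTSWITHc^ORsys_idSTARTSWITHd^ORsys_idSTARTSWITHe^ORsys_idSTARTSWITHf"; // fourth job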

Queue Grouping

To create this, you can build a filter on your current outbound messages for “Target Queue is queue 1 OR Target Queue is queue 2 OR Target Queue is queue 3” and copy the encoded query.

You will then pass this encoded query into the job.

It should resemble the query below containing the sys_id of the target queues selected:

u_target_queue=XXXX^ORu_target_queue=YYYY^ORu_target_queue=ZZZZ
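
Plugged into a copied job, this might look like the following sketch (XXXX, YYYY, and ZZZZ remain placeholders for the target queue sys_ids):

try {
    // XXXX, YYYY, ZZZZ are placeholders for the sys_ids of the selected target queues
    var encodedQuery = "u_target_queue=XXXX^ORu_target_queue=YYYY^ORu_target_queue=ZZZZ";

    var psp = new Perspectium();
    psp.processMultiOutput(encodedQuery);
}
catch(e) {
    var logger = new PerspectiumLogger();
    logger.logError("error = " + e, "Perspectium MultiOutput Processing");
}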

Default Queue

When processing this information, it is also important to note the role the default queue plays (messages without a target queue). Messages here are primarily reporting data (counts, heartbeats, etc.).

If you are creating jobs for each target queue, we recommend you also have a job for just the default queue. You can create a job with the following encoded query to account for these.

u_target_queueISEMPTY
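
A job dedicated to the default queue would follow the same pattern, for example:

try {
    // Process only messages that have no target queue (the default queue)
    var encodedQuery = "u_target_queueISEMPTY";

    var psp = new Perspectium();
    psp.processMultiOutput(encodedQuery);
}
catch(e) {
    var logger = new PerspectiumLogger();
    logger.logError("error = " + e, "Perspectium MultiOutput Processing");
}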

Warnings


This is an advanced capability of the Replicator, so we recommend running this through your test environments first.

It is important to know that the purpose of this is to send the messages with multiple jobs without any overlap in data transit.

Original Job

The original job, Perspectium MultiOutput Processing, goes through each queue without any encoded query. If you go down this path, you should either modify or deactivate this job to make sure your jobs each process their own subset of data.

You may also want to place an “X” at the start of the job's name, making it XPerspectium MultiOutput Processing, so it is not automatically restarted by “Start All Jobs”. You will also want to make sure you maintain these jobs after applying Perspectium Update Set updates.

Dot Walking

From an optimization standpoint, we also do not recommend “dot-walking” in the queries for this, i.e. do not pass in an encoded query like:

var encodedQuery = "u_target_queue.u_nameLIKEdev18450";
var psp = new Perspectium();
psp.processMultiOutput(encodedQuery);

This will work; however, at higher volumes it will not be as efficient as directly passing in the sys_id of the target queue.
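
Instead, a sketch of the more efficient form queries the target queue reference directly (the sys_id value here is a placeholder):

var encodedQuery = "u_target_queue=XXXX"; // placeholder: sys_id of the target queue record
var psp = new Perspectium();
psp.processMultiOutput(encodedQuery);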

Overloaded Scheduler

A ServiceNow production instance will generally have 4 nodes, each of which can execute 8 jobs, for a total of 32 available workers. A Bulk Share is a job, and a single MultiOutput processing job is a job.

So you can create a job per queue; however, it is important to take into account the total available workers on your instance. For example, you should not create 16 individual MultiOutput processing jobs on a 4-node instance, because they could take 16 of the 32 available workers.

This feature allows you to ramp up your processing; just take the instance's environment into account so the jobs do not hog the available workers.

ServiceNow Application Tour

If you want to interactively walk through the major components of the Replicator, use the ServiceNow Tour feature.

To take a tour of a feature of the replicator, click on the question-mark icon in the top-right corner.


A sidebar should pop up. Click on “Take a Tour” in the bottom-right corner to take the tour.


To navigate through the tour click the “next” button in the bubble. To end the tour click the “x” in the top-right corner of the bubble.

The following pages have the tour feature: bulk share list, bulk share, dynamic share list, dynamic share, inbound messages list, log message list, outbound message list, performance stats, queues list, scheduled bulk share list, scheduled bulk share, script include list, script include, subscribe list, subscribe, table compare, table map list, table map.
