
Helpful Tips

Dubnium

Here you will find tips and tricks to help you use the Perspectium Replicator application in your instance. For any questions, please contact support@perspectium.com.

Duplicate entries in a table record's activity log

If you have a table such as incident that has an activity log showing sys_journal_field entries and you're seeing duplicate entries, try turning off “Run business rules” and “Refresh history set” on the incident table Replicator subscribe configuration.

Since you are already running “Refresh history set” on the sys_journal_field table itself, running it on the incident table as well causes the history set to refresh twice, and because of the way ServiceNow handles the timing of history set refreshes, this can produce two entries.

Running business rules can also create duplicate entries, since a business rule may cause the system to believe two different events are occurring, resulting in two entries.

However, turning off “Run business rules” will also disable the table's events business rule (for example, the “incident events” business rule on the incident table) that creates events in the event log.

If you still want these events to occur, you can execute them in the Before subscribe script of the table's subscribe configuration using gs.eventQueue(), the ServiceNow function that actually fires the event, along with the gr_before and repl_gr GlideRecord objects available in the Before subscribe script.

For example, to have the “incident.assigned.to.group” event fired when the assignment group is changed, you would have the following script in Before subscribe script:

if (gr_before.assignment_group != repl_gr.assignment_group) {
  gs.eventQueue("incident.assigned.to.group", repl_gr, repl_gr.assignment_group, gr_before.assignment_group.getDisplayValue());
}

Note that for fields that are not stored in the table itself and are stored in the sys_journal_field table, such as the comments field in the incident table, you would want to have the script in the Before subscribe script of the sys_journal_field configuration as follows:

if (repl_gr.element == "comments") {
    var igr = new GlideRecord("incident");
    // Verify the journal entry belongs to a record in the incident table
    if (igr.get(repl_gr.element_id)) {
        gs.eventQueue("incident.commented", igr, gs.getUserID(), gs.getUserName());
    }
}

In this case, the incoming sys_journal_field record is checked to verify that it is a comments record and that it belongs to the incident table (by checking whether the sys_id it references exists in the incident table). If it does, the “incident.commented” event is fired using the incident record itself (igr) to ensure the event is created properly.

Replicating Class Name Changes between ServiceNow Instances

v3.2.9 patch1
For replicating between ServiceNow instances, changes to the class name are supported, so subscribing instances will also update the record's class name. This is most useful with configuration items (cmdb_ci), where Discovery may change a configuration item's class (because the item was created with the wrong class name) and you want these changes to replicate properly.

For example, if you have a Network Gear (cmdb_ci_netgear) item and you change its class to a different class such as IP Switch (cmdb_ci_ip_switch), Replicator will send out a cmdb_ci_ip_switch record, and the subscribing instance will notice the change in class and update the record appropriately.

The subscribing instance needs to be subscribed to every table that the class name can be changed to. In the example above, it must be subscribed to the cmdb_ci_ip_switch table; if it were only subscribed to the cmdb_ci_netgear table, Replicator would “skip” the cmdb_ci_ip_switch update.

It is recommended you subscribe to global or the base table (such as cmdb_ci) in order for class name changes to replicate properly.

Triggering Replication from an Import Set or Script

There are occasions when a Dynamic Share on a table is not triggered because the table modification is performed by a script or by an Import Set Transform Map that stops subsequent business rules from running. In the Transform Map case, the Run Business Rules checkbox may have been unselected; in the script case, the setWorkflow(false) API may have been called.

In either case, you can trigger Replicator directly by inserting the following code snippet at the right position in your script. Note that a Dynamic Share configuration for the table must already exist.

var psp = new PerspectiumReplicator();
psp.shareRecord(GR, "table_name", "operation");
  • GR - the current GlideRecord
  • “table_name” - the table name of the GlideRecord
  • “operation” - the replication mode to trigger; options are “insert”, “update”, or “delete”
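For example, if an Import Set Transform Map has Run Business Rules unchecked, an onAfter transform script could trigger replication for each transformed record. This is a sketch that assumes a Dynamic Share exists for the incident table; `action` and `target` are the standard transform script variables:

```javascript
// onAfter transform script: business rules are skipped here, so trigger
// Replicator manually for the record that was just transformed.
// (Assumes a Dynamic Share is configured for the incident table.)
if (action == "insert" || action == "update") {
    var psp = new PerspectiumReplicator();
    psp.shareRecord(target, "incident", String(action));
}
```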

Dot walking field values to be replicated

On occasion there is a need to access a field's value using ServiceNow's Dot-Walking feature (http://wiki.servicenow.com/index.php?title=Dot-Walking).

The following example leverages this feature by sending the display value of a record's sys_domain name so that a subscribing ServiceNow instance can be updated with it.

Place the following line in the Before Share Script of your Bulk Shares or Dynamic Shares:

current.sys_domain = current.sys_domain.name;

This will modify the outgoing record's payload to have the domain's name in the sys_domain column.

If you are Subscribing this into another ServiceNow instance you would have to handle it either with a Transform Map, or, you could set up a similar Before Subscribe Script to mirror this by grabbing the Domain corresponding to this name.

var domainVal = repl_gr.sys_domain;
if (domainVal != null) {
	var dGR = new GlideRecord('sys_user_group');
	dGR.addQuery('name', domainVal);
	dGR.queryNoDomain();
	if (dGR.next()) {
		current.sys_domain = dGR.sys_id;
	}
}

Adjusting date time for local time zones before replication

Date/Time fields in ServiceNow are stored in the database in UTC timezone. They are adjusted for the individual user’s local timezone as defined by their profile at runtime in the UI. This allows anyone viewing the data to see date/time values in their local timezone to avoid confusion. When we replicate that data we just replicate it as is in UTC, and write it to the target without doing any kind of timezone offset since there isn’t one in the context of a machine integration. Typically reporting solutions can account for this and adjust based on your end user’s needs.

This is fairly standard across most enterprise applications.

If you want to explicitly convert all data to a specific timezone for replication, you can use a Before Share Script in Bulk Shares and Dynamic Shares to do this. We don't recommend it, as it can cause issues if the reporting or viewing technology being used then adjusts it again in its UI. You also need to consider the impact of Daylight Saving Time: something converted and replicated during Standard Time could be off by an hour compared to something converted during Daylight Saving Time.

The simple example script below converts sys_updated_on and opened_at to US/Eastern time during replication:

// Date/Time variables you want to update
var timesToUpdate = ["opened_at", "sys_updated_on"];
var curTimeZone = "America/New_York";

// Get the specified timezone
var tz = Packages.java.util.TimeZone.getTimeZone(curTimeZone);

// Edit specified variables with the offset
var time;
var timeZoneOffset;
for(var t in timesToUpdate){
	time = new GlideDateTime(current.getValue(timesToUpdate[t]));
	time.setTZ(tz);
	timeZoneOffset = time.getTZOffset();
	time.setNumericValue(time.getNumericValue() + timeZoneOffset);
	current.setValue(timesToUpdate[t], time);
}

You would place this in the Before Share Script section of any shares where you need it, and list the fields you want to convert.
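The offset arithmetic the script performs (take the stored UTC value, look up the target zone's offset at that instant, and add it) can be illustrated outside ServiceNow in plain JavaScript using the built-in Intl API. This is a runnable sketch of the same idea, not Perspectium code; the function names are ours:

```javascript
// Compute a time zone's UTC offset (in ms) at a given instant, then
// shift a UTC timestamp by that offset -- mirroring what the
// Before Share script does with GlideDateTime.getTZOffset().
function tzOffsetMs(date, timeZone) {
    var dtf = new Intl.DateTimeFormat("en-US", {
        timeZone: timeZone, hourCycle: "h23",
        year: "numeric", month: "2-digit", day: "2-digit",
        hour: "2-digit", minute: "2-digit", second: "2-digit"
    });
    var parts = {};
    dtf.formatToParts(date).forEach(function (p) { parts[p.type] = p.value; });
    // Reinterpret the zone's wall-clock time as if it were UTC;
    // the difference from the real instant is the zone's offset.
    var asUTC = Date.UTC(parts.year, parts.month - 1, parts.day,
                         parts.hour, parts.minute, parts.second);
    return asUTC - date.getTime();
}

function toZonedUtcValue(utcMs, timeZone) {
    return utcMs + tzOffsetMs(new Date(utcMs), timeZone);
}

// January is Standard Time in US/Eastern: offset is -5 hours.
var winter = Date.UTC(2018, 0, 15, 12, 0, 0);
console.log(tzOffsetMs(new Date(winter), "America/New_York")); // -18000000
// July is Daylight Saving Time: offset is -4 hours.
var summer = Date.UTC(2018, 6, 15, 12, 0, 0);
console.log(tzOffsetMs(new Date(summer), "America/New_York")); // -14400000
```

The two console.log lines demonstrate the Daylight Saving Time concern described above: the same conversion yields values an hour apart depending on the season.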

Ignore or Cancel Share

There are three ways you can control a Dynamic Share so it fires only on certain field updates:

  • Manually in a Before Share Script
  • Only trigger when specified columns have changed
  • Only trigger when columns other than the specified columns have changed

Ignoring or canceling a share in the before share script

v3.6.0

In the Before Share Script of a Dynamic or Bulk share configuration, you can set the global variable ignore to the boolean value true to prevent the current record from being shared.

For example, the following script ignores the Dynamic sharing of an incident record when the priority field value is 1:

if (current.priority == 1) {
    ignore = true;
}

As another example, the following script ignores sharing of the record whose number value is TKT0010001 during Bulk sharing of all ticket records:

if (current.number == "TKT0010001") {
    ignore = true;
}

Ignoring a share if only one field has changed

For cases where a table's records are updated frequently but the data doesn't actually change (such as a table that gets touched every day by another integration or by ServiceNow Discovery), you may not want the table's Dynamic Share (with “Interactive only” not selected) to run and share out those records.

For example, say the field that gets updated every day is u_last_discovered_date. The rest of the fields don't usually change, and you don't want to share these records out again since the subscribing side (such as a database) doesn't really need the latest u_last_discovered_date.

In these cases, you can run the following script to ignore sharing the record:

function listChangedFields(obj){
	var flds = [];
	var aud = new GlideRecord("sys_audit");
	aud.addQuery("documentkey", obj.sys_id);
	aud.addQuery("record_checkpoint", obj.sys_mod_count);
	aud.query();
	while (aud.next()){
		flds.push(aud.getValue("fieldname"));
	}
	return flds;
}

var changedFields = listChangedFields(current);
var ignoreFields = ["priority", "urgency"]; // If any changed field falls outside that list, the update will be sent

ignore = true;

var util = new ArrayUtil();
for (var i=0; i<changedFields.length; i++){
	if (!util.contains(ignoreFields, changedFields[i])) ignore = false;
}
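The decision the loop above makes can be isolated into a small predicate: skip the share only when every changed field is on the ignore list. This is a plain-JavaScript sketch of the same logic, runnable outside ServiceNow; the function name is ours:

```javascript
// Returns true when every changed field is in the ignore list,
// i.e. nothing of interest changed and the share can be skipped.
function shouldIgnoreShare(changedFields, ignoreFields) {
    if (changedFields.length === 0) return true;
    return changedFields.every(function (f) {
        return ignoreFields.indexOf(f) !== -1;
    });
}

// Only the discovery timestamp changed: skip the share.
console.log(shouldIgnoreShare(
    ["u_last_discovered_date"], ["u_last_discovered_date"])); // true
// A real field changed alongside it: send the update.
console.log(shouldIgnoreShare(
    ["u_last_discovered_date", "priority"], ["u_last_discovered_date"])); // false
```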

Ignoring a share with multiple field changes

In v3.22.0 users can activate the checkbox “Select column updates to ignore” to ignore sharing records with multiple field changes. To begin, click the checkbox to see the related list which will allow you to select the fields.

Next you can select the fields that you want to be ignored when updated. For example, if the number and description fields are selected, the record is ignored only when those are the ONLY fields that have been updated; if any other field has also been updated, the record will not be ignored.

Sharing on specific field changes

In Bismuth users can activate the checkbox “Select column updates to share on” to share a record only when one of any number of chosen fields are updated. To begin, click the checkbox to see the related list which will allow you to select the fields.

Note that clicking either “Select column updates to share on” or “Select column updates to ignore” will hide the other checkbox. Only one option can be selected.

Next you can select the fields that you want to trigger a share. For example, if the Assigned To and Description fields are selected, the record is shared ONLY when one of those fields has been updated; if neither has been updated, the record is ignored.

Sharing Out HTML Fields

For tables that have HTML fields, such as the Knowledge (kb_knowledge) table and its text field, use the encrypted_multibyte encryption mode to ensure the HTML fields are sent out properly.

Otherwise, HTML fields may be sent with extraneous characters in place of spaces.

By default, ServiceNow instances and the Replicator agent support the various encryption modes out of the box so there is no additional configuration required on the subscribing side.

Multiple MultiOutput Jobs

If data is not leaving your instance as fast as you would like, it is important to understand all the pieces first:

  • Is the count of my Outbound Messages [psp_out_message] consistently very high (250,000+ messages in the Ready state)?
  • Is my property for maximum bytes per post too low (it should be in the 5 MB to 10 MB range)?
  • Is my property for maximum records per post too low (it should be around 2000-4000 records)?
  • How often is my Perspectium MultiOutput Processing job running (the default is 30 seconds)?

These are the typical things we look at first for optimization before adding multiple MultiOutput jobs. If these are all set as expected and throughput is still insufficient, you can read about multiple jobs here.

ServiceNow Application Tour

If you want to interactively walk through major components of the replicator, use the ServiceNow Tour feature.

To take a tour of a feature of the replicator, click on the question-mark icon in the top-right corner.


A sidebar should pop up. Click on “Take a Tour” in the bottom-right corner to take the tour.


To navigate through the tour click the “next” button in the bubble. To end the tour click the “x” in the top-right corner of the bubble.

The following pages have the tour feature: bulk share list, bulk share, dynamic share list, dynamic share, inbound messages list, log message list, outbound message list, performance stats, queues list, scheduled bulk share list, scheduled bulk share, script include list, script include, subscribe list, subscribe, table compare, table map list, table map.

replicator_helpful_tips.txt · Last modified: 2018/10/24 15:51 by timothy.pike