
Replicator

After installing the Perspectium Update Set, there will be a section in the Perspectium application for Replicator as described below.

Properties

The Properties page for the Replicator application can be found in ServiceNow by navigating to Perspectium > Replicator > Properties.

Notable Replicator properties found on this page include:

Property: Encryption key for encrypting shared content from Replicator (must be at least 24 characters long for AES-128 encryption, or at least 32 characters long for AES-256 encryption).
Example value: The cow jumped over the moon (AES-128)

Property: Decryption key used to decrypt replicated content that is subscribed (must be at least 24 characters long for AES-128 encryption, or at least 32 characters long for AES-256 encryption).
Example value: The cow jumped over the moon and the sun (AES-256)
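As a quick sanity check, the length rule above can be expressed in a few lines of plain JavaScript (run outside ServiceNow; the function name is illustrative, not part of the update set):

```javascript
// Classify an encryption key by the length thresholds stated above:
// 32+ characters qualifies for AES-256, 24+ for AES-128.
function keyStrength(key) {
  if (key.length >= 32) return 'AES-256';
  if (key.length >= 24) return 'AES-128';
  return 'too short';
}

keyStrength('The cow jumped over the moon');             // 28 chars: 'AES-128'
keyStrength('The cow jumped over the moon and the sun'); // 40 chars: 'AES-256'
```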
2015/02/10 22:02 · David Loo

Use Cases

The following use cases show examples of how to configure Replicator in the ServiceNow application.

Replicating between ServiceNow Instances

This example shows how to keep ServiceNow sub-prods up to date in real time with production. First, install the Perspectium update set on both the source and target instances of ServiceNow. After acquiring and downloading the update set, follow the instructions here.

Subscribing

Configure the consumer instance to subscribe to the data being shared.

On the target or sub-production (consumer) instance, configure for Subscribe.

By default, Perspectium support will have already set up default routing between your instances. You may optionally configure Subscribed Queues to receive from specific queues that the source or production instance has set up Shared Queues for.

Sharing

Now it's time to configure your publisher to share the data.

On the source or production instance, configure for Dynamic Sharing or Bulk Share.

Dynamic Sharing

On the source instance, navigate to the Perspectium application and open the Dynamic Share module. Select an existing table to share, or select New at the top of the list to create a new entry.

For an explanation of the Dynamic Share configuration, go here.

Bulk Sharing

On the source instance, navigate to the Perspectium application, open the Bulk Share module, and select New to create a new entry.

For an explanation of the Bulk Share configuration, go here.

For bulk sharing to another ServiceNow instance where you want to share all child records of a parent table such as task or cmdb_ci, select the Share child class only option when executing the bulk share of the parent table. Sending the child record is all that is needed for the subscribing instance to properly recreate the record's hierarchy from the child back up to the parent.


Replicating from a ServiceNow Instance to a Local Database


This example shows how to keep a database up to date in real time with your ServiceNow instance. First, install the Perspectium update set on the source instance of ServiceNow. After acquiring and downloading the update set, follow the instructions here for installation.

Upon successful installation of the update set, please install the agent. For details on how to download and install the agent for Linux, follow the Linux Installation process. For installation on Windows Systems, please follow the Windows Installation process.

For details on how to set up the agent, please see the Agent Configuration page.

Subscribing

By default, the agent is set up to subscribe to all tables from the ServiceNow instance targeted at the queue it is subscribing to. No additional agent configuration is necessary; you can control what the agent receives from the ServiceNow application via targeted Shared Queues, or by requesting that Perspectium set up default routing to it.

Sharing

Now it's time to configure your publisher to share the data.

On the source or production instance, configure for Dynamic Sharing or Bulk Share.

Dynamic Sharing

On the source instance, navigate to the Perspectium application and open the Dynamic Share module. Select an existing table to share, or select New at the top of the list to create a new entry.

For an explanation of the Dynamic Share configuration, go here.

Bulk Sharing

On the source instance, navigate to the Perspectium application, open the Bulk Share module, and select New to create a new entry.

For an explanation of the Bulk Share configuration, go here.

Replicating to an Import Set


Replicating to an Import Set allows you to transform records from one source table on the sharing instance to a different table on the subscribing instance. With this you can bridge two different tables, manipulate the data, or reconcile different schemas across your instances.

In this example, we will transform incoming ticket table records to replicate as records in the incident table.

Create Replicator Share Configuration

Create a replicator share configuration for the source table on the sharing instance if one has not been created yet (in this case, a share configuration will be created for the ticket table). Configure this share as you desire, with one exception: select the “Update or Insert” option instead of selecting the “Create” and “Update” options separately:

This is a technical detail that ensures messages are routed to the Import Set table appropriately on the destination side.

Create Import Set Table

(Image: Import Set Table example; details below)

Table Setup

Create an import set table on the subscribing instance which will transform records from the incoming source table to the destination table (in this case a table called “u_incident_import” that will transform incoming ticket records to incident records). This table is where incoming data will be staged before being transformed into your final destination table.

This table should have “Import Set Row” as the value for the “Extends table” option and you should define columns that will be transformed from the incoming source table to the destination table.

Table Columns

For example, in this case we want values in the “Short Description” and “Assigned To” fields to come over from the ticket table to the incident table. This can be seen in the picture above. You will want to create columns within this table for any columns you want mapped over.

Below is an example of the resulting column names for each table:

Sharing Instance   Import Set Table     Destination Table
short_description  u_short_description  short_description
assigned_to        u_assigned_to        assigned_to
sys_id             u_sys_id             sys_id
sys_updated_by     u_sys_updated_by     sys_updated_by
u_custom_field     u_custom_field       u_custom_field

You should use the appropriate Column Types for each (e.g., Reference to sys_user for u_assigned_to, Date/Time for u_opened_at). The column names for the Import Set Table are described in the Subscribe Field Mapping section below.

Handling sys_id and sys_updated_by

You will want to create two custom columns in the import set table for the sys_id and sys_updated_by fields. This ensures the values coalesce correctly and user updates are tracked correctly. Additionally, having these mapped over correctly is necessary so that related tables (such as sys_journal_field) show up properly.

You should name the fields in the Import Set table “u_sys_id” and “u_sys_updated_by” as shown above. The only hard requirement is that the sys_id field be a String type of length 32.
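A ServiceNow sys_id is a 32-character hexadecimal string, which is why the u_sys_id column must be a String of length 32. A quick check in plain JavaScript (the function name is ours, for illustration only):

```javascript
// Verify a value will fit the u_sys_id column: ServiceNow sys_ids are
// 32-character lowercase hexadecimal strings.
function fitsSysIdColumn(value) {
  return /^[0-9a-f]{32}$/.test(value);
}

fitsSysIdColumn('0123456789abcdef0123456789abcdef'); // true
fitsSysIdColumn('not-a-sys-id');                     // false
```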

Create Replicator Subscribe Configuration

(Image: Subscribe example; details below)

Subscribe

Create a new replicator subscribe configuration for the newly created import set table. For this you will want to:

  1. Select the table name to be the Import Set Table, as this is the destination table for staging; in our example, u_incident_import.
  2. In Trigger Conditions, enter the original source table into the source table name field; in our example, ticket.
  3. In Trigger Conditions, set the field prefix to u_ to account for ServiceNow's custom-column prefix. However, when replicating custom fields, you may want to leave the Field Prefix blank to avoid a custom field being replicated as u_u_custom_field.
  4. Set Run Business Rules to true to automatically transform the data once it hits the Import Set Table.
  5. Set Override System Fields to true.
  6. Depending on your use case, you will likely want Copy Empty Fields set to true.

Subscribe Field Mapping

In our example we had the column names:

Sharing Instance   Import Set Table
short_description  u_short_description
assigned_to        u_assigned_to
sys_id             u_sys_id
sys_updated_by     u_sys_updated_by
u_custom_field     u_custom_field

By naming the columns this way and creating the Subscribe configuration correctly, the values will be mapped over properly. That is, the incoming ticket record's short_description will map to the Import Set Table's u_short_description, while the incoming u_custom_field will map directly to u_custom_field.

If you do not, or cannot, name your columns appropriately you can take advantage of the Before Subscribe Script portion to manually assign the values.
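The prefix rule can be sketched as a small function (plain JavaScript for illustration; this is not part of the Perspectium API):

```javascript
// Map an incoming source field name to its import set table column
// using the Field Prefix rule described above.
function importSetColumn(sourceField, fieldPrefix) {
  var prefix = fieldPrefix || 'u_';
  // A field that already starts with the prefix (a custom field) maps
  // to itself, avoiding names like u_u_custom_field.
  if (sourceField.indexOf(prefix) === 0) {
    return sourceField;
  }
  return prefix + sourceField;
}

importSetColumn('short_description'); // 'u_short_description'
importSetColumn('u_custom_field');    // 'u_custom_field'
```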

Create Transform Map

In the navigator type “Transform Map” and open up the Transform Map table. These mappings are what transform the data from the Import Set Table (u_incident_import) into the final destination table (incident).

Base Transform Map

Create a base transform map and name it appropriately. Choose the source table as the Import Set Table, in our example u_incident_import, and the target table as the destination table, in our example incident.

You will also want to mark the following as true:

  • Active
  • Run Business Rules
    • Use this if you want to run Business Rules (for validations, notifications, SLAs)
  • Copy Empty Fields
    • Use this if you want to be able to set values to empty
Coalescing

We cover coalescing strategies in greater detail here: Coalescing Strategies

This is where you determine the logic for whether an incoming record will insert or update. By default, our out-of-box Common Document Format accounts for this. However, if you are creating an Import Set / Transform Map from scratch, you will need some form of this coalescing.
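The decision itself reduces to a sys_id lookup. A toy sketch in plain JavaScript (in practice, the transform map's Coalesce flag on sys_id drives this):

```javascript
// Decide whether an incoming row should insert or update by coalescing
// on sys_id against the sys_ids already present in the target table.
function coalesceAction(existingSysIds, incomingSysId) {
  return existingSysIds.indexOf(incomingSysId) >= 0 ? 'update' : 'insert';
}

coalesceAction(['abc123'], 'abc123'); // 'update'
coalesceAction(['abc123'], 'def456'); // 'insert'
```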

Impersonation

We cover impersonation strategies in greater detail here: Impersonating In A Transform Map

This is where you can adjust which user is making the updates. You may have noticed that your records are otherwise updated as the “system” user. This is mainly for auditing and comment / activity log purposes.

Field Mappings

Under Related Links, choose the “Auto map matching fields” option. Once selected, this option will automatically map the fields you created in the import set table to the matching fields in the destination table. You will want to double-check that the entry for sys_id has Coalesce set to true. This is what ServiceNow uses to determine whether to insert or update.

This is all you will need for a standard replication scenario! Below is some more information for extra scenarios and troubleshooting.

Extra

Preventing Looping

Starting with the Argon release of the Update Set, Perspectium adds a parameter to records that are updated via subscribing. Dynamic Shares will then skip replicating these records. This prevents two instances that are bi-directionally replicating to each other from getting into a loop of receiving an update and immediately firing an update back.

This is the default behavior when replicating table to table. However, when replicating to an import set table, you can perform this same step to get the same effect.

In an onBefore Transform Script issue the following statement:

(function runTransformScript(source, map, log, target /*undefined onStart*/ ) {
	// Flag the record so Dynamic Shares skip re-sharing it
	target.psp_subscribed_record = true;
})(source, map, log, target);

The Dynamic Shares will skip processing this record. You do not need to create a column “psp_subscribed_record” on your target table; it is really just a flag for the Dynamic Share to utilize. This parameter will simply be discarded by ServiceNow at the end of the transaction.

Modifying Data

In our example we are mapping ticket records to the incident table. With the current configuration it is all mapped one to one. This script is executed within the Field Map script for the number field, but you can achieve the same effect by doing something similar in the Before Subscribe Script.

This script is stripping off the TKT prefix of the ticket number and replacing it with the INC prefix for the incident number.
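The script itself is not reproduced here, but the core transformation is simple string work. A minimal sketch (the function name is ours; in a ServiceNow field map script you would typically assign the result to the answer variable):

```javascript
// Rewrite a ticket number into an incident number by swapping the
// TKT prefix for INC, e.g. TKT0010005 becomes INC0010005.
function ticketToIncidentNumber(ticketNumber) {
  return ticketNumber.replace(/^TKT/, 'INC');
}

ticketToIncidentNumber('TKT0010005'); // 'INC0010005'
```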

Replicating Reference Fields

When replicating from another ServiceNow instance and you are mapping a field that is a reference field, you may want to choose the Choice Action of “ignore” and use script to set the field:

target.problem_id = source.u_problem_id;

In this example, you are mapping the u_problem_id field of your import set table (source) to the problem_id field on the incident table (target). Because import set tables are meant for reading from external sources, ServiceNow has an issue when reading data from another ServiceNow instance into an import set table: a duplicate referenced record (problem) may be created if you don't do the above. See Transform Map Scripts for more information on using transform map scripts and referencing source and target.

Replicating Complementary Records

If the destination table has sys_audit and sys_journal_field records that you want reflected accurately (such as for the incident table with work notes and comments), you will want to modify their Replicator subscribe configurations to work with the import set table. For each, we'll want to add Before Subscribe Script entries as follows:

sys_journal_field

// Look up the journaled record; current.name holds its table name
var jgr = new GlideRecord(current.name);
if (jgr.get(current.element_id)) {
  var psp = new Perspectium();
  psp.refreshHistorySet(jgr);
}

sys_audit

// Look up the audited record; current.tablename and current.documentkey
// identify the table and record the audit entry belongs to
var jgr = new GlideRecord(current.tablename);
if (jgr.get(current.documentkey)) {
  var psp = new Perspectium();
  psp.refreshHistorySet(jgr);
}

And for the sys_audit subscribe configuration, we'll want to add a condition so that it only runs when the received message is for the destination table (incident) and is a work note or comment:

Doing the above ensures comments and work notes will be reflected accurately in the destination table to avoid duplicate entries or other issues.

Troubleshooting

sys_journal_field and sys_audit

If journal field/audit records are not being reflected correctly when replicating into your Import Set Table (for example, work notes or comments not showing in an incident record's history), change the sys_audit subscribe configuration's Before Subscribe Script from the above to the following:

 
var jgr = new GlideRecord(current.tablename);
if (jgr.get(current.documentkey)) {
  // Force an update so journal/audit entries render in the record's history
  jgr.setForceUpdate(true);
  jgr.update();
}

In some cases refreshing the history set may not work correctly; using this script instead will force the record to update properly so that journal field/audit records show in the table's record.

Coalescing Failing

Within ServiceNow, sys_id coalescing can fail under certain circumstances when updating. ServiceNow has identified this in PRB639910. The result is that the insert will work properly; however, on an update the transform will sometimes try to insert the record again instead of updating, and thus fail.

As a workaround you can apply the following script. In the Base Transform Map record, check the “Run Script” checkbox and paste in the following code:

(function transformRow(source, target, map, log, isUpdate) {

	/*
	 *  Manually coalesce ourselves to determine if the record exists, and if so
	 *  manually perform the update. If the record does not exist, run the
	 *  transform as normal for insert. See: PRB639910
	 */

	var gr = new GlideRecord(target.getTableName());
	gr.addQuery('sys_id', source.u_sys_id);
	gr.queryNoDomain();
	if (gr.next()) {
		// Record exists: skip the transform's own insert and update manually
		ignore = true;
		for (var f in target) {
			gr[f] = target[f];
		}
		gr.update();
	} else {
		// Record does not exist: insert as normal
	}

})(source, target, map, log, action === "update");

This essentially performs the coalescing manually. If the record does not exist, the standard Transform Map insert runs. If the record does exist, the script performs the update manually, carrying over anything you have done within the onStart, onBefore, or Field Mapping portions of the Transform Map.

2015/10/13 13:36 · Paul Nguyen

One To Many Replication

Standard Use Case

The standard replication method that we provide is one to one. You define a path (Shared Queue), and a producer (sharer) generates one message on that path, to one destination. If you are sharing from one source and intend to send the same data to several destinations (data warehouses, dev instances, test instances), you will have to create as many duplicate share configurations.

For simple sharing to two or three destinations this is very manageable, but if you plan on sharing out to several destinations it can be cumbersome.

Fanouts

We have the capability to perform this one-to-many replication, which we refer to as a fanout. You send your data once to a single shared queue, and we handle sending it out to several queues to be read from. It is as straightforward as:

Share To                 Subscribe To
psp.out.fanout.company   psp.out.replicator.company.data1
                         psp.out.replicator.company.data2
                         psp.out.replicator.company.data3
                         psp.out.servicenow.company.dev
                         psp.out.servicenow.company.test
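Conceptually, the broker copies each message arriving on the fanout queue onto every subscriber queue. A toy sketch of the idea (not the actual broker implementation):

```javascript
// Deliver one shared message to every subscriber queue of a fanout.
function fanout(message, subscriberQueues) {
  return subscriberQueues.map(function (queue) {
    return { queue: queue, body: message };
  });
}

var copies = fanout('incident update', [
  'psp.out.replicator.company.data1',
  'psp.out.servicenow.company.dev'
]);
// copies.length is 2; each copy carries the same body
```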

How To Set it Up

We are currently planning a customer-handled approach to this. In the meantime, its setup is handled internally. Feel free to contact us at support@perspectium.com so that we know your requirements and how we can best create your desired structure.

We also have a general naming convention for this scheme, but the queue names can be modified to suit your needs.

See also: replicator_snc_multiple_subscriber_jobs

replicator_beginners_guide.txt · Last modified: 2019/01/02 12:12 by timothy.pike