If replication between your ServiceNow instances is occurring slowly, you can tune and optimize your instances as follows.
ServiceNow's HTTP Client Connection Management limits the number of connections the Perspectium application can make, which can constrain replication performance. It is recommended that you raise the HTTP Client Connection Management properties to the suggested values shown below to allow more connections to the Perspectium Message Broker Service.
|Property||Default Value||Suggested Value|
Note: if these properties do not exist in an instance, ServiceNow will use the default values. See Adding a Property to create them.
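If you prefer to add or update the properties by script, a background-script sketch along the following lines creates each property when it is missing and updates it otherwise. The property name in the example is a placeholder for illustration only, not an actual ServiceNow or Perspectium property; substitute the properties from the table above.

```javascript
// Background-script sketch for setting system properties.
// NOTE: 'com.example.http.max_connections' below is a placeholder name;
// substitute the actual properties and suggested values from the table above.
function setSystemProperties(props) {
    for (var name in props) {
        var gr = new GlideRecord('sys_properties');
        if (gr.get('name', name)) {
            // Property exists: update its value.
            gr.setValue('value', String(props[name]));
            gr.update();
        } else {
            // Property is missing: create it so ServiceNow stops
            // falling back to the default value.
            gr.initialize();
            gr.setValue('name', name);
            gr.setValue('value', String(props[name]));
            gr.insert();
        }
    }
}

// Example usage (placeholder name/value):
// setSystemProperties({ 'com.example.http.max_connections': 100 });
```

Run this from System Definition > Scripts - Background on the instance whose properties you are tuning.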
The Perspectium outbound and inbound message queues in ServiceNow in particular require indexes in order for Replicator to perform optimally. The following table documents the tables and the indexes that an administrator of your ServiceNow instance needs to create. Please follow the index creation guidelines in the ServiceNow wiki article for your version of ServiceNow: Creating Indexes in ServiceNow
As of v3.19.0, these indexes are included as part of the update set. However, they will only install properly in Helsinki and later versions of ServiceNow and will show commit errors in older versions. These commit errors can be ignored, as the rest of the update set will have committed properly.
In Bismuth, indexes have been added to the PSP Attachment Out Message (u_psp_attachment_out_message) and PSP Audit Out Message (u_psp_audit_out_message) tables.
|Table||Index Name||Index Type||Fields|
|PSP Out Message tables (psp_out_message, u_psp_attachment_out_message, u_psp_audit_out_message, u_psp_observer_out_message)||psp_out_query||composite, non-unique||State (state), Target Queue (u_target_queue)|
|PSP Out Message tables (psp_out_message, u_psp_attachment_out_message, u_psp_audit_out_message, u_psp_observer_out_message)||psp_out_query3||single, non-unique||Sequence (u_sequence)|
|PSP Out Message tables (psp_out_message, u_psp_attachment_out_message, u_psp_audit_out_message, u_psp_observer_out_message)||psp_out_query4||composite, non-unique||Created (sys_created_on)|
|PSP In Message (psp_in_message)||psp_in_query||single, non-unique||Created (sys_created_on), name, key, u_sequence|
|PSP In Message (psp_in_message)||psp_in_query2||single, non-unique||State (state)|
|PSP Log Message (u_psp_log_message)||psp_log_query||single, non-unique||Created (sys_created_on)|
|PSP Log Message (u_psp_log_message)||psp_log_query2||composite, non-unique||Created (sys_created_on)|
|PSP Properties (u_psp_properties)||u_psp_properties_u_name||single, non-unique||Name (u_name)|
|PSP Replicate Conf (u_psp_replicate_conf)||u_psp_replicate_conf||single, non-unique||active, sync_direction, table_name|
If you receive notification from Support that messages are accumulating on the Perspectium Message Broker Service (MBS), you can add more Perspectium Replicator Subscriber jobs on the subscribing instance to improve the performance of reading messages.
Go to Perspectium > Control and Configuration > All Scheduled Jobs and find the Perspectium Replicator Subscriber scheduled job.
Create another one of these jobs, naming it something such as Perspectium Replicator Subscriber 2 so it is easy to identify as a second copy of the same job. The easiest way to do this is to open the Perspectium Replicator Subscriber scheduled job, change the Name field to the new name, then right-click the "Scheduled Script Execution" header bar and choose the "Insert" option.
This creates a duplicate of the Perspectium Replicator Subscriber job, so you will now have two jobs pulling replication messages from MBS. By default each scheduled job runs every 30 seconds, but you can adjust the jobs to run at a shorter interval, such as 10 seconds. You can add more jobs as necessary depending on your instance; however, these jobs compete with other jobs for available workers, so if you schedule too many jobs for the available nodes/workers, they may take longer to run.
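The right-click Insert step above can also be done by script. As an illustrative sketch (not Perspectium's official procedure), the following background script looks up a scheduled job by name on the sysauto_script table, renames the in-memory record, and inserts it, which creates a copy with a new sys_id just as the form's Insert option does:

```javascript
// Sketch: duplicate a Scheduled Script Execution (sysauto_script) record
// under a new name, so a second job pulls messages from MBS.
function duplicateScheduledJob(sourceName, newName) {
    var gr = new GlideRecord('sysauto_script');
    if (!gr.get('name', sourceName)) {
        return null; // source job not found
    }
    gr.setValue('name', newName);
    // Calling insert() on a retrieved record creates a new copy with a
    // new sys_id, leaving the original job untouched.
    return gr.insert();
}

// Example usage:
// duplicateScheduledJob('Perspectium Replicator Subscriber',
//                       'Perspectium Replicator Subscriber 2');
```

After inserting the copy, open it and adjust its Repeat Interval if you want it to run more often than the default 30 seconds.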