Operations Guide
Document Revision: 1
Publication Date: October 1, 2015
Prepared for: IMS Health
IMS Health
1425 U.S. Highway 206
Bedminster, NJ 07921
908.443.2000

© 2015 IMS Health
DEV-OPSG-US-05
Nucleus 360 Version 5.2.0
Operation Instructions
CONTENTS
1. Publication Record
2. Introduction
  2.1 Purpose and Audience
  2.2 User Prerequisites
  2.3 Documentation Conventions
3. Text Conventions
  3.1 Running SQL Plus
  3.2 Accessing Microsoft Windows Command Prompt
4. Nucleus 360 Conventions
  4.1 Environment Variables
  4.2 Server Layer Directory Structure
5. Operation Checklist
  5.1 System Initialization
  5.2 Third Party Data Processing Strategy
    5.2.1 Processing Batch Data
    5.2.2 Processing Real-Time Data
  5.3 Processing Data Files
6. Middle Tier Servers
  6.1 Overview
  6.2 Common Server
    6.2.1 Description
    6.2.2 Prerequisites
    6.2.3 Command Line Parameters/Usage
  6.3 Customer Resolution Server
    6.3.1 Description
    6.3.2 Prerequisites
    6.3.3 Command Line Parameters/Usage
  6.4 Configuration Assistant Server
    6.4.1 Description
    6.4.2 Prerequisites
    6.4.3 Command Line Parameters/Usage
  6.5 Practitioner Verification Server
    6.5.1 Description
    6.5.2 Prerequisites
    6.5.3 Command Line Parameters/Usage
    6.5.4 General Troubleshooting on a Client Machine
  6.6 Middle Tier Startup and Maintenance
    6.6.1 Middle Tier System Structure and Directory Layout
  6.7 Middle Tier Server Operation
    6.7.1 Running Servers as Windows 2008 Services
    6.7.2 Running All Servers as Normal Console Applications on Windows 2008
    6.7.3 Starting a Middle Tier Server Automatically
    6.7.4 Starting a Middle Tier Server Manually
    6.7.5 Using the Orbix Server Manager
    6.7.6 Starting a Server Manually via Orbix Server Manager
    6.7.7 Viewing a Listing of All Registered Server Executables
    6.7.8 Stopping (Killing) a Server Manually
  6.8 Troubleshooting
    6.8.1 Middle Tier Machine on Windows 2008
7. Standardizer Server
  7.1 Command Line Parameters/Usage
  7.2 Running the Standardizer Daemon from the Windows Command Prompt
    7.2.1 Launch the Server
    7.2.2 Shut Down the Server
    7.2.3 Output
  7.3 Running the Standardizer Daemon via the ONE Point Application
  7.4 Checking Status of the Server
    7.4.1 View the Status
  7.5 Troubleshooting
    7.5.1 Standardizer Daemon Is Not Running/Unable to Reach Daemon
    7.5.2 Standardizer Daemon Is Already Running/Unable to Launch Daemon
    7.5.3 Standardizer Daemon Is Already Running but Address Standardization Does Not Happen
8. Web Server
  8.1 Overview
  8.2 Customer Maintenance Java Business Objects
    8.2.1 Description
  8.3 Tomcat Startup and Maintenance
    8.3.1 Tomcat Server Directory Layout
    9.13.18 An Unhandled Exception occurred. Refer to the dump file for details: npdataimport.exe_YYYYMMDD_HHMMSS.dmp
    9.13.19 Processing Real-time Exception Messages
    9.13.20 List of Errors Logged by the run_dataimport Script
    9.13.21 Exception Handling During OneKey Data Load
    9.13.22 Processing Rejected OneKey Profiles
  9.14 Restarting the Data Import Process
  9.15 Tables Modified by Data Import Process
10. DQT - Populating the De-normalized Query Tables
  10.1 Overview
    10.1.1 What DQT Does
    10.1.2 What DQT Requires
  10.2 Main Procedure
    10.2.1 Main Procedure Arguments
    10.2.2 Running DQT in Incremental Mode
    10.2.3 Running DQT in Refresh Mode
    10.2.4 Log File
  10.3 ProcessImportedProfiles Function
    10.3.1 ProcessImportedProfiles Function Arguments
    10.3.2 Running DQT in Real-time Mode
    10.3.3 Log File
  10.4 processSingleProfile Procedure
    10.4.1 processSingleProfile Procedure Arguments
    10.4.2 Running DQT for a Single Profile
    10.4.3 Log File
  10.5 Analyze Results
  10.6 Running DQT Process Using ONE Point Application
  10.7 Analyze Table
  10.8 Troubleshooting
    10.8.1 identifier must be declared
    10.8.2 The parameters passed to procedure are invalid
    10.8.3 When any database package is invalid
    10.8.4 When the error message range is not between ORA-20000 and ORA-20999
  10.9 Restarting the Process
  10.10 Tables Modified by DQT
    12.5.7 To Refresh Hash Keys for a Range of Profiles Without Submitting Them for Matching
    12.5.8 Verify Successful Completion
  12.6 Running the Hash Process Using ONE Point Application
  12.7 Monitoring the Hash Process
    12.7.1 View Progress
  12.8 Analyze Table
  12.9 Troubleshooting
    12.9.1 Rollback Segment Errors
    12.9.2 The Logging option is required. A log file name must be passed
    12.9.3 Hash Key Configuration ID option argument is required
    12.9.4 Hash Key Configuration Code argument is required
  12.10 Restarting the Process
  12.11 Tables Modified by the Hash Process
13. The Match Utility (NPMU)
  13.1 Overview
    13.1.1 Pre-process
    13.1.2 Match
    13.1.3 What Match Does
    13.1.4 What Match Requires
  13.2 Running the Match Utility
    13.2.1 Running the Match from Command Prompt
  13.3 Running the Match Process Using ONE Point Application
  13.4 Analyze the Results
    13.4.1 Viewing Log Files
    13.4.2 Debugging Using the -d Option
  13.5 Analyze Table
  13.6 Monitoring the Status of Match Utility
  13.7 Troubleshooting
    13.7.1 Process Slows Down
    13.7.2 Pre-process mode does not support the following options [-r, -s, -q, -t, -m]
    13.7.3 The Logging option is required
    13.7.4 The Rule File option is required
    13.7.5 The Estimated Volume (-e) option argument is required
  13.8 Restarting Match Utility
    13.8.1 Restarting the Match Pre-process
    13.8.2 Restarting the Match Process
  Explanation
  Solution
Appendix D: Configuration Assistant Timeout Errors
  Run Time Errors
    Exceptions
    Explanation
    Solution
Appendix E: ACE/IACE Postalsoft Errors
  Run Time Errors
    Exception: open (Engine) failed, check ...
      Explanation 1
      Solution 1
      Explanation 2
      Solution 2
      Explanation 3
      Solution 3
    Error: DIRECTORY OUT OF DATE
      Explanation
      Solution
    Error: Failed to initialize config file : ace.cfg File:\cdis\wkenv\serverapps\nucleuspharma_x3.1.0.latest\aceinterface\src\lacemainapp.cpp Line: 192
      Explanation
      Solution
    No Address standardization option selected. NP_EXCLUDE_ADDRESS_STD was set
      Explanation
      Solution
  Standardization Error Codes
Appendix F: IACE Configuration File
  Ace.cfg
  iace_usa.cfg
1. PUBLICATION RECORD

Revision   Publication Date   Overview of Revisions
1          May 11, 2015

Upon approval of this document, Revision 1 is the current version of the IMS Health Nucleus 360 5.2.0 Operations Guide. Please destroy or archive all previous revisions.
2. INTRODUCTION
Any text that you must interpret before you enter it is in double-underlined
boldface.
Replace <YYYY_MM Curr> with the current production cycle's year and month.
Replace <YYYY_MM Prev> with the previous production cycle's year and month.
Commands may span multiple lines. The fact that a command is on multiple lines does not necessarily mean that the Enter key should be pressed. An Enter symbol denotes when to press Enter.
Text that is boxed and not shaded usually represents output generated during a session; it does not represent an interactive session.
3. TEXT CONVENTIONS
In this document, the following text conventions are used:
Text that is in bold represents the following elements:
The names of screen fields, tabs, buttons, and menu options. For example, Affiliations
(tab), File (menu), OK (button).
File names, script names, and folder and directory names. For example, pracval.exe,
c:\Program Files
Prompts in user input. For example, sql>select length(...
Commands and command syntax. For example, the UNIX ps -ef | grep orbixd command.
Text that is italicized represents specific field values or literals such as passwords.
Quotation marks are used to identify messages.
Sample code is in fixed width font (Courier). For example, select length(
Text that is blue and underlined represents a hyperlink to another section of the current document or
to another document.
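
For context, the boxed banner below is typical SQL*Plus output on a successful connection. A minimal sketch of starting SQL*Plus from the Windows command prompt, where the account name and TNS alias are hypothetical placeholders:

$ sqlplus nuc360/<password>@NUCDB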
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL>
Server Layer \archives
    Created under the installation root specified by the user during the installation.
    This file system contains all the files processed during a set timeframe. The length of time that files reside in this area is determined based on space availability and file system size. Files should not be removed from this file system until they have been archived to tape.
    Only third-party import data files that have been successfully processed are compressed and moved to a holding area (\archives\np) under this directory.
    Subfolders such as \archives\np\database and \archives\np\log are available to the user to back up any database or log files.

Server Layer \daily_bck
    Created under the installation root specified by the user during the installation. This holding area and its subfolders are available to the user to back up any daily database or log files.

Server Layer \drte_install\log
    Created during the server layer installation process. Houses the logs generated at that time.

Server Layer \np
    Created under the installation root specified by the user during installation. It contains all the server layer processes. The various subdirectories are defined below.

    Subfolder: bin
        Server Layer processes and their configuration files; wrapper run scripts; the database connection details file (dbcfg.txt); and real-time data import transformation files.

    Subfolder: database
        Server Layer processes that are implemented using Oracle objects, such as stored procedures and functions, are installed from this holding area during the installation.

    Subfolder: datasource
        Scripts to install the third party data sources and their configuration (such as Match/ID Assign rules, Merge rules, Standardization patterns, and External code values) are stored under subfolders of this directory. This directory and its subfolders are used only during server layer installation.

    Subfolder: defs
        This directory and its subfolders hold additional configuration files.
        Subfolder defs\ace holds the Postalsoft IACE configuration files.
        Subfolder defs\xtelligent holds the raw configuration and metadata files that are required for Xtelligent Adapter installation. A copy of these raw files is made and custom configured during the adapter installation. Final configured files are stored in the Xtelligent Adapter installation directories, such as:
        <Xtelligent Adapter Root>\<Prod>\DataInterfaces\Interfaces\Configuration
        <Xtelligent Adapter Root>\<Prod>\DataInterfaces\Interfaces\Metadata
        <Xtelligent Adapter Root>\<Prod>\DataInterfaces\Interfaces\xsl

    Subfolder: lib
        Stores shared libraries. There are none for this version.

    Subfolder: log
        Application logs of Server Layer processes are created in this directory.

    Subfolder: testdata
        Stores test data files that are used to validate the Postalsoft installation.

    Subfolder: utilities\migration
        Holds the installer and scripts that are applied to the source database to perform the data migration.

Server Layer \third_party\np
    Created under the installation root specified by the user during the installation. It houses the third-party data files.

    Subfolder: in\<DATA_SOURCE>
        Stores data delivered to Nucleus 360 that is not made available through database tables. Directories under \third_party\np\in\<data_source> follow a YYYY_MM format (YYYY being the year, and MM the month, represented by its constituent data files).

    Subfolder: out\<DATA_SOURCE>
        Stores Nucleus 360 extracts. Directories under \third_party\np\out\<data_source> follow a YYYY_MM format (YYYY being the year, and MM the month, represented by its constituent data files).
Please see the Architecture Document for sizing requirements for these file systems.
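
The directories above are referenced throughout this guide through environment variables. As a quick sanity check before running any server layer process, echo the variables from the command prompt; an unset variable simply echoes back its own name:

$ echo %DRTE_ROOT%
$ echo %DRTE_NP_BIN%
$ echo %DRTE_THIRD_PARTY%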
5. OPERATION CHECKLIST
This section outlines tasks that need to be executed for day-to-day operational activities associated with
Nucleus 360.
5.2.1.1.1 Optimizations
The following optimizations can be applied to boost the throughput of the Data Import process during an
initial data load.
As a validation step, the Data Import process performs a database lookup to check the existence of
the customer profile in the Nucleus 360 system. As it is known that a customer will not exist during the
IDL, the process can be configured to perform this check one time for all the profiles of the data
source, using Configuration Assistant. (Under Data Sources, locate the data source you have
selected for IDL, click Mappings, click the particular mapping, then Fields, then Tasks. On the Tasks
window, select the Configuration Settings task and enable the Profile ID Cache argument.) This
optimization is not recommended for ongoing data processing.
The Nucleus 360 core customer tables are built with indexes to facilitate efficient data search operations by various server layer, middle tier, and GUI processes. These indexes can be dropped to maximize data load performance and recreated once the load is complete (see the sketch below).
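
As an illustration only, dropping and recreating an index follows the standard Oracle pattern from SQL*Plus. The index, table, and column names below are hypothetical; the actual Nucleus 360 index definitions are delivered with the installation and should be taken from there:

sql> drop index np_customer_idx1;
...perform the initial data load...
sql> create index np_customer_idx1 on np_customer (customer_id);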
After processing all the data sources, execute the following processes.
Process Description                            Configuration                      Process
Create the DQT table indexes                   Manual - DQT Index Maintenance     Create Indexes
Consolidated Customer View (CCV) generation    Manual - Process Management        Consolidated Customer View
Post CCV database maintenance                  Manual - Process Management        Post CCV DB Maintenance
Merge                                          Manual - Process Management        Merge Customer / Merge Address
Post Merge database maintenance                Manual - Process Management        Post Merge DB Maintenance
Real Time Data Processing Services: Configuration to process the imported real-time data
through various server layer components.
Real Time Data Export: Configuration to publish the processed data to the output message
queues.
Real Time Standardization Service: Address correction service to Customer Maintenance users.
Real Time PracVer Services: Practitioner verification services to publish practitioners for external
vendor verification and to import verified practitioners from the message queues.
5.2.2.1 Optimization
During the initial data load, if the source is fed through the real-time message queues, launch only the
Real Time Data Import configuration to import the volume data. The core table indexes can be dropped
to improve the load throughput as explained in the batch data processing section 5.2.1. Once the load is
complete, recreate the indexes.
5.2.2.2 Sample Real Time Initial Data Load and Processes Execution Sequence
The following example lists the execution sequence of processes to be run to perform an IDL from the
message queues.
5.2.2.3 Sample Real Time Ongoing Data Load and Processes Execution Sequence
The following example lists the execution sequence of processes to be run to perform an ongoing data
load from the message queues.
3. If any *.bad files are generated due to rejected records, re-run the import process after doing the following:
Rename the existing .bad files.
Ensure that the errors in the .bad files have been resolved.
$ cd %DRTE_NP_BIN%
$ npdataimport -m<mapping id> -o<old file name> -n<renamed bad file> -d<old bad file> -w<new bad file> -l<log file>
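
For illustration, a hypothetical re-run for mapping id 1000 with hypothetical AMA file names, following the option order shown above:

$ cd %DRTE_NP_BIN%
$ npdataimport -m1000 -oAMA1.old -nAMA1_renamed.bad -doldbadAMA.sort -wnewbadAMA.sort -lnpdataimport_rerun.log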
13. Start the ID Assign process to assign NUCLEUS IDs for the profiles in the
NUC_RESOLUTION_PENDING table with the following command:
$ cd %DRTE_NP_BIN%
$ run_idassign
14. Execute the Reverse Link Creator process. This process prepares the unresolved queue to be viewable on the Customer Resolution GUI for profiles that were not resolved automatically by the ID Assign process.
$ cd %DRTE_NP_BIN%
$ runstoredproc -sNP_ReverseLinkCreator
15. After finishing the ID Assign Process, analyze the tables by executing the following command:
$ cd %DRTE_NP_BIN%
$ runstoredproc -spkg_db_maintenance.process_maintenance(17)
16. Before starting the Consolidate Customer View Process, analyze the table by executing the following
command:
$ cd %DRTE_NP_BIN%
$ runstoredproc -spkg_db_maintenance.process_maintenance(31)
17. The Consolidated Customer View represents the best customer view that is generated from the customer details that are available through different data sources for a NUCLEUS ID. To start the process, use the following commands:
$ cd %DRTE_NP_BIN%
$ run_bob
$ run_bob -affiliation
18. After running the CCV process, analyze the related tables by executing the following command:
$ cd %DRTE_NP_BIN%
$ runstoredproc -spkg_db_maintenance.process_maintenance(6)
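
Steps 13 through 18 can be collected into a small batch file for repeatable runs. A minimal sketch, assuming the np_env.bat environment has already been set up in the session; this script is not shipped with the product:

@echo off
rem Sketch: ID Assign through post-CCV maintenance (steps 13-18 above)
cd /d %DRTE_NP_BIN%
call run_idassign
call runstoredproc -sNP_ReverseLinkCreator
call runstoredproc -spkg_db_maintenance.process_maintenance(17)
call runstoredproc -spkg_db_maintenance.process_maintenance(31)
call run_bob
call run_bob -affiliation
call runstoredproc -spkg_db_maintenance.process_maintenance(6)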
6. MIDDLE TIER SERVERS
6.1 Overview
The Nucleus 360 middle tier (MT) is one of the standard tiers within the application architecture. The
middle tier handles the logic and rules that govern the way the business processes operate. It also
handles all interactions between the Presentation Layer Tier (GUI) and the Database/Server Tier. It
handles requests from the Presentation Layer to retrieve data from, or modify data in, the Nucleus 360
database, which is part of the Server Tier. The following figure illustrates the multiple tiers and the
interactions among them.
6.2.1 Description
The purpose and responsibility of the Common Server is to provide all of the database access/interaction
functionality that is common to several or all of the applications in the IMS Health Home Office Application
Suite. For example, the Common Server contains the security login database functionality, since this
functionality is required by all of the applications in the IMS Health Home Office Application Suite.
6.2.2 Prerequisites
To start the Common Server manually:
The Orbix Daemon must be running.
Connectivity must be available to the Nucleus 360 database.
6.3.1 Description
The Customer Resolution server (saiCRSrv) encapsulates most of the Customer Resolution business
logic. It also provides the database interface between the Customer Resolution GUI and the database.
The most important function of the Customer Resolution server is to execute unresolved profile matches,
customer moves, and profile classifications that are requested from the GUI.
6.3.2 Prerequisites
To start the Customer Resolution Server manually:
The Orbix Daemon must be running.
Connectivity must be available to the Nucleus 360 database.
6.4.1 Description
The Configuration Assistant server (saiCASrv) provides the database interface between the
Configuration Assistant GUI and the database. The Configuration Assistant Server retrieves and saves
the mappings and other Nucleus 360 configuration information.
6.4.2 Prerequisites
To start the Configuration Assistant Server manually:
The Orbix Daemon must be running.
Connectivity must be available to the Nucleus 360 database.
6.5.1 Description
The Practitioner Verification server (saiPVSrv) encapsulates the Practitioner Verification business logic. It
also provides the database interface between the Practitioner Verification clients and the database. The
Practitioner Verification Server saves the decision matrices defined in the GUI into the database and
performs the sampling status changes and practitioner re-verification requests that are initiated in the
GUI.
6.5.2 Prerequisites
To start the Practitioner Verification Server manually:
The Orbix Daemon must be running.
Connectivity must be available to the Nucleus 360 database.
The Connection Failure screen provides basic information regarding the issue:
The type of middle tier server that the client application is attempting to access (in the above
example, Common).
The Orbix-registered name of the middle tier server that the client application is attempting to access
(in the above example, saiCommonSrv).
The machine name of the host that the client application is attempting to access (in the above
example, Z_APP_01_NT).
The Connection Failure screen does not include details regarding the specific nature of the issue. For
specific information, click the Details button. An error screen similar to the one in Figure 3 will display:
The saiCommonForms error screen describes the error in greater detail, including:
The source and location within the application code where the issue occurs.
A description of the error, which can be used to help resolve the issue. In the above example, the
description shows that the error is a CORBA (Orbix) error and displays the minor code for that error
(10085). The minor code is a critical piece of information for troubleshooting Orbix errors.
6.6.1.1.1 Description
Windows 2008: The standard Nucleus 360 5.2 home directory is C:\Program Files
(x86)\Cegedim\Nucleus360\5.2.0\MT. All of the directories and files that are specific to the middle tier
are contained within this directory.
Note: The Orbix installation and Oracle Client installation are not installed in the Nucleus 360 home
directory. The two software packages are installed in separate directories.
6.6.1.1.2 Files
The Nucleus 360 Middle Tier home directory contains a single file, Uninst.isu (used by the uninstall program), and the following subdirectories.
6.6.1.1.3 Subdirectories
The Nucleus 360 home directory contains the following subdirectories:
Subdirectory Name Description
bin Nucleus 360 middle tier executables and supporting files
log Nucleus 360 middle tier log and error files
6.6.1.2.1 Description
The bin directory exists directly under the Nucleus 360 home directory. This directory contains the
Nucleus 360 middle tier executable files, as well as other supporting files that are needed for the
executable files to run properly.
6.6.1.2.2 Files
The bin directory contains the following files:
File Name Description
saiCommonSrv.exe Executable file for the middle tier Common Server
saiCASrv.exe Executable file for the middle tier Configuration Assistant Server
saiCRSrv.exe Executable file for the middle tier Customer Resolution Server
saiPVSrv.exe Executable file for the middle tier Practitioner Verification Server
6.6.1.2.3 Subdirectories
The bin directory contains no subdirectories.
6.6.1.3.1 Description
The log directory exists directly under the home directory. This directory contains all log files generated
by the middle tier servers, depending on whether or not logging is enabled. All of the log files are in plain
text format and can be viewed with any standard text editor.
6.6.1.3.2 Files
The log directory contains the following middle tier server log files:
File Name      Description
SYSTEM.log     Log file generated by the core object framework library. Since the core object framework library is part of all servers, all servers write to this file.
COMMON.log     Log file generated by the middle tier Common Server (saiCommonSrv.exe)
Customer.log   Log file generated by the middle tier Customer Resolution Server (saiCRSrv.exe)
Note: The log directory will not contain the files mentioned above immediately after a clean installation of
the middle tier. The middle tier servers generate the above files, and they will not be created until the
middle tier server executables are executed. In addition, logging must be enabled for the server in the
dbcfg.txt file to generate a log.
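
To inspect these logs quickly from a command prompt session (the path shown is the default home directory from section 6.6.1.1.1):

$ cd /d "C:\Program Files (x86)\Cegedim\Nucleus360\5.2.0\MT\log"
$ dir *.log
$ type SYSTEM.log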
6.7.1.1 Changing the connection timeout (-x) argument of the Orbix Daemon
The Orbix Daemon takes several optional command line arguments, one of which is the connection timeout argument. This argument controls the length of time (in seconds) allowed to establish that a connection to the Orbix Daemon is fully operational. To specify this argument when the Orbix Daemon is running as a Windows 2008 Service, do the following:
1. Click the Windows Start button, select Run, and in the Open field on the Run window, type regedit
and click the OK button. The Registry Editor Utility program starts.
2. In the left-hand pane of the Registry Editor, drill down to the following location: My Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Orbix Daemon, highlighting the Orbix Daemon key.
3. In the right-hand pane, double-click the value titled ImagePath. The Edit String window for this value
opens.
4. In the Value data field on the Edit String window, append the following:
-x <numbervalue>
where <numbervalue> is the desired literal integer value to be used as the argument
5. Click the OK button on the Edit String window and then close the Registry Editor Utility program.
Once the above steps have been completed, the next time the Orbix Daemon service is started, it will use
the specified value for the connection timeout argument.
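
For example, if the ImagePath value originally reads as follows (the Orbix installation path here is hypothetical):

C:\Iona\Orbix\bin\orbixd.exe

then after the edit it would read:

C:\Iona\Orbix\bin\orbixd.exe -x 30

giving a 30-second connection timeout the next time the service is started.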
Note: For further information regarding this parameter as well as the other command line parameters that the Orbix Daemon accepts, please consult the Orbix Administrator's Guide C++ Edition electronic document (orbix33cxx_admin.pdf), which is included with the Orbix 3.3.5 for Windows application.
To have the Orbix Daemon launch automatically when a user logs into Windows 2008 on the middle tier
server machine, create a shortcut for orbixd.exe, then drag and drop the shortcut file to the startup folder.
This adds the executable file to the list of programs that will be launched on system startup.
2. On a client machine, launch the appropriate IMS Health application. The middle tier server
executables that are required by that application will automatically be launched as normal console
applications.
2. In the Connect window, type the IP address or the name of the machine on which the Orbix Daemon
is running in the Location of Orbix Daemon field.
Note: If the Orbix Server Manager is being executed from the same Windows machine that the Orbix
Daemon is on, this field can be left blank.
3. Click the OK button. The Connect window disappears, and an entry for the machine, as well as entries for the middle tier server executables, is displayed in the left-hand pane of the Orbix Server Manager. Refer to Figure 6 for details.
The Orbix Server Manager will look similar to the following:
The Orbix Server Manager shows all of the middle tier server executables that are registered on the
machine in the left-hand pane. It also shows which server executables are currently running and which
are not. A running server executable will have an animated icon next to it (the notches on the icon will be
rotating clockwise). An inactive server executable will have a static icon.
6.7.5.3 Checking the Status of a Server via Command Line (Command Prompt Session)
Orbix provides the psit command, which lists all of the active (running) server executables on a Windows machine. It generates output similar to Figure 7.
6.7.6.1 Starting a Server Manually as a Windows 2008 Service via Orbix Server Manager
1. Ensure that the Orbix Daemon is currently running as a Windows 2008 Service. Refer to section 6.7.1
for details.
2. If the Orbix Server Manager is not already running and connected, launch the Orbix Server Manager
and connect it to display the middle tier server executables. Refer to section 6.7.5.1.
3. In the left-hand pane of the Orbix Server Manager, click the name of the middle tier server executable
that is to be started. The name is highlighted and information about that particular server is displayed
in the right-hand pane.
4. On the toolbar of the Orbix Server Manager, click the Launch server button (the button with a green
traffic signal). The specified server launches, and its icon in the left-hand pane displays a green traffic
light to indicate that the server is running.
6.7.6.2 Starting a Server Manually as a Normal Console Application via Orbix Server Manager
To start a server manually using the Orbix Server Manager, and to have that server run as a normal
console application, do the following:
1. Ensure that the Orbix Daemon is currently running as a normal console application. Refer to section
6.7.2.
2. If the Orbix Server Manager is not already running and connected, launch the Orbix Server Manager
and connect it to display the middle tier server executables. Refer to section 6.7.5.1.
3. In the left-hand pane of the Orbix Server Manager, click the name of the middle tier server executable
that is to be started. The name is highlighted, and information about that particular server is displayed
in the right-hand pane.
4. On the toolbar of the Orbix Server Manager, click the Launch server button (the button with a green
traffic signal). The specified server launches, and its icon in the left-hand pane rotates to indicate that
the server is now running.
6.8 Troubleshooting
If the issue still exists, change the mode of the servers so that they run as normal console applications. Then continue troubleshooting using the normal console application instructions in step 3.
If the issue is resolved, troubleshooting is complete.
If the server does not restart successfully, continue troubleshooting using the manual instructions in step 4.
b. If the server in question is not running, attempt to start it via the Orbix Server Manager.
If the server starts successfully, check the issue again.
If the issue still exists, change the mode of the servers so that they will run as normal
console applications. Then continue troubleshooting using the normal console application
instructions in step 3.
If the issue is resolved, troubleshooting is complete.
If the server did not restart successfully, continue troubleshooting using the manual
instructions in step 4.
3. If the servers are running as normal console applications, check the status of the servers using the
Orbix psit command, and by manually inspecting the console window of each server executable.
If the server in question has an error message displayed in its console window, note the exact
error (take a screenshot of the console window). Then close the console window for that
server.
Attempt to restart the server again via the Orbix Server Manager.
If the server restarts successfully, check the issue again.
If the issue still exists, note all relevant information regarding the issue and contact IMS
Health.
If the issue is resolved, troubleshooting is complete.
If the server in question did not restart successfully, continue troubleshooting using the
manual instructions in step 4.
4. Troubleshoot the server manually as follows:
If the server in question is still running, shut down the server.
Attempt to launch the server via command line.
If the server restarts successfully, check the issue again.
If the issue still exists, note all relevant information regarding the issue and contact
IMS Health.
If the issue is resolved, troubleshooting is complete.
If the server in question did not restart successfully, note all relevant information
regarding the issue and contact IMS Health.
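
This guide does not spell out the exact manual invocation; as a sketch, the server executables listed in section 6.6.1.2.2 can be started directly from a command prompt window, assuming they require no additional arguments (an assumption, since the required arguments are not documented here):

$ cd /d "C:\Program Files (x86)\Cegedim\Nucleus360\5.2.0\MT\bin"
$ saiCommonSrv.exe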
7. STANDARDIZER SERVER
Standardizer Server is used by applications that require real-time address/name standardization, such as
Customer Maintenance. The server can accept and process multiple requests from the client applications
concurrently using TCP/IP queuing mechanisms. The server communicates with the client processes
using the TCP/IP protocol.
SYNOPSIS
run_stddaemon [--ip=ip_address] [--port=port_number] -[ltshv]

DESCRIPTION
This script can launch, stop, or test the standardizer daemon.

OPTIONS
--launch, -l
    Launch the standardizer daemon.
--stop, -s
    Stop the standardizer daemon.
--test, -t
    Test the standardizer daemon. Using this option, the user can dynamically enter an address to validate the standardized output from the daemon. It can also be used to get the service statistics.
--ip, -i
    IP address on which the daemon resides. Default is localhost. Example: --ip=127.0.0.1
--port, -p
    Port number on which the communication takes place. Default is 49152. Example: --port=49152
--debug, -d
    Debug level: (0) no logging, (1) errors, (2) terse, (3/4) log all. Default is 2.
--help, -h, -?
    Usage. Displays the usage of this script (the text you are reading now).
--version, -v
    Displays version information.
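
Putting these options together, typical invocations look like the following; the IP address and port shown are the documented defaults:

$ run_stddaemon --launch
$ run_stddaemon --test --ip=127.0.0.1 --port=49152
$ run_stddaemon --stop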
7.2.3 Output
The server logs all the requests it receives in a log file, which is named in the format
np_stddaemon_<timestamp>.log.
7.3 Running the Standardizer Daemon via the ONE Point Application
To launch the standardization daemon via the ONE Point application, right-click the Real Time
Standardization Service and select Start. To shut down the standardization daemon, right-click the Real
Time Standardization Service and select Stop.
7.4 Checking Status of the Server
7.4.1 View the Status
Use the test option (run_stddaemon -t) to check the status of the server. The test menu displays:

1. Address standardization
2. Name standardization
3. Daemon statistics
4. Exit testing
******************************************************************
Enter a valid choice:

Enter 4 to exit.
7.5 Troubleshooting
Note:
Refer to Appendix B for database-related errors.
Refer to Appendix E for ACE/IACE errors.
8. WEB SERVER
8.1 Overview
The Nucleus 360 Web Server is one of the standard tiers within the application architecture. The Web
Server handles the presentation layer for Web applications such as Customer Maintenance. It also
handles all interactions between the Web browser and the database interface.
8.2.1 Description
The Tomcat Web Server must be started for the Customer Maintenance Web GUI. This server can be started as a service or as a console application (refer to section 8.4). After the Web Server is started, the Customer Maintenance Web GUI is accessed through the user's Web browser by entering the URL in the browser's address bar.
http://<machine-name>:8080/CustMaint/Jsp/Login.jsp
Note: The user must specify the port number in the browser's address bar. By default, the Tomcat Web Server is configured to use port 8080.
8.3.1.1.1 Description
On installation, the home directory contains all the files and directories that are specific to the Tomcat Server, based on the installation directory provided by the user. By default, it is C:\Program Files (x86)\Apache Software Foundation\Tomcat 5.5. This directory is referred to as <Tomcat root>.
8.3.1.1.2 Subdirectories
Assuming that the installation directory is C:\Program Files (x86)\Cegedim\Nucleus360\5.2.0\GUI\CustomerMaintenance, this directory contains the following subdirectories:
Subdirectory Name   Description
jweb                Contains the jsp, jar, images, logs, and other directories of Customer Maintenance

Application log files are written to <INSTALL_PATH>\CustomerMaintenance\jweb\log.
1. Go to the Run command and type services.msc. When the services window opens, select Apache
Tomcat service and stop the service if it is running.
2. If the Customer Maintenance application is hosted on a domain, go to the <Tomcat root>\work\Catalina\<domain URL>\jweb\org\apache\jsp\CustMaint\Jsp folder (for example, <Tomcat root>\work\Catalina\qa-nucleus-cm.emea.cegedimdendrite.com\jweb\org\apache\jsp\CustMaint\Jsp). Otherwise, go to the <Tomcat root>\work\Catalina\localhost\jweb\org\apache\jsp\CustMaint\Jsp folder. Delete all the files from this folder.
Similarly, go to the <Tomcat root>\logs folder and archive or delete all the log files. Also archive or delete the application log files from the <INSTALL_PATH>\CustomerMaintenance\jweb\log folder.
3. Start the Apache Tomcat service from the service window again.
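
The same three steps can also be performed from an elevated command prompt. In this sketch, the service name "Apache Tomcat" and the localhost work path are assumptions based on a default installation; adjust both to match your environment:

$ net stop "Apache Tomcat"
$ del /q "<Tomcat root>\work\Catalina\localhost\jweb\org\apache\jsp\CustMaint\Jsp\*"
$ net start "Apache Tomcat"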
2. The Apache Tomcat icon is created in the System Tray (lower right corner of the screen). Right-click the Apache Tomcat icon and click the Start service menu item.
Go to the system tray, right-click the Apache Tomcat icon, and click the Stop service menu item. This will stop the Tomcat server.
8.5 Troubleshooting
9. DATA IMPORT PROCESS
9.1 Overview
The Data Import process is responsible for loading into the Nucleus 360 system all customer and affiliation data entities that originate from external systems such as SFA/CRM, a legacy or internal system, or third-party data providers such as the AMA or DEA. The Data Import process supports reading data from fixed-length, delimited, and XML-based files, as well as subscribing to messages from a message queue in real time. Depending on the data stream being used, the Data Import process can process the data using its Batch or Transactional Comparison modes for greater efficiency.
Once data has been received from the input source, the data must be validated, compared, and
processed. The Data Import process supports the following modes of operations:
Batch comparison mode processing: Two files having the same format are required for processing, the current version and a previous version. The two files are verified for correctness, and each transaction in the current and previous files is intelligently compared field by field; changes are identified, and transaction types (Add, Update, Delete, or No Change) are determined for processing and passed along to the Transaction Manager module.
If the transaction was successfully processed by the Transaction Manager module, the process is
repeated for the next transaction within the dataset. If the process is unsuccessful, then both the
current and previous transactions are written to an error file along with the error description, and the
process is repeated for the next transaction within the dataset.
This mode accepts profile data received in fixed-length, delimited, or flat file formats encoded using
the ASCII character set.
Transactional mode processing: A single file is verified for correctness and processed based on
pre-identified transactions within the file against the database records. The Import application
identifies Add, Update, and No Change transaction types, while the data source identifies all Delete
transactions. Each transaction is passed along to the Transaction Manager module.
If the transaction was successfully processed by the Transaction Manager module, the process is
repeated for the next transaction within the dataset. If the process is unsuccessful, then the
transaction is written to an error file along with the error description, and the process is repeated for
the next transaction within the dataset.
This mode accepts profile data received in fixed length, delimited, flat, or XML file formats encoded
using the ASCII character set.
Real-time mode processing: Transactions posted to message queues by external sources are
processed by the Import application. Each transaction within the pre-identified transaction set is
optionally transformed using an XSLT parser and verified for correctness using a Schema Definition
file (XSD). The Import application identifies Add, Update, and No Change transaction types, while the
message identifies all Delete transactions. Each transaction is passed along to the Transaction
Manager module.
If the transaction was successfully processed by the Transaction Manager module, the process is
repeated for the next transaction within the dataset. If the process is unsuccessful, then the
transaction is written to an error file/exception queue along with the error description and the
process is repeated for the next transaction within the dataset. Whether or not transactions are
processed successfully, they are removed from the message queue.
The Real-time Mode accepts profile data received in XML format encoded using the ASCII character
set.
Data Source   Data and Bad File Name Prefix   Old Data File   New Data File   New Bad File      Old Bad File      Data and Bad File Directory
AMA           AMA                             AMA<*>.old      AMA<*>.new      newbadAMA.sort    oldbadAMA.sort    <DRTE_THIRD_PARTY>\np\in\ama
AOA           AOA                             AOA<*>.old      AOA<*>.new      newbadAOA.sort    oldbadAOA.sort    <DRTE_THIRD_PARTY>\np\in\aoa
DEA           DEA                             DEA<*>.old      DEA<*>.new      newbadDEA.sort    oldbadDEA.sort    <DRTE_THIRD_PARTY>\np\in\dea
Prefix: The default file name prefix can be changed for a data source using the -p option of the run_dataimport.exe script.
Third Party Directory: The default third party directory name for a data source can be changed by setting the directory name in the environment variable <Prefix>_DIR.
Example:
a. Overriding the file naming convention of a standard data source. (File name prefix
change).
If the AMA data file name is prefixed with the letter F (say F1.old and F2.new), use the -p
option to override the default prefix AMA with F. A sample command line is provided here:
$ cd %DRTE_NP_BIN%
$ run_dataimport.exe -m=1000 -a -p=F
b. Overriding the data file directory of a standard data source without changing the file
naming convention.
To override the data file location, create an environment variable using the naming convention
Prefix_DIR, where Prefix is the data source file name prefix. As an example, to change the
default location of the AMA file, define an environment variable with the name AMA_DIR and
set the variable with the directory path of the file relative to the third party input directory
<DRTE_THIRD_PARTY>\np\in. Add the environment variable to the np_env.bat file to
export it automatically each time the np_env.bat file is run.
If the new AMA data file location is <DRTE_THIRD_PARTY>\np\in\data, open the
np_env.bat and add the following command to the end of the file. Save, close, and execute
the np_env.bat to export the environment variables.
Set AMA_DIR=data
The run_dataimport.exe script will now use the AMA data files from the new location data.
$ cd %DRTE_NP_BIN%
$ ..\..\np_env.bat
$ run_dataimport.exe -m=1000 -a
c. Overriding the data file directory of a standard data source and the file naming
convention.
The file name prefix is overridden using the -p option of the run script. To change the data file
directory along with the file name, create an environment variable using the new prefix and
set the variable with the directory path of the file relative to the third party input directory
<DRTE_THIRD_PARTY>\np\in. The environment variable naming convention is Prefix_DIR.
Add the environment variable to the np_env.bat file to export it automatically each time the
np_env.bat file is run.
As an example, if the AMA data file name is prefixed with the letter F (say F1.old and
F2.new), use the -p option to override the default prefix AMA with F. If the new AMA data
file location is <DRTE_THIRD_PARTY>\np\in\data, open the np_env.bat and add the
following command to the end of the file. Save, close, and execute the np_env.bat to export
the environment variables.
Set AMA_DIR=data
The run_dataimport.exe script will now use the AMA data files from the new location data.
$ cd %DRTE_NP_BIN%
$ ..\..\np_env.bat
$ run_dataimport.exe -m=1000 -a -p=F
d. Accessing data files from a different drive
If the third party data files are located in a drive that is different from the one where the Server
Layer processes are installed, set the DRTE_THIRD_PARTY environment variable with the
new drive name. The DRTE_THIRD_PARTY environment variable is defined in the np_env.bat
file. As an example, if the Server Layer processes are installed on the C drive and the data
files are located on the D drive, set the DRTE_THIRD_PARTY environment variable to D:
in the np_env.bat file. The run script always looks for the np\in directory beneath the directory
that is set in the DRTE_THIRD_PARTY environment variable. In our example, the AMA data
file will be picked from the D:\np\in\ama directory. If the ama folder is created directly under
the D drive as D:\ama, set the AMA_DIR environment variable with the value ..\..\ama.
Set DRTE_THIRD_PARTY=D:
Set AMA_DIR=..\..\ama
Note: By default, the archived data files are stored in the %DRTE_ROOT%\archives\np
directory, where DRTE_ROOT is defined in np_env.bat file. To store the archived files in a
different drive, set the new drive name in the DRTE_ROOT variable. Make sure that the new
drive has the following directory structure under it: DriveName:\archives\np.
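For example, to store the archived files on the E drive (E: is a hypothetical drive letter used
only for illustration), first create the required directory structure and then set DRTE_ROOT in
the np_env.bat file:
$ mkdir E:\archives\np
Set DRTE_ROOT=E: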
e. Defining the file name prefix and the data file directory of a new data source
For a new data source, define the file name prefix using the -p option. Using the prefix,
create an environment variable and then set the variable with the directory path of the data
file relative to the third party input directory <DRTE_THIRD_PARTY>\np\in. The variable
naming convention is Prefix_DIR. Add the environment variable to the np_env.bat file to
export it automatically each time the np_env.bat file is run.
As an example, suppose you want to import a new data source, State License, whose data files are
named using the convention STL_MMDDYYYY.old and STL_MMDDYYYY.new, located
in the directory %DRTE_THIRD_PARTY%\np\in\stl. Open the np_env.bat and add the
following command to the end of the file. Save, close, and execute the np_env.bat to export
the environment variables.
set STL_DIR=stl
Following is a sample command line to execute the run script.
$ cd %DRTE_NP_BIN%
$ run_dataimport.exe -m=2000 -a -p=stl
The last modified date of the new file is set to the system timestamp to make it more recent than
the old file.
The following operations are performed if the Data Import application completes successfully:
A copy of the old and the new file is compressed using the WinZip command line utility
(WinZip Pro v11 with the command line add-on support). The zipped file name will be of the
format:
archive.<Prefix>.YYYYMMDD_HHMISS.zip
For example, the archive file name of a DEA data load will be
archive.DEA.20080327_155224.zip. The zipped file will be archived in the
<DRTE_ROOT>\archives\np directory.
The old data file is deleted.
The new data file extension is renamed to .old, to be used as an old file in the next run.
The database maintenance process can be optionally invoked to analyze the tables that
are impacted by the data import process.
An exit code of zero (0) is returned from the run script if the Data Import process
completes successfully; otherwise a non-zero value is returned.
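Because the run script reports its status through the exit code, a wrapper batch script can test
the result before continuing. A minimal sketch from the Windows command prompt (the mapping ID
1000 is reused from the examples above):
$ run_dataimport.exe -m=1000 -a
$ if %ERRORLEVEL% neq 0 echo Data Import failed with exit code %ERRORLEVEL%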
Other features of the run script are as follows:
The bad file names are generated based on the data source, and the files will be created in the
<DRTE_THIRD_PARTY>\np\in\<data source> directory. Refer to the above table for format details.
The log file name will be generated based on the data source and placed in the
<DRTE_THIRD_PARTY>\np\log directory.
Format: npdataimport_<Prefix>.log
Example: npdataimport_DEA.log will be generated for the DEA data source.
An error message similar to the following is displayed if the directories have already expired. The error
message will always contain the following text: check paths in ace.cfg and dir expiration dates. The Data
Import process should not be run until the Postalsoft directory updates are installed.
The specified ZIP+4 directory has expired. Exception: ACE_OPEN failed, check
paths in ace.cfg and dir expiration dates.
An error message similar to the following is displayed if the directories are about to expire. The error
message will always contain the following text: check paths in configuration file and dir expiration dates.
The Data Import process should not be run until the Postalsoft directory updates are installed.
Open (Engine) failed, check paths in configuration file and dir expiration
dates. ZIP+4 directory expires in n days.
-m Mapping ID. To get the mapping ID of a data source, execute the following command at the
prompt:
run_dataimport.exe -?
Note: What might be considered a normal transaction count for one data source might not be normal for
another. Being able to identify a normal transaction count comes with experience. Excessively high or
low transaction counts do not by themselves constitute an error, but they most likely indicate an
exceptional condition. Investigation into the cause of the abnormal transaction count might be
necessary.
Example:
$ run_dataimport.exe -m=1010 -a
9.9.1 Overview
The OneKey source is configured as a third party data source and data is loaded into Nucleus 360 Core
tables using the Data Import Process. Unlike other data sources, OneKey provides international data.
Therefore, the address standardization feature provided by the ACE Postalsoft engine is disabled for this data
source.
A separate ONE Point configuration file, BMS_Onepoint.xml, has the configurations for loading the
OneKey codes and data through the Data Import process.
OneKey data and codes can be loaded to staging tables by running either the Automatic OneKey
Staging Import and Code processing or Manual OneKey Staging Import and Code processing
configurations.
OneKey data should be loaded to core tables after the OneKey data and codes are loaded to staging
tables either by running the Automatic OneKey Data Processing or Manual OneKey Data
Processing configurations in IDL or Incremental modes.
IDL OneKey data load is carried out in two phases.
Automatic OneKey Data Processing (IDL) or Manual OneKey Data Processing (IDL): This
process will load all the OneKey data, including the inactive data, into the Nucleus 360 database.
Automatic OneKey Data Processing (IDL Inactive) or Manual OneKey Data Processing (IDL
Inactive): This process will inactivate the inactive data loaded as part of the IDL. The incremental
mode takes care of deleting/inactivating OneKey data when it is run, with no need to perform any
further steps.
Following are the processes configured for loading OneKey Data and Codes using Manual and Automatic
Data Processing configurations.
DB Connection Verification - This process verifies the database connection using the
COF_DBConnect application.
Import OneKey Data to Staging tables - Perl process that invokes the Oracle SQL*Loader utility to
load OneKey data and codes from flat files received from the OneKey source into temporary staging
tables. Following is the list of staging tables populated by this process; a sample SQL*Loader
invocation is sketched after the table.
Staging Table Name Comments
STG_OK_RELATION OneKey Relation data from different countries.
STG_OK_WORKPLACE OneKey Workplace data from different countries.
STG_OK_INDIVIDUAL OneKey Individual data from different countries.
STG_OK_ACTIVITY OneKey Activity data from different countries.
STG_OK_ADDRESS OneKey Address data from different countries.
STG_OK_WORKPLACE_ADDRESS_REL OneKey Address relation data from different countries.
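For reference, the Perl process issues SQL*Loader invocations equivalent to the following sketch.
The control, data, and log file names (stg_ok_individual.ctl, individual.dat, stg_ok_individual.log)
are hypothetical placeholders, not the actual names shipped with the product; the NP_ADMIN
connection is the one described in the run steps later in this section.
$ sqlldr userid=NP_ADMIN/<password>@<connect_string> control=stg_ok_individual.ctl data=individual.dat log=stg_ok_individual.log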
Migrate OK codes from STG to BLT - PL/SQL process that loads OneKey code data from the
OneKey codes staging table to the appropriate BLT table. The following shows the association
between the OneKey Codes staging table and the OneKey Codes BLT table.
Staging Table Name | Comments | BLT Table Name
STG_OK_CODES | OneKey code data from different countries. | BLT_OK_CODES
Migrate OK Data from STG to BLT - PL/SQL process that loads OneKey Entity data from OneKey
data staging tables to the appropriate BLT tables. The following table shows the association between
OneKey data staging tables and OneKey data BLT tables.
Staging Table Name BLT Table Name
STG_OK_RELATION BLT_OK_RELATION
STG_OK_WORKPLACE BLT_OK_WORKPLACE
STG_OK_INDIVIDUAL BLT_OK_INDIVIDUAL
STG_OK_ACTIVITY BLT_OK_ACTIVITY
STG_OK_ADDRESS BLT_OK_ADDRESS
STG_OK_WORKPLACE_ADDRESS_REL BLT_OK_WORKPLACE_ADDRESS_REL
Populate code verification table - PL/SQL process that evaluates the OneKey codes in the
OneKey code BLT table and populates the qualified codes into the
EXTERNAL_CODES_VERIFICATION table with an APPROVAL_STATUS_CD as Auto Approved
(506905).
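To verify the outcome of this step, the count of auto-approved codes can be checked from SQL*Plus
with a query along these lines (a minimal sketch using the table and code value named above):
SQL> select count(*) from EXTERNAL_CODES_VERIFICATION where APPROVAL_STATUS_CD = 506905;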
Create/Update FND Codes - PL/SQL process that evaluates code records in the
EXTERNAL_CODES_VERIFICATION table with APPROVAL_STATUS_CD as Auto Approved (506905) or
Manually Approved (506904) and PROCESSING_STATUS_CD as Not Processed (506907), determines the
type of transaction (Insert/Update/Inactivate) for each code item, and modifies the respective FND
code tables. Following is the list of tables modified by this process; a sample backlog query is
shown after the list.
FND CODE TABLES MODIFIED
FND_CODE_ITEM
FND_CODE_ITEM_STRING
FND_CODE_EXTERNAL_ITEM
FND_CODE_HEADER
FND_CODE_HEADER_STRING
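As noted above, the backlog feeding the Create/Update FND Codes step can be estimated with a
query such as the following (a sketch based on the status codes listed in the description):
SQL> select count(*) from EXTERNAL_CODES_VERIFICATION where APPROVAL_STATUS_CD in (506904, 506905) and PROCESSING_STATUS_CD = 506907;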
Import Workplace - Data Import application configured to load OneKey Workplace data from
OneKey BLT tables to Nucleus 360 core tables. Profiles created for this OneKey entity are classified
as Organization profiles. Following is the list of tables modified by this process.
CORE TABLES MODIFIED
CUS_PRIMARY_ID_XREF
CUS_SECONDARY_ID_XREF
CUS_ORGANIZATION_PROFILE
CUS_ADDRESS
CUS_OPTOUT_PROFILE
CUS_SPECIALTY
CUS_BUSINESS_PERIOD
Import Individual - Data Import application configured to load OneKey Individual data from OneKey
BLT tables to Nucleus 360 core tables. Profiles created for this OneKey entity are classified as
Professional profiles. Following is the list of tables modified by this process.
CORE TABLES MODIFIED
CUS_PRIMARY_ID_XREF
CUS_SECONDARY_ID_XREF
CUS_PROFESSIONAL_PROFILE
CUS_ADDRESS
CUS_PROFESSIONAL_DEGREE
CUS_OPTOUT_PROFILE
CUS_SPECIALTY
Import Individual_Relation - Data Import application configured to load OneKey Individual Relation
data from OneKey BLT tables to Nucleus 360 core tables. Data from this entity is loaded as
Professional Affiliations into CUS_AFFIL_PROFILE table.
Import Individual_Roles - Data Import application configured to load OneKey Individual Roles data
from OneKey BLT tables to Nucleus 360 core tables. Data from this entity is loaded as Professional
Affiliation Roles into CUS_AFFIL_ROLE table.
Import Workplace_Communication - Data Import application configured to load OneKey
Workplace Communication data from OneKey BLT tables to Nucleus 360 core tables. Data from this
entity is loaded as Organization Communication data into CUS_COMMUNICATION_PROFILE table.
Import Individual_Communication - Data Import application configured to load OneKey Individual
Communication data from OneKey BLT tables to Nucleus 360 core tables. Data from this entity is
loaded as Professional Communication data into CUS_COMMUNICATION_PROFILE table.
Import Workplace_Identifiers - Data Import application configured to load OneKey Workplace
Identifier data from OneKey BLT tables to Nucleus 360 core tables. Data from this entity is loaded as
Organization Identifier data into CUS_SECONDARY_ID_XREF table.
Import Individual_Identifiers - Data Import application configured to load OneKey Individual
Identifier data from OneKey BLT tables to Nucleus 360 core tables. Data from this entity is loaded as
Professional Identifier data into CUS_SECONDARY_ID_XREF table.
Import Workplace_Specialty - Data Import application configured to load OneKey Workplace
Specialty data from OneKey BLT tables to Nucleus 360 core tables. Data from this entity is loaded as
Organization Specialty data into CUS_SPECIALTY table.
Import Individual_Specialty - Data Import application configured to load OneKey Individual
Specialty data from OneKey BLT tables to Nucleus 360 core tables. Data from this entity is loaded as
Professional Specialty data into CUS_SPECIALTY table.
Import Workplace_Business_Hours - Data Import application configured to load OneKey
Workplace Business Hours data from OneKey BLT tables to Nucleus 360 core tables. Data from this
entity is loaded as Workplace Business Hours data into CUS_BUSINESS_HOURS table.
Import Individual_Business_Hours - Data Import application configured to load OneKey Individual
Business Hours data from OneKey BLT tables to Nucleus 360 core tables. Data from this entity is
loaded as Individual Business Hours data into CUS_BUSINESS_HOURS table.
Delete Duplicate/Inactive Workplace - Data Import application configured to load Inactive OneKey
Workplace profiles from OneKey BLT tables to Nucleus 360 core tables. This process will inactivate
the OneKey organization profiles which were loaded as Active during IDL. CUS_PRIMARY_ID_XREF
table will be modified by this process.
Delete Duplicate/Inactive Individual - Data Import application configured to load Inactive OneKey
Individual profiles from OneKey BLT tables to Nucleus 360 core tables. This process will inactivate
the OneKey professional profiles which were loaded as Active during IDL. CUS_PRIMARY_ID_XREF
table will be modified by this process.
2. Specify the path of the SQL*Loader control files directory for Control Files Directory. A default value
is already available.
3. Specify the path of the OneKey data files directory for Data Files Directory. A default value is already
available.
4. Specify the connection string details of the NP_ADMIN user for Connection String. Replace the
placeholder with the actual connect string details.
5. Click OK to run the process.
6. Check log file <NP_ROOT>\np\log\ok_staging_data_load.log to view the results logged.
2. Keep the default values for Stored procedure name and Log File name parameters.
3. Click OK to run the process.
4. Check log file <NP_ROOT>\np\log\ok_populate_codes_verification.log to view the results logged.
2. Keep the default values for Stored procedure name and Log File name parameters.
3. Click OK to run the process.
4. Check log file <NP_ROOT>\np\log\ok_fnd_codes_create.log to view the results logged.
2. Select a value from the drop down for Mapping. This value determines the mapping to be used by
the process. A default mapping name is already selected.
3. Enter the query file name for Query File Name. The default query filename is provided.
4. Select a value from the drop down for Dry Run. The default value is already selected.
5. Select a value from the drop down for Skip Validation. The default value is already selected.
6. Enter the log file name for Log File. The default log file name is provided.
7. Enter the bad file name for Bad File. The default bad file name is provided.
Note: The rejected profiles written to a bad file can be reprocessed after fixing the exception for
failure. To re-process the rejected profiles, refer to troubleshooting section 9.13.22.1.
8. Click the OK button to start the Import process.
9. Check the Application log file, <NP_ROOT>/np/log/IDL_ok_npdataimport_Workplace.log for the
data import statistics.
6. Enter the bad file name for Bad File. A default bad file name is provided.
Note: The rejected profiles written to a bad file can be reprocessed after fixing the exception for failure.
To re-process the rejected profiles, refer to troubleshooting section 9.13.22.4.
7. Click the OK button to start the Import process.
8. Check the Application log file, <NP_ROOT>/np/log/IDL_ok_npdataimport_Individual_Relation.log
for the data import statistics.
5. Select a value from the drop down for Skip Validation. The default value is already selected.
6. Enter the log file name for Log File. A default log file name is provided.
7. Enter the bad file name for Bad File. A default bad file name is provided.
Note: The rejected profiles written to a bad file can be reprocessed after fixing the exception for
failure. To re-process the rejected profiles, refer to the troubleshooting section 9.13.22.8.
8. Click the OK button to start the Import process.
9. Check the Application log file,
<NP_ROOT>/np/log/IDL_ok_npdataimport_Individual_Identifiers.log for the data import statistics.
2. Each process in the configuration starts to execute independently in the order it appears in the ONE
Point console application.
3. Check the log files for each process in the log directory <NP_ROOT>/np/log to check the statistics.
4. The Automatic OneKey Data Processing cycle will stop if any process fails during execution.
Resolve the issue(s) and restart the cycle.
9.10 Operation Instructions to Load OneKey F-II Data into Nucleus 360
9.10.1 Overview
OneKey F-II data is the same data set from OneKey but in a different interface. Refer to the DID for
OneKey F-II interface details.
Following are the major changes in terms of processing and storing data into Nucleus 360 between
OneKey F-I (section 9.9) and OneKey F-II.
1. It has more normalized data.
2. Only two data import mappings are configured: one for Org and one for Prof.
3. Data is received and loaded in OneKey main object mode.
A small ONE Point configuration is provided just for loading OneKey F-II data. Once the data is loaded,
the remaining post-data-import steps should be run as configured and as required by the client.
Following are the important data load steps for OneKey F-II.
1. Open ONE Point console using rt_config_nuc_okc.xml config file for OneKey F-II data load.
2. Open a command prompt and change directory to <NP_ROOT>\np\bin folder.
CMD> cd <NP_ROOT>\np\bin
3. Execute the following command from there.
CMD> ..\..\np_env.bat
4. Launch ONE Point console using the rt_config_nuc_okc.xml file as shown.
CMD> nponepoint.exe rt_config_nuc_okc.xml
5. The ONE Point console lists the configured processes. Run the processes one by one to load OneKey
F-II data into Nucleus 360.
A Perl process invokes the Oracle SQL Loader utility to load OneKey data and codes from flat files
received from OneKey source into temporary staging tables. Following is the list of staging tables
populated by this process. Copy the files to be loaded, after unpacking them, into the Data Files
Directory shown above. Make sure the connection string is correct and click OK.
Staging Table Name Comments
STG_OKC_RELATION OneKey F-II Relation data from different countries.
STG_OKC_WORKPLACE OneKey F-II Workplace data from different countries.
A PL/SQL process loads OneKey data from the OneKey staging tables to the appropriate BLT
tables. The following shows the association between the OneKey staging tables and the OneKey BLT
tables.
Staging Table Name BLT Table Name
STG_OK_RELATION BLT_OK_RELATION
STG_OK_WORKPLACE BLT_OK_WORKPLACE
STG_OK_INDIVIDUAL BLT_OK_INDIVIDUAL
STG_OK_ACTIVITY BLT_OK_ACTIVITY
STG_OK_ADDRESS BLT_OK_ADDRESS
STG_OK_WORKPLACE_ADDRESS_REL BLT_OK_WORKPLACE_ADDRESS_REL
STG_OKC_ACTIVITY_ROLE BLT_OKC_ACTIVITY_ROLE
STG_OKC_BUSINESS_HOURS BLT_OKC_BUSINESS_HOURS
STG_OKC_COMPLIANCE BLT_OKC_COMPLIANCE
STG_OKC_EXTERNAL_ID BLT_OKC_EXTERNAL_ID
STG_OKC_INDIVIDUAL_EDUCATION BLT_OKC_INDIVIDUAL_EDUCATION
STG_OKC_MERGE BLT_OKC_MERGE
STG_OKC_PHONE_MEDIA BLT_OKC_PHONE_MEDIA
STG_OKC_QUALIFYING_DATA BLT_OKC_QUALIFYING_DATA
Populate code verification table - PL/SQL process that evaluates the OneKey codes in the
OneKey code BLT table and populates the qualified codes into the
EXTERNAL_CODES_VERIFICATION table with an APPROVAL_STATUS_CD as Auto Approved
(506905).
OneKey F-II data comes to Nucleus in main object mode. This means that only data that has changed
is sent to Nucleus; for example, if a Specialty is modified, then only the Specialty data for the
Individual or Workplace is received from OneKey in the Qualifying Data file. However, Nucleus
requires a top-level customer, either Prof or Org, to load any data. Hence this PL/SQL method
creates a complete view of the profile by putting the Individual ID or Workplace ID in the
STG_OKC_INDIVIDUAL or STG_OKC_WORKPLACE table. Only the top-level entity is selected. Data
Import now supports loading partial transactional data: every entity carries an operation type, and
the data import reacts to the operation type for each entity.
Populate_okc_xml_staging_table
OneKey F-II data is not loaded using the flat file transactional mode as in F-I. In F-II, data is
converted to XML structures. In this step, each profile (Prof or Org) is converted to an XML message.
The XML message has each sub-entity, such as address or specialty, as a node. When this step is run,
it converts the entire staging OneKey F-II data to XML structures and stores it in the following tables:
STG_OKC_PROF_XML_INPUT
STG_OKC_ORG_XML_INPUT
These tables have a column ROW_ID that contains the Workplace ID or Individual ID, and a column
XML_DATA that holds the actual XML message for that profile.
This step splits the input profiles into groups for parallel processing. In the staging tables
STG_OKC_PROF_XML_INPUT and STG_OKC_ORG_XML_INPUT mentioned above, there is a column called
rank. By default the rank is a group ID, and the data is split into 20 groups each for Prof and Org. It
can be configured to split into as many groups as required. Depending on the number of groups, that
many Data Import copies are invoked in parallel for better performance.
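The distribution of profiles across groups can be inspected with a query such as the following (a
sketch; it assumes the rank column described above and shows the Prof table, the Org table being
analogous):
SQL> select rank, count(*) from STG_OKC_PROF_XML_INPUT group by rank order by rank;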
This step loads the data rejected in the step above. Unlike the F-I data load, in F-II rejected profile
data is not written to bad files; rather, it is written to a table called CUS_EXCEPTION_PROFILE. When
this step is run after fixing the underlying issue or code failure, data is read from the
STG_OKC_PROF_XML_INPUT or STG_OKC_ORG_XML_INPUT table for only those profiles written to the
exception table. After successful processing, the entry is removed from the exception table.
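The backlog of rejected profiles can be checked before and after running this step with a simple
count against the exception table named above (a minimal sketch):
SQL> select count(*) from CUS_EXCEPTION_PROFILE;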
9.13 Troubleshooting
This section provides information about error messages.
9.13.1 Check paths in ace.cfg and dir expiration dates (or) Check paths in
configuration file and dir expiration dates
Failure Description
Postalsoft directory files have expired. An error message similar to the one given here is an indication of
Postalsoft directory expiration.
The specified ZIP+4 directory has expired. Exception: ACE_OPEN failed, check paths in ace.cfg and dir
expiration dates. Press return to continue.
Or
Open (Engine) failed, check paths in configuration file and dir expiration dates. ZIP+4 directory expires in
n days.
Failure condition
This error occurs when the Postalsoft directory files are not current and have expired.
Solution
Install the current directory updates for Postalsoft received from Business Objects. Refer to section 9.5.2
for details.
9.13.4 Delta Exception: Phase Init(0), Code: delta engine initialization error(10)
Description: errno(2) LFileResultSet open error in <new_file.txt>
Failure Description
Delta Exception: Phase Init(0), Code: delta engine initialization error(10) Description: errno(2)
LFileResultSet open error in <new_file.txt>
Failure condition
This error occurs when either the new or the old data file does not exist in the path specified with the
-n option (new file) or the -o option (old file).
Solution
Make sure the old and new files exist in the specified paths.
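For example, for the AMA data source, the presence of both files can be confirmed from the
command prompt (a sketch assuming the default AMA directory and file naming convention):
$ dir %DRTE_THIRD_PARTY%\np\in\ama\AMA*.old
$ dir %DRTE_THIRD_PARTY%\np\in\ama\AMA*.new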
9.13.7 FAIL: transaction rejected and sent to bad files. See log.
Failure Description
FAIL: transaction rejected and sent to bad files. See log.
Failure condition
When the npdataimport process rejects a transaction, it is saved in a bad file. The bad file has a
.sort file name extension. The process rejects transactions based on its configuration. Any of the
following situations could cause a transaction to be rejected; the list is not exhaustive, but these are
the most common reasons.
A Delete transaction on an already inactive profile.
An Update transaction on a non-existing profile.
An Insert transaction on an existing active profile.
A transaction has a blank address, and Nucleus 360 is configured to reject empty addresses.
A transaction includes an invalid Physician Specialty that is not configured in the Nucleus 360
database.
9.13.8 No item code is found for External Item Code external item
Failure Description
FAIL: transaction rejected and sent to bad files. See log.
Delta Exception: Phase Delta(3), Code: transaction manager processing error(13) Description: Code: 0|
Desc: No item code is found for External Item Code external item in the Header Code code_header_id
Failure condition
This error occurs when external codes in the file (for Degree, Prefix, Suffix, and so on) are not mapped
to Nucleus 360 internal codes.
Solution
Add an external code for that reference using Code Table Maintenance (refer to the Code Table
Maintenance Users Guide for more information). Reload the bad file after successfully creating the
external code.
If only an old bad file exists, create a dummy empty file representing the new file. If only a new bad
file exists, create a dummy empty file representing the old file. The last modified date or timestamp for
the new bad file should be more recent than that of the old bad file. If both files exist, use the options
listed below. Note that an old bad file and a new bad file from a run of Data Import may be
created for different profiles; in this case it is important to resolve errors related to both profiles
before attempting a run of the bad files, or bad file(s) may be created again.
Perform a dry run and analyze results. Substitute <mapping_id>, <old_bad_file>, <new_bad_file> and
<log_file_name> with actual values before executing.
$ cd %DRTE_NP_BIN%
$ npdataimport -m<mapping_id> -o<old_bad_file> -n<new_bad_file>
-dold1.sort -wnew1.sort -l%DRTE_NP_LOGS%\<log_file_name> -N
Load the data and analyze results.
$ cd %DRTE_NP_BIN%
$ npdataimport -m<mapping_id> -o<old_bad_file> -n<new_bad_file>
-dold1.sort -wnew1.sort -l%DRTE_NP_LOGS%\<log_file_name>
file name - Name of the new file specified with the -n option
record # - Record number in which the length was improper.
Failure condition
The Import process validates the non-XML, fixed-length input files before applying them to the database.
It calculates the record length from the individual field lengths. If the record length is N, then position
N+1 should contain an EOL marker; if an EOL marker is present but not at position N+1, this error
message is produced.
Solution
Check the record for a missing or extra length at some position. Correct the length of the record
to the expected record length and re-run the process.
9.13.12 Key field field name is empty in file name row record # field: field
position
Failure Description
validation error: Key field field name is empty in file name row record # field: field position
field name - Name of the field that has violated this rule.
file name - Name of the new file specified with the -n option
record # - Record number in which the empty key was found.
field position - Position of the field in the record.
Failure condition
This error occurs when a field is defined as a key field in the input file and the value for that field is empty.
Solution
Key fields cannot be empty in the file. Provide a valid value for the key field in the file and import the
records.
9.13.13 Not null field field name is empty in file name row record # field:
field position
Failure Description
validation error: Not null field field name is empty in file name row record # field: field position
field name - Name of the field that has violated this rule.
file name - Name of the new file specified with the -n option
record # - Record number in which the empty key was found.
field position - Position of the field in the record.
Failure condition
This error occurs when a field is defined as a mandatory field in the input file and the value for that field is
empty.
Solution
Not null fields cannot be empty in the file. Provide values for all the required fields in the file and import
the records.
...
</SRC_TYPE>
<COMMON_MESSAGE_XSLT_FILE>main_transform.xsl</COMMON_MESSAGE_XSLT_FILE>
</CONFIGURATION_LIST>
Solution
In the dbcfg.txt file in the %DRTE_NP_BIN% folder, change the MAXCON value to one greater than the
number of CONFIGURATION blocks in the config.xml file.
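For example, if the config.xml file contains four CONFIGURATION blocks, MAXCON would be set to 5.
This is a sketch that assumes dbcfg.txt uses simple name=value entries; check the existing entry in
your installation for the exact format.
MAXCON=5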
9.13.18 An Unhandled Exception occurred. Refer to the dump file for details:
npdataimport.exe_YYYYMMDD_HHMMSS.dmp
Failure Description
When the Data Import application crashes due to an Unknown Error, it creates one or more dump files in
the application directory (BIN). The dump file is in binary format and it has details regarding the crash.
Solution
The files should be sent to the IMS Health Support team via email or FTP so that they can determine the
exact location in the source at which the crash occurred. The dump file name will be of the format
npdataimport.exe_YYYYMMDD_HHMMSS.dmp and is created in the bin directory of the server layer
installation.
<TransactionSetProperties
null_representation="implicit"
transaction_scope="entity">
<DateTimeFormats dateTimeFormat="YYYY MM DD HH24:MI:SS"/>
<Source>SFA</Source>
<Message_Version>1.0.0</Message_Version>
<COM_Version>1.0.0</COM_Version>
<Application_Version>1.7.1</Application_Version>
</TransactionSetProperties>
<Exception>
<Type>IMPORT</Type>
<DateTime>08/02/2008 03:35:23 PM</DateTime>
<Description>Code: 0|Desc: No item code is found for External Item Code Work in the Header
Code 1109|ProfileID: 10783501|ProfileType: 112451|ProfileIDVal: |ProfileIDType: 112451|
Datasource: 200078|DBErrorCode: 0|DBErrorMsg:</Description>
</Exception>
<xt:Professional
objectId="1299"
operation="REFRESH"
type_name="Professional"
xmlns:xt="http://schemas.dendrite.com/IS/1.0.0/Professional">
<Customer_ID>TEST_CUSTOMER_ID</Customer_ID>
<Activity_Status>CURR</Activity_Status>
<Professional_Type>Pharmacist</Professional_Type>
<First_Name>John</First_Name>
<Middle_Name></Middle_Name>
<Last_Name>Smith</Last_Name>
<Data_Source>SFA</Data_Source>
<Address_List type_name="Address_List">
<Address objectId="1300" operation="REFRESH" type_name="Address">
<Address_ID>TEST_ADDRESS_ID</Address_ID>
<Address_Type>Work</Address_Type>
<Address_Line1>1405 Route 206 South</Address_Line1>
<Address_Line2></Address_Line2>
<City>Bedminster</City>
<Region/>
<Postal_Area>07921</Postal_Area>
<Activity_Status>CURR</Activity_Status>
</Address>
</Address_List>
</xt:Professional>
</ts:TransactionSet>
2. Select a value from the drop down for Mapping. This value determines the mapping to be used by
the process. A default mapping name is already selected.
3. Enter the exception file name for Exception File Name. A default file name is already provided.
4. Select a value from the drop down for Dry Run. The default value is already selected.
5. Select a value from the drop down for Skip Validation. The default value is already selected.
6. Enter the log file name for Log File. A default log file name is provided.
7. Enter the bad file name for Bad File. A default bad file name is provided.
8. Click the OK button to start the Import process.
9. Check the Application log file, <NP_ROOT>/np/log/ok_npdataimport_Workplace_exception.log
for the data import statistics.
5. Select a value from the drop down for Skip Validation. The default value is already selected.
6. Enter the log file name for Log File. A default log file name is provided.
7. Enter the bad file name for Bad File. A default bad file name is provided.
8. Click the OK button to start the Import process.
9. Check the Application log file,
<NP_ROOT>/np/log/ok_npdataimport_Workplace_Identifiers_exception.log for the data import
statistics.
10.1 Overview
The DQT process populates the de-normalized tables for Professionals and Organizations. Population of
the Professional table (DQT_PROFESSIONAL_PROFILE) or the Organization table
(DQT_ORGANIZATION_PROFILE) is based on the CUS_PRIMARY_ID_XREF.PROFILE_TYPE_CD.
The process is implemented in an Oracle PL/SQL package, PKG_Denorm_Populate, which exposes the
following procedures/functions to execute it in various modes (a sample invocation appears at the end
of this overview):
1. Main: Procedure to launch the DQT process in Incremental or Refresh mode.
2. ProcessImportedProfiles: Function to process profiles that are in the STG_IMPORTED_PROFILE table.
3. ProcessSingleProfile: Procedure to process a single profile.
PKG_Denorm_Populate.ProcessImportedProfiles(
tUserId IN FND_SYS_USER.SYS_USER_ID%TYPE,
nCacheSize IN NUMBER := 300000,
nCommitInterval IN NUMBER := 1000
);
The previous query returns the status of the DQT process irrespective of the nProcessMode value. If the
process has no errors and the query returns a value of 0, the DQT process is done.
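For reference, the DQT process can be launched from SQL*Plus by invoking the Main procedure, as
also seen in the troubleshooting output in section 10.8 (a minimal sketch; any optional mode
parameters are omitted):
SQL> exec pkg_denorm_populate.main;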
10.8 Troubleshooting
ORA-06512: at line 1
or
BEGIN pkg_denorm_populate.main; END;
*
ERROR at line 1:
ORA-20501: ORA-06508: PL/SQL: COULD NOT FIND PROGRAM UNIT BEING CALLED occurred
in main
ORA-06512: at "NP322_TRAINING.PKG_DENORM_POPULATE", line 1823
ORA-06512: at line 1
Failure condition
This condition would occur when one of the packages is invalid. The name of the invalid package can be
seen in the error message.
Solution
Recompile the specified package.
10.8.4 When the error message range is not between ORA-20000 and ORA-20999
Failure Description
Get_Comm Error
ProfileId: 1234
AddressId: 3456
SQLCODE: ORA-02291
SQLERRM: integrity constraint (FK_PROFESSIONAL_PROFILE_PROF) violated - parent key not found
11.1 Overview
Each significant component of the customer record is evaluated against a set of pre-configured (and
extensible) data quality rules. Depending on the result of the rule evaluation, a data quality value is
assigned to each record. Currently the following data record types can be evaluated and/or processed:
Professional names (CUS_PROFESSIONAL_PROFILE)
Organization names (CUS_ORGANIZATION_PROFILE)
Address (CUS_ADDRESS)
Secondary identifiers (CUS_SECONDARY_ID_XREF)
The result of the Data Quality indicators can be used downstream to affect the quality of other
processes, such as Hash, RDM, Match or custom exports. Processes such as the Reports and DQT have
access to the Data Quality indicators.
11.8 Troubleshooting
Solution
Pass a valid number as an argument. Refer to section 11.2 for process parameters. When running
through the ONE Point Console application, choose the option values from the drop-down list before
running the process.
Solution
Provide a valid process mode as an argument when running directly from a command prompt. Refer to
section 11.2 for process parameters.
12.1 Overview
The Hash process creates a cross-reference table with string-based lookup keys for each profile. Lookup
keys are created by combining various sub-strings of essential information (name and address
components) of each profile. These keys serve as a compressed sub-string representation of the profile.
This causes the Hash process to create lookup keys of data based on intentionally vague criteria, thereby
allowing the Match process to process not only exact duplicates but also profiles that might have
approximately matching values (due to misspellings, truncation, and so on).
The keys created are used in searching for candidate profiles within the Match pre-process (profiles
having the same lookup keys become candidates for each other). This process enables the Match process
to work more effectively and efficiently in terms of retrieving the candidate records quickly (due to a key
lookup as opposed to a string-based search of the database) and selecting all of the probable candidates.
The generated Hash key is stored in the NUC_PROFILE_LKUP table.
3. NP_CFG_HASH_FIELD: This table defines the database columns that are used by the Hash
process to generate lookup keys. Fields that are defined as FIELD_ORDER 0 are not used in
the lookup key generation but are used by the Hash process internally. Fields that are defined
with FIELD_ORDER greater than zero are passed to the Pattern Set in the sequence mentioned
in the FIELD_ORDER. The Pattern Set uses these fields to generate lookup keys.
12.4.3 Output
Application status and process metrics are stored in a log file. The log file name is in the format
nphash_<timestamp>.log. Generated Hash keys are stored in the NUC_PROFILE_LKUP table.
Note: Hash keys will not be generated for a profile if any of the following conditions are true:
The Data Quality process was not run successfully.
The profile is inactive.
No address records are found in CUS_ADDRESS table.
All addresses are inactive in CUS_ADDRESS table.
An active address is found but the address is not standardized. (In other words, an address record is
not found in CUS_ADDRESS_COMPONENT table.)
The profile type is Professional, and professional details are not found in the
CUS_PROFESSIONAL_PROFILE table.
The profile type is Organization, and organization details are not found in the
CUS_ORGANIZATION_PROFILE table.
12.5.7 To Refresh Hash Keys for a Range of Profiles Without Submitting Them for
Matching
Execute the following command to generate lookup keys for a range of profiles that are currently locked in
the STG_IMPORTED_PROFILE table. In this mode the profiles will not be copied to the
NP_MATCH_PENDING table.
$ cd %DRTE_NP_BIN%
$ nphash.exe -a -l%DRTE_NP_LOGS%\nphash.log -x<Start Profile ID> -y<End Profile ID>
Note: The Copy Imported Profiles to Match Pending option should be set to Yes if the Profile
Selection Criteria is Staging Imported Profiles Only. This option is not applicable for other profile
selection criteria.
Refresh mode
1. If the process is running in Refresh mode, execute the following query to get the count of all active
profiles considered for refresh.
SQL> select count (profile_id) from cus_primary_id_xref where delete_ind = 0;
2. Then check the progress of the process by executing the following query to get the number of profiles
for which the Hash keys have currently been regenerated. This count will increase as the process
progresses.
SQL> select count (distinct profile_id) from nuc_profile_lkup;
12.9 Troubleshooting
12.9.1.1 ORA-01555 snapshot too old: rollback segment number string with name
string too small
Failure Condition
This could happen when multiple processes are running concurrently on the same set of data. Records
that are read by one process are overwritten by another process.
Solution
Avoid running concurrent processes that work on the same set of data.
12.9.2 The Logging option is required. A log file name must be passed.
Failure Condition
The logging option -l is a mandatory option which must be passed while running the Hash process.
Solution
Pass the logging option -l and a log file name to run the process. Refer to section 12.2 for command line
options to run the process.
13.1 Overview
The Match process matches new or updated incoming profiles that are submitted for matching via the
NP_MATCH_PENDING table to other new incoming profiles or to existing customer records within the
Nucleus 360 database. This process executes in two steps: Pre-process and Match.
13.1.1 Pre-process
The Match process creates Input-Target profile pairs based on the data source-to-data source matching
configuration and the generated Hash keys. The Match Pre-process creates pairs of profiles based on the
equality of their Hash keys (profiles having the same lookup keys become candidates for each other).
Optionally, the Pre-process can also create Input-Target pairs based on the Peer profile selection
configuration (profiles in the same NUCLEUS ID as that of the profile being matched are called Peer
Profiles) and the Hard link configuration.
The Pre-process removes the profiles that exist in the Resolution Pending Queue at the time it is run.
These are resubmitted for matching again, since these profiles can possibly obtain a new match score
and new targets.
So in the Pre-process mode the Input-Target pairs are created; no matching is performed. While deleting
the profiles from the Resolution Pending Queue, the Pre-process ensures data integrity by handling the
following conditions:
All records for the profile being matched (occurring as Input) are deleted from the Resolution Pending
Queue where the profile is the Input.
The Input-Target pair for a profile being matched (where the profile being re-matched occurs as
Target) to another profile (occurring as Input) is deleted from the Resolution Pending Queue given
that the Input has at least one other Input-Target pair in the table (apart from the one being deleted).
The Input-Target pair for a profile being matched (where the profile being re-matched occurs as
Target) to another profile (occurring as Input) is updated as having No Candidates in the Resolution
Pending Queue where the Input has no more Input-Target pairs in the table, apart from the one being
updated. The updated Input Profile is then moved to the Match Output queue (NP_MATCH_OUTPUT
table) for ID Assign for further processing.
The Input-Target pairs derived based on the Hash keys (503601), Peer/Existing profiles (503600), and
Custom Candidates (503602 based on pairs created in the NP_MATCH_CUSTOM_CANDIDATE table),
as configured, are saved to the NP_MATCH_CANDIDATES table based on the configuration in the
NP_CFG_NMU_CANDIDATE_SELECTION table for the profiles data source.
The Input-Target pairs derived based on the hard link configuration (as set up in the
NP_CFG_NMU_ID_LINK table) are saved to the NP_MATCH_HARDLINK_CANDIDATES table. Note that
the hard-linked profiles should also have common lookup keys in NUC_PROFILE_LKUP table.
The candidate tables are completely deleted before new values are inserted. For performance
considerations, the Match Pre-process can be configured to execute in Cache or Non-Cache mode. The
data selection method in each mode of operation is explained in the following table.
13.1.2 Match
The Match process calculates and produces the similarity scores between the Input-Target profiles from
the candidates tables (created by the Match Pre-process), based on the system configuration. The
system can be configured to match on standardized IDs.
SYNOPSIS
run_match.exe -c=3
run_match.exe -c=3 -i
DESCRIPTION
Script to run the Match process.
OPTIONS
--copies, -c
Number of Match copies to be initiated. Default is 3 copies.
ex: --copies=3
--rulefile, -r
Name of the Rule file used by the Match Process.
Default=nucmatch.rul. Ex: -r=nucmatch.rul
--run_id_group_size, -n
Number of input profiles to be grouped in one run ID in
NP_MATCH_PENDING. Default=2000.
--help, -h, -?
Usage. Displays the usage of this script.
The script performs the following operations:
Ensures that the Data Import process completed successfully by checking the
NP_DELTA_RECOVER table.
Executes NPMatch pre-process.
Executes NPMatch in normal mode. The script will spawn a specified number of copies for concurrent
processing. The script will wait until each copy has completed before proceeding. The Match process
is tuned to run in a special mode to handle a high volume of data by specifying the -idl option. Use
this option during initial data load. An exit code of zero is returned if the run script completes
successfully; otherwise a non-zero value is returned.
Note: This value should be set based upon system capacity. The higher the capacity, the larger the
group size should be.
Option Description
-? Detailed Help
-l<filename> Specifies log file name (required)
-r<filename> Specifies rule file name (required)
-c<file> Database configuration file (dbcfg.txt is the default file)
-d<debug> Logging level:
=1 Errors only
=2 Terse-default
=3 Verbose
=4 Detail. Example usage : -d4
-z Pre-process
-x Run Pre-process in non-cache mode
-v Version Information
-V Extended Version Information (internal)
-n<num_input_records_per_run_id> Number of input profiles to be grouped in one run ID in
NP_MATCH_PENDING. Min=1, Max=20000, Default=1000
(optional). This option can only be used with -z.
-s<lowest score> Match will only produce records with scores greater than this lowest
score. Default = 40
-q Loads all the Plug-Ins (DLLs) from the configuration table and
displays summary information of supported algorithms. Only details
for active Plug-Ins are reported. The -c option is the only valid
option for -q.
Note: This option is only available in the Win32 version of this
process
-t Write out match trace information. This information is written to the
NP_MATCH_COMPARE_TRACE table.
-m Complete match (pre-match and match). Use this option to execute
the pre-process and the Match process in a single step.
Executing the Match utility is a two-step process: pre-process and execution. Use the -m option to run
the pre-process and the Match process in a single step. Optionally, the pre-process can be executed
separately using the -z option in troubleshooting mode or in a controlled environment.
Launch the Match process using the ONE Point configuration Manual Process Management. Select
Match from the list of processes. Modify the options if required by selecting from the drop down box or by
entering the required value.
Following are descriptions of log messages for the Match process (these are available in the match.log
file):
Input Profiles processed denotes the number of input profiles processed by the match process as
per the NP_MATCH_PENDING table.
Professional Profiles processed denotes the number of profiles of type Professional that were
processed.
Invalid Professional Input Records not processed denotes the number of Professional input
profile records that are not processed, either because they did not satisfy the conditions configured in
the NP_CFG_NMU_DATA_GRP_VALIDATION table or because their addresses were inactivated.
This second scenario is likely to happen if a profile is sent for match and its addresses are inactivated
before the process is executed.
Professional Candidate Records not processed represents the number of professional target
profile records that are not processed, either because they did not satisfy the conditions configured in
the NP_CFG_NMU_DATA_GRP_VALIDATION table or because their addresses were inactivated.
This second scenario is likely to happen if a profile is sent for match and its addresses are inactivated
before the process is executed.
Organization Profiles processed denotes the number of profiles of type Organization that were
processed.
Invalid Organization Input Records not processed denotes the number of input organization
profile records that are not processed, either because they did not satisfy the conditions configured in
the NP_CFG_NMU_DATA_GRP_VALIDATION table or because their addresses were inactivated.
This second scenario is likely to happen if a profile is sent for match and its addresses are
inactivated before the process is executed.
Invalid Organization Candidate Records not processed denotes the number of organization target
profiles that are not processed, either because they did not satisfy the conditions configured in the
NP_CFG_NMU_DATA_GRP_VALIDATION table or their address was inactivated. This last scenario
is likely to happen if a profile is sent for match and its addresses are inactivated before the process is
executed.
Records Sent for Manual Resolution denotes the number of records that are sent to the
NUC_RESOLUTION_PENDING table to be resolved from the GUI using the ID Assign process. Note
that these profiles are put into the NP_MATCH_OUTPUT table initially and then moved directly to the
NUC_RESOLUTION_PENDING table by ID Assign.
Candidate Profiles processed denotes the number of candidate profiles processed by Match.
Input Profiles Hard linked denotes the number of input profiles in the
NP_MATCH_HARDLINK_CANDIDATES table that are processed.
Total Hardlink Matches denotes the number of input profiles that are matched based on the Hard
link rules.
Input Profiles Without Any Valid Candidates denotes the number of input profiles processed that
did not have a valid target profile pair identified.
While the Match process is running, the following queries will provide details about its progress.
Execute the following query to get the number of Run IDs pending processing by the Match
process.
SQL> Select count(0) From NP_MATCH_RUN where STATUS_CD = 503500;
Execute the following query to get the number of Run IDs that Match is currently processing.
SQL> select count(0) From NP_MATCH_RUN where STATUS_CD = 503501;
Execute the following query to get the number of Run IDs that Match has processed.
SQL> select count(0) From NP_MATCH_RUN where STATUS_CD = 503502;
Once a copy of the Match utility has finished processing, it removes the RUN ID entries from the
NP_MATCH_RUN table and deletes the corresponding profiles from the NP_MATCH_PENDING table.
Execute the following query to get the number of distinct profiles left to be processed by ID Assign.
Profiles with send_to_client_ind = 1 will automatically be moved to the
NUC_RESOLUTION_PENDING table by ID Assign.
SQL> select count(distinct input_profile_id) from np_match_output where
send_to_client_ind = 0;
13.7 Troubleshooting
13.7.2 Pre-process mode does not support the following options [-r,-s,-q,-t, -m]
Failure Description
Pre-process mode does not support the following options [-r,-s,-q,-t, -m]
Failure Condition
Match Pre-process and Match are run through the same executable but with different arguments. The
argument -z that runs the Match Pre-process cannot run with the options -r, -s, -q, -t and -m.
So this error occurs when the Match Pre-process is run with the -z option along with one or more of the
above-mentioned options.
Solution
Provide valid arguments or combinations of arguments while running the Match process. Refer to section
13.2.1.3 for the correct command line parameters.
14.1 Overview
At this point, data has been imported to the Nucleus 360 database, and new and updated profiles have
been matched against all the Nucleus 360 profile data. The ID Assign process takes the match scores
and attempts to assign NUCLEUS IDs for the submitted profiles. ID Assign sets aside the profiles that it
cannot resolve automatically. These unresolved profiles must then be manually resolved using the
Customer Resolution application. Like Match, ID Assign has a pre-process mode and a main mode.
ID Assign Pre-process (Group Match Output): The ID Assign pre-process mode assigns Group IDs to
profiles in the NP_MATCH_OUTPUT table. The pre-process is implemented in a PL/SQL package
NPProfileGrouping. Each Group ID represents a logical collection of unresolved pairs in the
NP_MATCH_OUTPUT table. The logical relationship of all the input/target profiles is based on a transitive
relationship among them. For example:
The following unresolved pairs exist in the NUC_RESOLUTION_PENDING table:
Input Profile Target Profile
A B
B C
D E
D G
Due to the transitive relationship, the pairs will be grouped as follows:
Pair Group Description
A,B 1 Initial Group
B,C 1 Assigned Group ID 1, as B already exists in a previously assigned Group 1.
D,E 2 D and E do not belong to any group, so a new Group ID is generated.
D,G 2 Assigned Group ID 2, as D already exists in a previously assigned Group 2.
Records from the unresolved queue (NUC_RESOLUTION_PENDING table) are moved to the
NP_MATCH_OUTPUT table if the unresolved input profile is a target in the NP_MATCH_OUTPUT table.
Records that are sent to the unresolved queue by the Reverse Data Monitor process (RDM) will not be
moved to the NP_MATCH_OUTPUT table. The pre-process also does a cleanup of inactive input profiles
from the NP_MATCH_OUTPUT table.
Note: The ID Assign pre-process must be completed successfully before executing the ID Assign
process. It is not an optional step.
To process all records from the NP_MATCH_OUTPUT table without running the ID Assign pre-process,
execute the following commands.
Note: The pre-process should have been completed successfully before executing ID Assign. The -s
option should be used only when the ID Assign process fails after successfully completing the pre-
process.
$ cd %DRTE_NP_BIN%
$ run_idassign.exe -s
14.8.1.1.1 ID Assign Pre-process Sample Log Message (Group Match Output Process)
NRP groups triggered for re-evaluation: This count denotes the total groups that are moved from
the NUC_RESOLUTION_PENDING table to the NP_MATCH_OUTPUT table for re-processing.
Total Group IDs synchronized: This count denotes the total groups that are synchronized to have
the same NRP_GROUP_ID between the NP_MATCH_OUTPUT table and the
NUC_RESOLUTION_PENDING table. If a target profile in NP_MATCH_OUTPUT table is found in the
NUC_RESOLUTION_PENDING table as an input or a target, then the NRP_GROUP_ID of the
NUC_RESOLUTION_PENDING table is updated with the NRP_GROUP_ID of the
NP_MATCH_OUTPUT table.
Total NMO groups moved to NRP: This count denotes the total groups that are moved from the
NP_MATCH_OUTPUT table to the NUC_RESOLUTION_PENDING table for manual resolution.
These groups are triggered directly by the Match process for manual resolution.
Customer resolution algorithm: Algorithm that is used to resolve customers in this run.
Nucleus IDs created: Count of new NUCLEUS IDs that are created in this run.
Profiles resolved automatically: Count of profiles that are automatically resolved in this run.
Unique profiles sent for manual resolution: Count of unique profiles that are sent for manual
resolution in this run.
Profiles sent to GUI queue: Count of unique profiles that are written to the
NUC_RESOLUTION_GUI_QUEUE table.
Profiles resolved with no targets: Count of unique input profiles with no targets.
Input-Target pair resolved with score below low threshold: Count of input-target profile pairs
having match score less than the low threshold defined in the NUC_CFG_ASSIGN table.
Groups processed: Count of unique groups processed when the process is run in group mode.
Overridden match scores due to link rule: Count of records in which the match score is overridden
due to a link rule created in the CUS_MANUAL_RESOLUTION_HISTORY table
(RESOLUTION_TYPE_CD = 505100).
Overridden match scores due to threshold link rule: Count of records in which the match score is
overridden due to a condition link rule created in the CUS_MANUAL_RESOLUTION_HISTORY table
(RESOLUTION_TYPE_CD = 505101).
Overridden match scores due to delink rule: Count of records in which the match score is
overridden due to a de-link rule created in the CUS_MANUAL_RESOLUTION_HISTORY table
(RESOLUTION_TYPE_CD = 505102).
Link rules ignored due to low threshold score: Count of records in which the match score is not
overridden even though a condition link rule existed in the CUS_MANUAL_RESOLUTION_HISTORY
table (RESOLUTION_TYPE_CD = 505101), because the match score was less than the low threshold
configured in the NUC_CFG_ASSIGN table.
Overridden match scores due to hardlink merge rule: Count of records in which the match score
is overridden due to the hard-link merge setting in the NP_CFG_NMU_ID_LINK table (MERGE_IND =
1).
Profiles auto-assigned due to merge hardlink rule: Count of profiles that were not assigned a
NUCLEUS_ID but were assigned the NUCLEUS_ID of a profile in this run due to the hard-link merge
rule defined in the NP_CFG_NMU_ID_LINK table (MERGE_IND = 1).
Profiles auto-moved due to merge hardlink rule: Count of profiles that were already assigned a
NUCLEUS_ID but were moved to a different NUCLEUS_ID of a profile in this run due to the hard-link
merge rule defined in NP_CFG_NMU_ID_LINK table (MERGE_IND = 1).
Merge hardlinks ignored due to low overridden score: Count of records in which the merge hard-link
setting is ignored due to a de-link rule created in the CUS_MANUAL_RESOLUTION_HISTORY
table (RESOLUTION_TYPE_CD = 505102).
Profiles auto-assigned due to multiple customer resolution: Count of profiles that were not
assigned a NUCLEUS_ID but were assigned a NUCLEUS ID in this run due to a multiple customer
merge.
Profiles auto-moved due to multiple customer resolution: Count of profiles that were already
assigned a NUCLEUS_ID but were moved to a different NUCLEUS ID in this run due to multiple
customer merge.
14.9 Troubleshooting
14.9.2 The Logging option is required. A proper file name must be passed.
Failure Description
Exception Error Code = 0
Exception Error Type = 4
Exception Short Description = The Logging option is required. A proper file name must be passed.
Exception Description = The Logging option is required. A proper file name must be passed.
Failure Condition
The logging option -l is a mandatory option that must be passed when running the process.
Solution
Pass the logging option -l and the log file name to run the process. Refer to section 14.4.2 for the command
line parameters used by ID Assign to run the process.
14.9.3 The Mode option is required. A proper mode name must be passed.
Failure Description
Exception Error Code = 0
Exception Error Type = 4
Exception Short Description = The Mode option is required. A proper mode name must be passed.
Exception Description = The Mode option is required. A proper mode name must be passed.
Failure Condition
The mode option -m is a mandatory option that must be passed when running the process.
Solution
Pass the mode option -m with a valid mode to run the process. Refer to section 14.4.2 for the command
line parameters used by ID Assign to run the process.
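For reference, a minimal invocation sketch that satisfies both mandatory options (the log file name is
illustrative; valid mode values are listed in section 14.4.2):
$ cd %DRTE_NP_BIN%
$ run_idassign.exe -lidassign.log -m<mode>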
14.13 EXT_MODIFIED_FLG
The following table lists all values and their meaning for the EXT_MODIFIED_FLG column in the
NUC_RESOLUTION_PENDING table.
EXT_MODIFIED_FLG  Equivalent EXCEPTION_CONDITION_CD from NP_MATCH_OUTPUT table  Description
0 504400 Regular name and address match
2 504401 Hard Link match
3 504402 No candidates found for the input profile
4 504403 Invalid input profile. One or more required
fields for matching are empty.
5 504404 Merge Hard Link match
6 N/A RDM record. Input and the target profile are
sent for manual resolution without
automatically splitting them to different
NUCLEUS ID.
7 N/A RDM record. Input and the target profile are
already split to different NUCLEUS IDs and
it is sent for manual confirmation.
15.1 Overview
This process inserts reverse links (opposite pairs of input-target profile IDs) in the
NUC_RESOLUTION_PENDING table. These are required for the Customer Resolution application. If the
NUC_RESOLUTION_PENDING table has profile A and B as unresolved, and profile A has a potential
match with B, then B is implicitly a potential match of A. Sometimes this second relationship, or reverse
link, is not present in the NUC_RESOLUTION_PENDING table because profile B may not have come for
matching at the same time as profile A. The absence of such relationships causes the Customer
Resolution application to produce an incomplete picture of the unresolved queue. To avoid such issues,
this process makes sure such relationships are created.
$ cd %DRTE_NP_BIN%
$ runstoredproc -s"pkg_db_maintenance.process_maintenance(16)"
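Conceptually, the step ensures that for every unresolved pair (A, B) the mirrored pair (B, A) also exists.
The following is a minimal sketch of that idea only; the column names INPUT_PROFILE_ID and
TARGET_PROFILE_ID are illustrative, and the actual reverse-link logic is internal to
pkg_db_maintenance:
-- illustrative column names; other mandatory columns of the table are omitted
insert into nuc_resolution_pending (input_profile_id, target_profile_id)
select nrp.target_profile_id, nrp.input_profile_id
from nuc_resolution_pending nrp
where not exists
  (select 1
   from nuc_resolution_pending rev
   where rev.input_profile_id = nrp.target_profile_id
   and rev.target_profile_id = nrp.input_profile_id);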
15.5 Troubleshooting
15.5.1 When the system parameter 1309 with last run date has not been created
Failure Description
Start Time: 04:21:17 05/07/2015
Aborting, system parameter 1309 (last run dt) does not exist
Failure Condition
This condition would occur when the system parameter 1309 has not been created in the system.
Solution
Contact your support team to report the problem.
16.1 Overview
Profiles that are resolved by the ID Assign process are assigned a NUCLEUS ID. The RDM process
(which can be run anytime after the ID Assign process is completed) detects profiles with conflicting data
conditions in a column (examples: Professional Suffix Mismatch, ME Number Mismatch, IMS Number
Mismatch, AOA Number Mismatch, Organization Type Mismatch, and Organization Subtype Mismatch)
for all profiles that belong to the same customer (NUCLEUS ID). The process is implemented in an Oracle
PL/SQL package NPReverseDataMonitor, which exposes a procedure ReverseCheck to execute it in
Refresh and Incremental modes.
If the process is run in Refresh mode, all NUCLEUS_IDs that have active profiles with a conflicting data
condition matching the RDM configuration being checked will be selected for processing.
$runstoredproc -s"pkg_db_maintenance.process_maintenance(17)"
2. Profiles that are sent to the unresolved queue by the RDM process after assigning a new NUCLEUS
ID for the conflicting profile.
4. The following query can be used to compare the data of a profile that was assigned a NUCLEUS ID
by the RDM process to profiles in its previous (original) NUCLEUS_ID. This query is applicable only
for customers of type Professional. Substitute the :ProfileID bind variable with the actual profile ID
that was moved out by the RDM process.
Note: The Select clause can be modified to include the columns desired.
select distinct
old_dqt.profile_id old_peer_profile_id, new_dqt.profile_id profile_id,
old_dqt.std_suffix old_peer_suffix, new_dqt.std_suffix suffix,
old_dqt.me_num old_peer_me#, new_dqt.me_num me#,
old_dqt.ims_num old_peer_ims#, new_dqt.ims_num ims#,
old_dqt.aoa_num old_peer_aoa#, new_dqt.aoa_num aoa#,
cmrh.notes
from cus_customer new_cc,
cus_customer_history cch,
cus_customer old_cc,
cus_manual_resolution_history cmrh,
dqt_professional_profile old_dqt,
dqt_professional_profile new_dqt
where new_cc.profile_id = :ProfileID
and new_cc.crtn_id = 114
and new_cc.profile_id = cch.profile_id
and old_cc.nucleus_id = cch.old_nucleus_id
and old_dqt.profile_id = old_cc.profile_id
and new_dqt.profile_id = new_cc.profile_id
and cmrh.profile_id = new_cc.profile_id
and cmrh.crtn_id = 114
and cmrh.resolution_type_cd = 505102
and cmrh.nucleus_id = old_cc.nucleus_id;
5. The following query can be used to compare the data of a profile that was assigned a NUCLEUS ID
by the RDM process to profiles in its previous (original) NUCLEUS_ID. This query is applicable only
for customers of type Organization. Substitute the :ProfileID bind variable with the actual profile ID
that was moved out by the RDM process.
Note: The Select clause can be modified to include the columns desired.
select distinct
old_dqt.profile_id old_peer_profile_id, new_dqt.profile_id profile_id,
old_dqt.org_type_cd old_peer_org_type_cd, new_dqt.org_type_cd org_type_cd,
old_dqt.org_subtype_cd old_peer_org_subtype_cd, new_dqt.org_subtype_cd org_subtype_cd,
cmrh.notes
from cus_customer new_cc,
cus_customer_history cch,
cus_customer old_cc,
cus_manual_resolution_history cmrh,
dqt_organization_profile old_dqt,
dqt_organization_profile new_dqt
where new_cc.profile_id = :ProfileID
and new_cc.crtn_id = 114
and new_cc.profile_id = cch.profile_id
and old_cc.nucleus_id = cch.old_nucleus_id
and old_dqt.profile_id = old_cc.profile_id
and new_dqt.profile_id = new_cc.profile_id
and cmrh.profile_id = new_cc.profile_id
and cmrh.crtn_id = 114
and cmrh.resolution_type_cd = 505102
and cmrh.nucleus_id = old_cc.nucleus_id;
17.1 Overview
The Standardizer is a standalone process that should be run exclusively in manual mode to re-
standardize the data that has already been processed in the Nucleus 360 system. This encompasses
Professional name, Organization name, Addresses, Secondary ID data, and Communication data. This
process is not embedded in any other application.
Note: This process should not be run when the Data Import process runs in batch (third party data source
import) or in real-time mode (SFA real-time XML message import).
Option Description
-? Help
-v Version Information
-V Extended Version Information
-c Database configuration file
-d Debug Level, 0 to 4. Default is 1 (Log Errors Only).
-P Standardize professional names
-O Standardize organization names
-A Standardize addresses
-C Standardize Communication
-S Standardize Secondary
-r Triggers re-matching if standardized values are different
-t Saves the results in temporary tables
-l Saves the results in a log file with the name indicated (you must also specify a name for the
log file)
-x Applies standardized professional data from the temporary table to
CUS_PROFESSIONAL_PROFILE table
-p Standardizes one profile (you must also include the Profile ID)
-q Query File
-n Dry Run. Do not commit transactions
-m Mapping ID
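As an illustration only, a dry-run invocation that re-standardizes addresses into the temporary tables
might look like the following; this section does not name the executable, so the binary name used here is
hypothetical, and the log file name is illustrative:
$ cd %DRTE_NP_BIN%
$ rem npstandardizer.exe is a hypothetical binary name used for illustration
$ npstandardizer.exe -lstandardizer.log -A -t -n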
17.3.1 Output
Standardization metrics are logged in the log file named using the -l option. If the -t option is used,
the standardized output will be saved in the appropriate temporary tables.
17.6 Troubleshooting
Refer to Appendix B for any database-related error messages.
Refer to Appendix E for ACE/IACE errors.
Raw Address
Profile Id :10
Address Id :12
Address Line1 :2510 CALDWELL AVE S APT K2
Address Line2 :
Address Line3 :
City :BIRMINGHAM
State code :AL
Postal code :35205
Postal code ext:2520
Standardized Address
Address Line1 :2510 CALDWELL AVE S APT K2
Address Line2 :
Address Line3 :
City :BIRMINGHAM
Mailing City :BIRMINGHAM
State code :AL
Postal code :35205
Postal code ext:2520
Firm name :
ACE status code:S00000
ACE error code :
Restandardized Address
Address Line1 :2510 CALDWELL AVE S APT K2
Address Line2 :
Address Line3 :
City :BIRMINGHAM
Mailing City :BIRMINGHAM
State code :AL
Postal code :35205
Postal code ext:2520
Firm name :
ACE status code:S00000
4. Populates practitioner details that are to be sent to the verification vendor into the
STG_VERIFICATION_EXPORT table.
Creating the practitioner file for the validation vendor
A file is created from practitioner details that are stored in the STG_VERIFICATION_EXPORT table. Once
a profile is successfully exported to the validation vendor, it is moved to the table
CUS_VERIFICATION_PROFILE using the package NPPracVerStgMaint.
$ cd %DRTE_NP_BIN%
$ run_pracver -export -file=<file name>
$ cd %DRTE_NP_BIN%
$ run_pracver --stgexp-to-vfxn
$ cd %DRTE_NP_BIN%
$ run_pracver --export --skip-file-processing
$ cd %DRTE_NP_BIN%
$ run_pracver --create-file -file=<file>
$ cd %DRTE_NP_BIN%
$ run_pracver --import --file=file.txt
$ cd %DRTE_NP_BIN%
$ run_pracver --import --skip-file-processing
$ cd %DRTE_NP_BIN%
$ run_pracver --load-file -file=<file name>
1. After this process is completed, analyze the table to gather various statistics that are used by Oracle
to perform data fetching more efficiently. Execute the following command:
$ cd %DRTE_NP_BIN%
$ runstoredproc -s"pkg_db_maintenance.process_maintenance(38)"
$ cd %DRTE_NP_BIN%
$ run_pracver --reverification
18.10 Troubleshooting
5. If the query returns a count other than 0, it means either of the following:
a. The process failed before generating the file, go to step 6
b. The process failed while generating the file, go to step 7
6. For scenario 5.a, re-run the complete Export cycle.
7. For scenario 5.b:
a. A partially generated file will be present in %DRTE_THIRD_PARTY%\np\out\validation
b. Re-run the complete Export cycle and merge the newly generated file with the one in the previous
step using the following commands
$ cd %DRTE_THIRD_PARTY%\np\out\validation
$ copy /B <7a>.out + <7b>.out <merged>.out
Note 1: Executing the above query will delete only the profiles that were failing to get imported into the
cus_verification_profile table during the last pracver import process, leaving behind the other profiles in
the stg_verification_import table.
Note 2: The import process can also be restarted without executing the above query. However, to avoid
re-processing the records accumulated in the staging table (SVI) to CVP, it is necessary to run the above
delete query before restarting the import process.
1. If the import process has failed after the npimportexport process, execute the following query to get
the total number of records failed during this process:
SELECT COUNT(0)
FROM CUS_STG_VERIFICATION_ERROR CSVE, STG_VERIFICATION_IMPORT SVI
WHERE CSVE.EXT_VERIFICATION_TRANS_ID = SVI.EXT_VERIFICATION_TRANS_ID
AND SVI.PROCESS_DT BETWEEN TO_DATE('<Start Date>', 'MM-DD-YYYY HH24:MI:SS')
AND TO_DATE('<End Date>', 'MM-DD-YYYY HH24:MI:SS')
Note:
<Start Date> - In the pv_import_timestamp.log file created during the pracver import process, the date
logged for the message 'PracVer custom staging table maint process start' is the start date to be
provided for <Start Date> in the above query.
2. Now execute the following command to process the failed records from
STG_VERIFICATION_IMPORT table:
$ cd %DRTE_NP_BIN%
$ run_pracver --import --skip-file-processing
Note: Zipping and archiving of the import file cannot be done in this scenario since the input file would
have already been deleted even though the process failed midway.
19.3 Troubleshooting
20.1 Overview
Consolidated Customer View represents the Best Customer view that is generated from the customer
details that are available through different data sources for a NUCLEUS ID. Nominees for the Best
Customer are configured and prioritized in the Nucleus 360 configuration tables. Records and fields can
also be excluded from the process by defining them in the exclusion tables. The exclusion rules
supersede all the inclusion rules. Field-level exclusion is supported only for customer details such as
Name, Suffix, or Prefix. This package exposes methods to process a single customer and to process all
customers.
The CCV process can run from either the Customer Maintenance GUI for a particular customer or from
the batch process for a range of customers.
During the CCV process, the data generated for CCV can be exported to other systems. There are several
methods of exporting this data as a flat file, each serving a different purpose:
1. Standard delimited flat file export (complete CCV data)
2. AggregateSpend360 export
3. Mobile Intelligence export
4. Export to Xtelligent tables for the COM export queue system
2) A profile is updated to the NUCLEUS_ID group after the CCV was last processed.
3) A profile is moved out of the NUCLEUS_ID group after the CCV was last processed.
4) A profile received validation response from external system after the CCV was last processed.
Procedure parameters
nOrgConfigId CCV Organization configuration ID
nProfConfigId CCV professional configuration ID
NumberOfOrgGroups Total number of organization groups to be created. GroupID starts from 1
NumberOfProfGroups Total number of professional groups to be created. GroupID starts from 11
20.5.2 RunCCV.exe
RunCCV.exe is a Windows application to invoke the CCV process in multiple threads, one thread per
NUCLEUS_ID. The number of concurrent threads is configurable via the command line.
The STG_CUS_CUSTOMER table should be populated with NUCLEUS_IDs before running CCV. CCV will
be computed only for these NUCLEUS IDs. NUCLEUS_IDs in STG_CUS_CUSTOMER should be divided
into groups. A minimum of 3 groups for professionals and 3 groups for organizations is recommended.
All NUCLEUS_IDs in a group should be of type professional or all of type organization; a single group
should not include both professional and organization customers.
NOTE: The NPBestCustomer.IdentifyNucleusIDs method can be used to populate the
STG_CUS_CUSTOMER table with processed_ind 0.
20.5.2.1 run_ccv.exe
A new run script is created to instantiate multiple copies of the RunCCV.exe process, by assigning each
instance a specific Group ID from STG_CUS_CUSTOMER table. This script should be run once for CCV
organizations and then for professionals.
$ cd %DRTE_NP_BIN%
$ run_ccv.exe -p=<Profile_Type> -c=<Configuration_ID> -t=<No. of threads>
Once a NUCLEUS_ID is processed, PROCESSED_IND in the STG_CUS_CUSTOMER table will be set
to 1.
(tUserId NPDataTypes.t_UserId,
nBuffersize PLS_INTEGER DEFAULT 100,
nConfigID NPDataTypes.t_ConfigurationId)
(tUserId NPDataTypes.t_UserId,
nRefresh NPDataTypes.t_Indicator DEFAULT 0,
nBuffersize PLS_INTEGER DEFAULT 100,
nStartNucId NPDataTypes.t_NucleusId DEFAULT NULL,
nEndNucId NPDataTypes.t_NucleusId DEFAULT NULL,
nProcessAllComponents NPDataTypes.t_Indicator DEFAULT 0,
nConfigID NPDataTypes.t_ConfigurationId DEFAULT 0,
nGroupID NUMBER)
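A hedged invocation sketch from the SQL prompt, assuming the second parameter list above belongs to
NPBestCustomer.batchProcess (named in the text below); the user ID, NUCLEUS_ID range, and group
value are illustrative:
SQL> -- illustrative values; defaults are taken for the remaining parameters
SQL> exec NPBestCustomer.batchProcess(tUserId => 'OPS_USER', nStartNucId => 38000, nEndNucId => 38999, nGroupID => 1)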
run_bob --affiliation
If the parameters StartNucId and EndNucId of NPBestCustomer.batchProcess are specified, then the
total number of records evaluated should be equal to the count returned by the following query.
SELECT count(nucleus_id) FROM cus_customer_info where delete_ind = 0 AND
nucleus_id BETWEEN <nStartNucId> AND <nEndNucId>
$ cd %DRTE_NP_BIN%
$ runstoredproc -s"pkg_db_maintenance.process_maintenance(51)"
20.9 Troubleshooting
*
ERROR at line 1:
ORA-20215: The nStartNucId and nEndNucId parameters are invalid
ORA-06512: at NPBESTCUSTOMER, line 605
ORA-06512: at line 1
Failure Condition
This could happen when the values to parameter nStartNucId and nEndNucId are not passed correctly.
Solution
Values for nStartNucId or nEndNucId cannot be NULL when run for a range of profiles, and
nEndNucId must be greater than or equal to nStartNucId. Refer to section 20.5.4 for parameters that are
used by the CCV process.
Solution
Run the following query at the SQL prompt in order to determine which packages are invalid.
select substr(object_name,1,30) object_name,
object_type,
status,
SUBSTR(TO_CHAR(CREATED, 'DD-MON-YYYY HH24:MI:SS'), 1, 24) created_on,
SUBSTR(TO_CHAR(LAST_DDL_TIME, 'DD-MON-YYYY HH24:MI:SS'), 1, 24)
updated_on
from user_objects
where (object_type LIKE 'PACKAGE%' OR object_type LIKE 'PROCEDURE%' or
object_type LIKE 'TYPE%')
and (object_name LIKE 'NP%' OR object_name LIKE '%DENORM_POPULATE%')
order by 3, 1;
This query displays a list of packages and package statuses.
For example:
OBJECT_NAME OBJECT_TYPE STATUS CREATED_ON UPDATED_ON
NPBESTCUSTOMER PACKAGE INVALID 9/9/02 12:35 9/12/02 17:56
NPBESTCUSTOMER PACKAGE BODY VALID 9/9/02 12:35 9/20/02 11:33
NPCCVTRANSACTIONS PACKAGE VALID 9/9/02 12:34 9/12/02 17:56
NPCCVTRANSACTIONS PACKAGE BODY VALID 9/9/02 12:35 9/10/02 9:34
NPCCVUNIQUE PACKAGE VALID 9/9/02 12:35 9/12/02 17:56
NPCCVUNIQUE PACKAGE BODY VALID 9/9/02 12:35 9/10/02 9:34
NPCOMPARE PACKAGE VALID 9/9/02 12:34 9/12/02 17:56
NPCOMPARE PACKAGE BODY VALID 9/9/02 12:34 9/10/02 9:33
NPCOMPONENTFILTER PACKAGE VALID 9/9/02 12:34 9/12/02 17:56
NPCOMPONENTFILTER PACKAGE BODY VALID 9/9/02 12:34 9/10/02 9:34
NPCUSTCONFIGMAINT PACKAGE VALID 9/9/02 12:41 9/12/02 17:56
NPCUSTCONFIGMAINT PACKAGE BODY VALID 9/9/02 12:41 9/10/02 9:35
NPCUSTCONFIGURATION PACKAGE VALID 9/9/02 12:34 9/12/02 17:56
Once the INVALID packages have been identified, they need to be recompiled.
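For example, an invalid package specification and its body can be recompiled at the SQL prompt (the
package name is taken from the sample output above):
SQL> ALTER PACKAGE NPBESTCUSTOMER COMPILE;
SQL> ALTER PACKAGE NPBESTCUSTOMER COMPILE BODY;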
20.9.3 When the Error Message Range Is Between ORA-20000 and ORA-20999
Failure Description
Aborting Process!!!
NUCLEUS Id: 38179
Error msg: ORA-20207: Duplicate CCVs exist for customer 38179 occurred in
isCustomerUpdated
********************** S T A R T **********************
20.9.4 When the Error Message Range Is Not Between ORA-20000 and ORA-20999
Failure Description
The error message is not between ORA-20000 and ORA-20999.
Failure Condition
This is a database-generated error message.
Solution
Based on the Error Number generated, refer to the Oracle documentation to find the Description, Cause,
and Action for that particular error message.
8100 Professional
8200 Organization
8500 Customer_Affiliation
3100 Merge_Request
Each Entity has a list of sub-entities which are specified in the CHILD_TAG field in the
XTTRANSACTION_TABLE_1. The following are the child tags used:
Schema ID  Entity Name   Child_Tag  Child Tag Name
8100       Professional  81110      Address
                         81120      Communication
                         81130      Customer_Identifier
                         81140      Professional_Specialty
                         81150      Professional_Education
                         81160      State_License
8200       Organization  82110      Address
                         82120      Communication
                         82130      Customer_Identifier
Col. # Column Name Field Description Element/Tag Name in Output Message
70 CODE_7
73 CODE_10
74 CODE_11
75 CODE_12
76 CODE_13
77 CODE_14
78 CODE_15
79 PROCESSED_DATE Gets updated with the latest time stamp when the export service processes the
record.
21.7 Troubleshooting
The following sample Windows Services panel screenshot shows the NPTransactionPublisher service in
the Stopped state.
23. MERGE
23.1 Overview
The Merge process is responsible for selecting and identifying merge candidate records based on the
Merge configuration created using the Configuration Assistant GUI. Merge includes Customer Merge and
Address Merge functionality.
Option Description
-? Detailed Help
-v Version Information
-V Extended Version Information (IMS Health Internal)
-l<filename> Log file name <filename>. This is a required parameter.
-c<filename> Database configuration file name <filename> (defaulted to dbcfg.txt)
-r Dry Run mode (IMS Health Internal)
-i Run npmerge incrementally. Valid only for customer (Prof/Org) merge.
-t Cleanup mode only. Truncate the output tables.
-e Populate Xtelligent table
Option Description
-s<Sentence Set ID> Sentence Set that will be used for merge. It is required in non-cleanup
mode.
Note: npmerge.exe returns an exit code of zero if the process completes successfully; otherwise a non-
zero value is returned.
SYNOPSIS
run_merge_extract.exe -r=<ruleid>
run_merge_extract.exe -c
DESCRIPTION
Script to run the Merge Extract.
OPTIONS
--rule, -r
Rule ID. It is a required parameter in non-cleanup mode.
--all, -a
This option will enable the process to run for all records. If this
option is not specified the Merge process runs in incremental mode.
Incremental mode will process only those records that are modified
since the last merge run. Incremental mode is Not Valid for Merge
of type Address.
--clean, -c
Cleanup mode only. Truncate the merge staging tables.
--help, -h, -?
Usage. Displays the usage of this script.
$ cd %DRTE_NP_BIN%
$ run_merge_extract --rule=<rule_id> --all
Description
In the above sample message, the descriptions for the key messages are explained below.
Records processed:
This count denotes the total number of records selected for processing, based on the merge rule defined
for the specific entity.
Groups processed:
This count denotes the total number of groups that are processed to identify potential merges.
In the case of a customer merge, profiles that have the same NUCLEUS_IDs are grouped together and
the group key is the NUCLEUS_ID. In the case of an address merge, a group is created based on the
fields specified in the Field Selection property of a merge configuration. Address records having the
same values for the fields specified in the Field Selection will be grouped together and a group key is
created by concatenating the field values. A screenshot taken from the Configuration Assistant GUI is
provided here as an example for Address Merge Field Selection (Group Key Selection).
Note: When merge records are exported to XTTRANSACTION_TABLE_1 for publishing, the group key
value is written to the TEXT_1 column for internal debugging purpose. This value is not published in the
outgoing XML message.
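As a conceptual sketch of the grouping only, assuming an illustrative Field Selection of city and postal
code (the column names are illustrative and this is not the actual npmerge implementation), the group
key and group sizes could be derived as follows; groups that end up with a single record correspond to
the Skipped single record groups count described below:
-- illustrative Field Selection: city and postal code
select city || '|' || postal_code as group_key,
       count(*) as records_in_group
from cus_address
group by city || '|' || postal_code
order by group_key;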
Skipped single record groups:
This count denotes the total number of groups having only a single record associated with them.
Total merge groups inserted:
This count denotes the count of Groups processed minus the count of Skipped single record groups.
24.1 Overview
The purpose of this utility is to invoke Oracle Stored Procedures and functions from the operations work
environment. The database user name, password, and instance name are obtained from the dbcfg.txt
configuration file. An exit code of zero (0) will be returned if the process completes successfully; otherwise
a non-zero value will be returned. DBMS Output generated by the procedure/function that is being
invoked is captured and logged to the log file.
Parameter Description
-l Log file name
-s Stored procedure command line
-f Stored function command line
-a File that contains anonymous PLSQL block
-c Database configuration file from which the connection details are read, default is dbcfg.txt
-p Database configuration file Block/Pool Name, default is NUCLEUSPHARMA
-? Display command line options
-v Version Information
-V Extended Version Information
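For example, to run the Foundation Maintenance procedure (ID 5 in the table in section 35) with a log
file (the log file name is illustrative):
$ cd %DRTE_NP_BIN%
$ runstoredproc -lmaint.log -s"pkg_db_maintenance.process_maintenance(5)"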
25.1 Overview
This utility extracts the ZIP (Zone Improvement Plan) codes and their associated information from
Firstlogic's USPS data files. Currently, only ZIP code information pertaining to the U.S. is extracted. The
extract can be done to the console, a file, or a database table. This process should be run after every
update to Firstlogic's USPS data files is received so that the latest USPS data is used. The process does
not maintain any history; inactive ZIP codes will not be returned by the extract.
Option Description
-t Extract zip information into FND_POSTAL_CODE. Note that the complete table will
be refreshed.
-c Extract zip information to Console
-d<V><v><H><h> Display zip information in extract <H>orizontally or <V>ertically. Default is horizontal.
-s<ZipCode> Extract zip information Starting from Zip <5 digit zip>
-e<ZipCode> Extract zip information Ending at Zip <5 digit zip>. If -e is not specified, only single
zip is extracted.
-b<Block Name> Ace Block Name
Common usage examples:
To refresh the FND_POSTAL_CODE table in the database:
$ npzipinfo -lzipextract.log -t -bIACE_USA
To get information for a single zip on the console, horizontally pipe-delimited:
$ npzipinfo -lzipextract.log -c -s07828 -dH -bIACE_USA
To get information for a zip range on the console, vertically column-prefixed:
$ npzipinfo -lzipextract.log -c -s07828 -e07856 -dV -bIACE_USA
To get information for all the zip codes to a file, horizontally pipe-delimited:
$ npzipinfo -lzipextract.log -fzipextract.txt -dH -bIACE_USA
25.5 Troubleshooting
Refer to Appendix E for ACE/IACE errors.
25.5.4 Pls specify the destination of the extract using -f, -t or -c.
Failure Description
Invalid Arguments.
Please specify the destination of the extract using -f, -t or -c.
Failure Condition
This error occurs when the destination of the output is not specified.
Solution
Provide a valid option for the output of the extract. The valid options are -f (file), -t (FND_POSTAL_CODE
table), and -c (console). Refer to section 25.2 for valid options or combinations of options for running the
process. For example, IACE_USA is a valid ACE block name.
26.1 Overview
The NPImportExport process is a configuration-file-based general-purpose utility for accomplishing the
following tasks:
Export data from a database table to a single file.
Import data from a file to a single database table.
Copy data from one database to another database using SQL statements.
The data transfer is made possible based on a configuration file. The process supports delimited and
fixed-length file formats for I/O files.
Note: The Nucleus 360 import/export utility creates the user-specified log file and bad file when the
process starts. On successful execution of this process, the bad file is automatically cleaned up by the
process. In case of failure, the failed records are written to the bad file and it is not deleted. If the utility
stops abnormally due to an unhandled error, the bad file created by the process may not be cleaned up.
Option Description
-S<source-block-name> Source database configuration block name. Default for export process
is NUCLEUSPHARMA.
-D<destination-block-name> Destination database configuration block name. Default for import
process is NUCLEUSPHARMA.
-F<commit-frequency> Commit frequency. Default value is 1000.
-b<bad-record-file-name> Bad records file name
Use of the utility is based on the different sets of configuration files as follows:
Data Export
For example: To extract data from the DQT_PROFESSIONAL_PROFILE table to an output file,
prac_ver_full_refresh_output.txt, the following syntax may be used:
$ npimportexport -llogfile.log -p1
-fpractitioner_verification_full_refresh.cfg -t0
-oprac_ver_full_refresh_output.txt -SSAVNP -F5000 -bbadrecfile.bad -d4
Data Import
For example: To import data from the pracver_import_data.txt file to STG_VERIFICATION_IMPORT
table, use the following command:
$ npimportexport -llogresults.log -p0 -fpracver_import.cfg
-ipracver_import_data.txt -t1 -DNUCLEUSPHARMA -F1000
-bbadRecFile.bad -d4
Data Copy
For example: To copy data from SAM_AUTHORIZATION_FORM (Sample Guardian) to
SAM_AUTHORIZATION_FORM (Nucleus 360), use the following command:
$ npimportexport -llogresults.log -p2 -fsignature_auth_form.cfg -t1
-SSAMPLEGUARDIAN -DAVNP -F1000 -bbadrecfile.bad -d4
26.5 Troubleshooting
This section provides information about certain error messages.
Solution
Pass the process type option -p with a valid value (0, 1, or 2) for Import, Export, or Copy, respectively.
Refer to section 26.2 for the command line parameters used by the NPImportExport process to run the
process with valid parameters.
Solution
Provide a valid format file and pass the file name with -f option. Refer to section 26.2 for the command
line parameter used by the NPImportExport process to run the process with valid parameters.
Solution
Provide the destination connection block name as specified in the database configuration file (dbcfg.txt)
with the -D option. Refer to section 26.2 for the command line parameters used by the NPImportExport
process to run the process with valid parameters.
27.1 Overview
Customer (Professional/Organization), Profile (Professional/Organization), or Address data in the Nucleus
360 database can be extracted to XML-formatted files using a two-step process: first, data from the
Nucleus 360 database core tables is copied to intermediate staging tables (Oracle nested objects); in the
second step, XML files are generated from the staged data via IMS Health's Xtelligent.
Parameter Description
--candidates, -c Generates candidates to extract. This option does not run the extract
process after generating candidates.
--create_file Number of extract processes to run in parallel. Defaulted to 1. This option is
not valid for standardized address extract and for generating candidates.
--cleanup Option to clean up extract object tables or queue tables.
--commit_frequency Database commit interval, defaulted to 1000.
--help, -h, -? Usage. Displays the usage of this script.
--version, -v Displays version information.
--copies Number of extract processes to run in parallel. Defaulted to 1. This option is
not valid for standardized address extract and for generating candidates
The process can be run for customer (Professional/Organization), Profile (Professional/Organization), or
Address data extracts.
27.7 Troubleshooting
27.7.1.1 Explanation
This problem occurs if the configuration file is not modified properly.
27.7.1.2 Solution
Make sure that the placeholders in configuration files such as DATABASE_USER_PASSWORD,
DATABASE_USER, DATABASE_INSTANCE, DATABASE_SERVER_IP,
EXTRACT_OUTPUT_FILE_PATH, ODBC_SYSTEM_DSN, and %DRTE_NP_DEFS% are replaced by
appropriate values.
28.3.2.1 Logging
The log output of the export run is stored in a log file in the DRTE_NP_LOGS directory, and the log file
name is of the following format:
NP_MA_EXPORT.STARTEXPORTPROCESS_<YYYYMMDDHHMISS>.log. If run from the SQL prompt,
the log file is not generated.
Alternate method of running the above command for versions prior to 3.1:
sqlplus <DBUserId>/<DBPassword>@<ConnectString>
SQL> exec NP_MA_IMPORT.STARTIMPORTPROCESS('/opt/oracle/admin/aedb010d/utils', 'test11.xml', 0, 0, 1000, 1000)
28.3.7 Logging
The output of the batch run is stored in a log file in the DRTE_NP_LOGS directory, and the log file name
is of the following format: NP_MA_IMPORT.STARTIMPORTPROCESS_<YYYYMMDDHHMISS>.log. If
run from the SQL prompt, the log file is not generated.
28.3.10 Troubleshooting
28.3.10.1.1 Explanation
This occurs if the directory path specified is not valid.
28.3.10.1.2 Solution
1. Create a folder on the database server and make sure files can be read from and written to this folder.
2. Set UTL_FILE_DIR = * in the database parameter file. After setting this parameter, the database
must be restarted.
Note: Consult the database administrator (DBA) for creating the directory and setting the parameter
value.
29.1 Overview
The ONE Point application is a process manager for Nucleus 360 processes/services. It provides a single
entry point to execute the processes in real-time mode and batch mode. Processes that are managed by
the ONE Point application are configured in rt_config.xml file in XML format.
To launch the application:
1. Open a Windows command prompt.
2. Execute np_env.bat file from the installation root directory.
3. Type nponepoint.exe and press Enter from the installation BIN directory.
<install_root_dir>np_env.bat
<install_root_dir>cd np\bin
<install_root_dir\np\bin>nponepoint.exe
Note: The ONE Point application will not validate the arguments passed to it. This application passes the
values as is to the corresponding application assuming the values passed are valid.
To launch the application using a custom configuration file:
The default configuration file used by ONE Point is rt_config.xml. In order to use a different configuration
file, launch the application from the command prompt by specifying the configuration file name.
1. Open a Windows command prompt.
2. Execute np_env.bat file from the installation root directory.
3. Type nponepoint.exe followed by the configuration file name and press Enter from the installation
BIN directory. Substitute <custom_file.xml> with the actual file name.
<install_root_dir>np_env.bat
<install_root_dir>cd np\bin
<install_root_dir\np\bin>nponepoint.exe <custom_file.xml>
29.3 Convention
29.4 Operation
To pause a batch process when another batch process is running, if they are mutually exclusive from
an execution perspective and should not be run in parallel.
To warn the user when two real-time configurations are started simultaneously.
Refer to the rt_config.xml file for the Data Import process, which is configured in real-time and batch
mode using the exclusive key option.
<Configuration id="1000"
type="continuous"
name="Real Time - Data Import"
dbcfgBlock="NUCLEUSPHARMA"
delay="10"
exclusiveKey="Exclusive Real Time Execute">
</Configuration>
<Configuration id="5000"
type="request"
name="Manual - Process Management"
dbcfgBlock="NUCLEUSPHARMA">
<Entity id="500020" type="db_procedure"
name="NP_Data_Quality_Engine.main"
displayName="Data Quality" logFile="..\log\op_500020.log"
29.5.5 ListItemElement
This element is valid only for arguments of type lookup. ListItem is a child element of LookupList and is
used to define values that are displayed in a drop-down list to the user.
Attribute Name Description Valid Values Default Value
value Argument value
description Argument value description that is
displayed to the user in a drop-down list.
30.1 Overview
Mapping Configuration Generator is used to extract data source elements such as mappings, pattern
sets, patterns, merge rules, and lists from the configuration tables of a Nucleus 360 database.
Configuration Assistant facilitates extraction of these data source elements in XML format, whereas the
Mapping Configuration Generator extracts these data source elements in SQL format.
A configuration file, mapping_config.cfg, is used by this application. It contains a list of all the
configuration tables that need to be extracted as SQL SELECT statements.
Following is the list of current configuration tables that are set up in the mapping_config.cfg file.
Tables Tables
np_cfg_mapping np_cfg_pattern_def_component
np_cfg_map_source_type np_cfg_pattern
np_cfg_map_parameter_value np_cfg_pattern_def_element
np_cfg_fld np_cfg_pattern_variable
np_cfg_task np_cfg_pattern_variable_def
np_cfg_task_properties np_cfg_pattern_variable_output
np_cfg_map_field_class_dtl np_cfg_pattern_function
np_cfg_map_code_parser np_cfg_pattern_function_config
np_cfg_list np_cfg_pattern_set_function
np_cfg_list_item np_cfg_pattern_set_func_config
np_cfg_pattern_set np_rule
np_cfg_data_src_pattern_set np_rule_detail
np_cfg_pattern_def_parameter np_rule_detail_execution
np_cfg_pattern_set_pattern np_rule_detail_implementation
Option Description
-V Extended Version Information (IMS Health Internal)
-c<db-config-file-name> Database configuration file. Default is dbcfg.txt
-l<log-file-name> Log file
-n Dry Run. Changes will not be applied to the database.
-C<config-file-name> Configuration file name
30.2.2 Output
Mapping Configuration Generator generates INSERT scripts for all the configuration tables listed in
mapping_config.cfg. These SQL scripts should be placed in folder <root
directory>\np\datasource\Mapping_Configuration to be executed when database configuration is
done. Refer to the Nucleus 360 Configuration Check List for details on using this application.
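As an illustration only, a dry-run invocation using the options above might look like the following; this
section does not name the executable, so the binary name used here is hypothetical, and the log file
name is illustrative:
$ cd %DRTE_NP_BIN%
$ rem npmappingconfiggen.exe is a hypothetical binary name used for illustration
$ npmappingconfiggen.exe -lmapgen.log -Cmapping_config.cfg -n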
30.3 Troubleshooting
Failure Condition
The configuration file is not present in the specified path.
Solution
Make sure that the configuration file name and path are passed correctly and the file is present at the
specified path.
31.5 Troubleshooting
33.1 Overview
mqtest.exe is a utility program that is used to test the real-time message queue configuration for server
layer processes.
Parameter Description
-d Destination queue name
-o Operation [ READ | WRITE | DELETE | LISTEN | COPY | COUNT | PURGE ]
Note: LISTEN operation is applicable only for Oracle Advanced Queuing. Not
applicable for TIBCO and WebSphere MQ.
-m Message Id
-f Import data from file to destination
-s Source queue name. Valid only for COPY operation.
-c Database configuration file from which the message queue connection details are
read, default is dbcfg.txt
-r Message recipient or consumer name, defaulted to NP_APP_SL_MQTEST.
Applicable only for Oracle Advanced Queuing. Not applicable for TIBCO and
WebSphere MQ.
-? Display command line options
-v Version Information
-V Extended Version Information
33.3.3 To delete all messages from a queue without displaying them on the screen
(e.g. WBI.TO.NP.QL)
$ cd %DRTE_NP_BIN%
$ mqtest -oPURGE -dWBI.TO.NP.QL
33.3.5 To push an XML message file's contents (e.g. test.xml) to the queue (e.g.
WBI.TO.NP.QL)
$ cd %DRTE_NP_BIN%
$ mqtest -oWRITE -dWBI.TO.NP.QL -ftest.xml
34.1 Overview
Unique_ID_Generator.exe is a utility program that is used to generate unique keys for each distinct
address. To configure a unique key for an entity, refer to the configuration guide (NUC_5.2.0_CG_R1.doc).
Parameter Description
-? Help. Display command line options.
-e Entity ID. This is a required parameter; a valid value should be passed.
Valid value: 318001 (Unique Address Configuration).
-m Mode of Operation. This is a required parameter; a valid value should be passed.
Valid operation modes are 1 - Incremental, 2 - Full Refresh, 3 - Process History
Data, and 4 - Single Record.
-r Record ID. When the mode of operation is 4, the record ID should be passed using
this parameter.
-l Log file name. If the file name contains the string 'datetime', it will be expanded to
YYYYMMDD_HHMMSS format. This is a required parameter; a valid value should
be passed.
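For example, a typical incremental run for the Unique Address configuration (318001) might look like the
following; the log file name is illustrative and includes the 'datetime' token so that it is expanded as
described above:
$ cd %DRTE_NP_BIN%
$ Unique_ID_Generator.exe -e318001 -m1 -luniqueid_datetime.log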
34.3.1 Output
After a successful run, Unique ID Generator writes the total number of records processed and the total
unique IDs generated to the log file.
Unique ID Generator generates a unique key for each distinct address and inserts it into the
UNIQUE_ADDRESS entity. The CUS_ADDRESS.GLOBAL_ID field is updated with these unique keys if it
is not already populated.
34.4 Troubleshooting
35.1.1 Usage
$ cd %DRTE_NP_BIN%
$ runstoredproc -s"pkg_db_maintenance.process_maintenance(<ID>)"
Replace the <ID> with the appropriate value from the following table.
ID Description
5 Foundation Maintenance
6 Post-Delta Process
7 Pre-Match Pre-Process (Post-Hash)
16 Pre-ID Assign Process
17 Post-ID Assign Process
30 Match Pre-Process (Post Match Pre Process)
31 Pre-CCV Process
32 DQT Table Process
33 Pre-Delta Process
34 Staging Verification Queue Population
35 OPScrub Table Maintenance
36 Reconciliation Extract
37 Practitioner Verification Export
38 Practitioner Verification Import
39 Sampling Status Staging Table Population
40 Re-verification Queue Population
50 Match Index Process
100 Pre-Address Extract Pre-Process
101 Pre-Profile Extract Pre-Process
102 Pre-Customer Extract Pre-Process
103 NP Configuration Tables
104 Post-Data Quality
105 Post-Pre-ID Assign
106 Pre-Hash
107 Post-Standardizer
108 Post-Merge
36.1.1 NUC360Standardizer
Service Type Windows Service
Description This service is used for address standardization. It internally calls the
Nucleus address standardization daemon. It is generally used by
Customer Maintenance to standardize the address while creating
profiles.
Nuc Server Type App Server
Location in Server The Service executable can be located by viewing the properties of the
service. Ideally it is available in the <NP_ROOT>\np\bin folder.
Status In the Windows Services console, look for this service. The Status
column shows the current status of this service (whether it is running or
not).
Launch Options This service is by default configured to start automatically when the app
server boots.
Launch Parameters Default parameters are configured during installation in the config file. No
additional parameters are required for launching this service. However, to
change any parameter value, change it in the config file as mentioned
below.
Config File nuc360standardizer.exe.config
This file will be available at the same place where the service executable
is located.
Log File Log file is an argument to the daemon process. The name and location of
the log file can be found in the configuration file.
Restart Steps This service can be started in two ways.
1. If it is configured in the ONE Point console, it can be restarted
from ONE Point.
2. It can be started or restarted, or stopped, from the Windows
Services console.
36.1.2 Nuc360DataImport
Service Type Windows Service
Description This service is used to manage customers in Nucleus 360. It internally
invokes npdataimport.exe in a port listening mode. Hence any data
pushed to this port in a valid format will be received and responded to.
Nuc Server Type App Server
Location in Server The Service executable can be located by viewing the properties of the
service. Ideally it is available in the <NP_ROOT>\np\bin folder.
Status In the Windows Services console, look for this service. The Status
column shows the current status of this service (whether it is running or
not).
Launch Options This service is by default configured to start automatically when app
server boots.
Launch Parameters Default parameters are configured during installation in the config file. No
additional parameters are required for launching this service. However, to
change any parameter value, change it in the config file as mentioned
below.
Config File nuc360dataimport.exe.config
This file will be available at the same place where the service executable
is located.
Log File Log file is an argument to the dataimport process. The name and location
of the log file can be found in the configuration file.
Restart Steps This service can be started two ways.
1. If it is configured in the ONE Point console, it can be restarted
from ONE Point.
2. It can be started or restarted, or stopped, from the Windows
Services console.
User/password configuration The database user ID\password to which this service points is configured
in the dbcfg.txt file. The dbcfg.txt file can be found in the
<NP_ROOT>\np\bin folder.
36.1.3 Nuc360PollingSvc
Service Type Windows Service
Description This service is used to keep other services active. When they are not
used for a long time, services such as CustomerLookup and
CustomerManage release their database session connectivity and free the
buffer. The next time the services are called, they re-establish the
database connection and then perform the operation. This takes longer
and causes inconvenience to end users. Hence, the polling service makes
a fake call to those services at regular intervals to keep them active at all
times.
Nuc Server Type Web Server
Location in Server The Service executable can be located by viewing the properties of the
service.
Status In the Windows Services console, look for this service. The Status
column shows the current status of this service (whether it is running or
not).
Launch Options This service is by default configured to start automatically when the web
server boots.
Launch Parameters Default parameters are configured during installation in the config file. No
additional parameters are required for launching this service. However, to
change any parameter value, change it in the config file as mentioned
below.
Config File Nuc360PollingSvc.exe.config
This file will be available at the same place where the service executable
is located.
Polling frequency can be configured or changed in the config file. Look
for PollingTimeInSec and provide the required value. This value is in
seconds and the fake service call will be made at this interval.
Log File This service is used for fake service calls and hence no log file is used.
Restart Steps This service can be started two ways.
1. If it is configured in the ONE Point console, it can be restarted
from ONE Point.
2. It can be started or restarted, or stopped, from the Windows
Services console.
User/password configuration This service uses the app user name and password, which are configured
in the configuration file. Look for the strings UserName and Password
in the configuration file and make the necessary changes to these
arguments.
Launch Parameters This service launches with default parameters. No parameters are
required to be modified by user.
Config File This service comes with its default configuration files and no modification
is required by user.
Log File This service has its own log file and it does not take any log file
parameter. Ideally it writes to the C:\Program Files (x86)\Apache
Software Foundation\Tomcat 5.5\logs folder.
Restart Steps It can be started or restarted, or stopped, from the Windows Services
console.
User/password configuration This service does not use any user ID for authentication. It is just a layer
to communicate between web apps from the client and the web server.
36.1.6 CustomerLookup
Service Type Web Service
Description This web service is used to look up customers in the Nucleus 360
database.
Nuc Server Type Web Server
Location in Server Launch IIS manager by typing inetmgr. Then go to Sites->Default Web
Site->CustomerLookup. Right-click CustomerLookup and click
Explore. This opens the CustomerLookup installed folder.
Status IIS Web Site must be running for CustomerLookup to work.
Launch Options IIS by default starts when the web server is booted.
Launch Parameters Default parameters are configured during installation in the config file. No
additional parameters are required for launching this service. However, to
change any parameter value, change it in the config file as mentioned
below.
Config File web.config
This file will be available in the CustomerLookup installed folder.
Log File Log file path is configured in the config file. Look for logFilePath in the
web.config file which points to the log file.
Restart Steps Since this is a web service, we cannot restart it. However, if this service is
not responding, the Default Web Site can be restarted.
Note: Restarting Default Web Site impacts all the web services hosted
under that.
User/password configuration The database information to which this web service points for customer
search is configured in the web.config configuration file. Look for
connectionStrings in the web.config file to make any
user ID\password changes.
36.1.7 CustomerLookup2
Service Type Web Service
Description This web service is used to look up customers in the Nucleus 360
database.
Nuc Server Type Web Server
Location in Server Launch IIS manager by typing inetmgr. Then go to Sites->Default Web
Site->CustomerLookup2. Right-click CustomerLookup2 and click
Explore. This opens the CustomerLookup2 installed folder.
Status IIS Web Site must be running for CustomerLookup2 to work.
Launch Options By default, IIS starts when the web server is booted.
Launch Parameters Default parameters are configured during installation in the config file. No
additional parameters are required to launch this service. However, to
change any parameter value, change it in the config file as mentioned
below.
Config File web.config
This file will be available at the CustomerLookup2 installed folder.
Log File Log file path is configured in the config file. Look for logFilePath in the
web.config file, which points to the log file.
Restart Steps Since this is a web service, we cannot restart it. However, if this service is
not responding, the Default Web Site can be restarted.
Note: Restarting Default Web Site impacts all the web services hosted
under that.
User/password configuration The database information to which this web service points for customer
search is configured in the web.config configuration file. Look for
connectionStrings in the web.config file to make any
user ID\password changes.
36.1.8 QuickSearch
Service Type Web Service
Description This web service is used to look up customers in Nuc 360 5.2 portal and
Customer Maintenance by selecting the Quick Search option rather than
searching in the database. This search is performed on index-based files.
Nuc Server Type Web Server
Location in Server Launch IIS manager by typing inetmgr. Then go to Sites->Default Web
Site->Nuc360QuickSearch5.2. Right-click on Nuc360QuickSearch5.2
and click Explore. This opens the Nuc360QuickSearch5.2 installed
folder.
Status IIS Web Site must be running for Nuc360QuickSearch5.2 to work.
Launch Options By default, IIS starts when the web server is booted.
Launch Parameters Default parameters are configured during installation in the config file. No
additional parameters are required to launch this service.
36.1.9 NUC360PingFederateService
Service Type Web Service
Description This is a web service used to authenticate and authorize client users who
log in to Nucleus 360 apps over the internet.
Nuc Server Type Web Server
Location in Server Launch IIS manager by typing inetmgr. Then go to Sites->Default Web
Site->NUC360PingFederateService. Right-click
NUC360PingFederateService and click Explore. This opens the
NUC360PingFederateService installed folder.
Status IIS Web Site must be running for NUC360PingFederateService to work.
Launch Options By default, IIS starts when the web server is booted.
Launch Parameters Default parameters are configured during installation in the config file. No
additional parameters are required to launch this service.
Config File web.config
Log File This web service does not write to any log file.
Restart Steps Since this is a web service, we cannot restart it. However, if this service is
not responding, the Default Web Site can be restarted.
Note: Restarting Default Web Site impacts all the web services hosted
under that.
User/password configuration This service does not use any user ID\password.
36.1.11 ProfileDetailsService
Service Type Web Service
Description This is a web service used to get profile details in the Nuc360 Portal.
Nuc Server Type Web Server
Location in Server Launch IIS manager by typing inetmgr. Then go to Sites->Default Web
Site->Nuc360ProfileDetails5.2. Right-click Nuc360ProfileDetails5.2
and click Explore. This opens the Nuc360ProfileDetails5.2 installed
folder.
Status IIS Web Site must be running for Nuc360ProfileDetails5.2 service to
work.
Launch Options By default, IIS starts when the web server is booted.
Launch Parameters Default parameters are configured during installation in the config file. No
additional parameters are required to launch this service.
Config File web.config
Log File This web service does not write to any log file.
Restart Steps Since this is a web service, we cannot restart it. However, if this service is
not responding, the Default Web Site can be restarted.
Note: Restarting Default Web Site impacts all the web services hosted
under that.
User/password configuration The database information to which this web service points for customer
search is configured in the web.config configuration file. Look for
connectionStrings in the web.config file to make any
user ID\password changes.
36.1.12 CustomerManage
Service Type Web Service
Description This web service is used to manage customers in the Nucleus 360
database. Managing the customer means adding/editing/deleting
customers for the configured data source. This service internally invokes
npdataimport.exe listening to a port in the Nucleus 360 app server.
Nuc Server Type Web Server
Location in Server Launch IIS manager by typing inetmgr. Then go to Sites->Default Web
Site-> CustomerManage. Right-click CustomerManage and click
Explore. This opens the CustomerManage installed folder.
Status IIS Web Site must be running for CustomerManage to work.
Launch Options By default, IIS starts when the web server is booted.
Launch Parameters Default parameters are configured during installation in the config file. No
additional parameters are required to launch this service. However, to
change any parameter value, change it in the config file as mentioned
below.
Config File web.config
This file is available in the CustomerManage installed folder. If the
app server, or the port used on the app server, is changed for any
reason, the same change must be reflected in the configuration file. Look for
HostName and Port in the web.config file and make the necessary
changes. Then restart the Default Web Site.
Log File Log file path is configured in the config file. Look for logFilePath in the
web.config file, which points to the log file.
Restart Steps Since this is a web service, we cannot restart it. However, if this service is
not responding, the Default Web Site can be restarted.
Note: Restarting the Default Web Site impacts all the web services hosted
under it.
User/password configuration The database information to which this web service points for
managing customers is configured in the web.config configuration file. Look for
connectionStrings in the web.config file to make any
user ID\password changes.
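For orientation only, a hypothetical web.config fragment containing the settings this section refers to. The key names mirror the HostName, Port, and connectionStrings entries mentioned above; the server name, port, and connection string values are placeholders, not shipped defaults.
<configuration>
  <appSettings>
    <!-- App server and port on which npdataimport.exe listens (example values) -->
    <add key="HostName" value="NP-APP-01" />
    <add key="Port" value="5555" />
  </appSettings>
  <connectionStrings>
    <!-- User ID\password changes are made here -->
    <add name="Nucleus360" connectionString="Data Source=SAMPLE_DB;User Id=COMMON_USER;Password=*****" />
  </connectionStrings>
</configuration>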
3. On the rightmost side of the console, check the Web Site status. If the Web Site is not running,
click Start. If the Web Service is still not responding as expected, click Restart, or click Stop
and then Start again.
37.1 Overview
The Adhoc Match process identifies the corresponding CCV profiles within the Nucleus system for
input data supplied in a pre-defined format.
37.3.1 Running the Adhoc Match Process Using the Komodo Job Scheduler
Scheduler:
1. Right-click the 100_AdhocMatch_Prof_Target job and click Schedule Job. This opens the
Schedule page.
Schedule the job by setting the required scheduler parameters.
OR
Click the Scheduler tab.
Select the Nucleus360_AdhocMatch project from the Project dropdown and the
100_AdhocMatch_Prof_Target job from the Job dropdown.
Set the required parameters to schedule the job.
2. Right-click the 200_AdhocMatch_Prof job and click Schedule Job. This opens the
Schedule page.
Schedule the job by setting the required scheduler parameters.
OR
Click the Scheduler tab.
Select the Nucleus360_AdhocMatch project from the Project dropdown and the 200_AdhocMatch_Prof
job from the Job dropdown.
Set the required parameters to schedule the job.
Logger:
Click Logger to see the status of the jobs.
Click the job link (200_Adhoc_Match_Prof) to see the details of the job status based on the Log
Type dropdown value.
Dashboard:
Click Dashboard and select a row to see the statistics of that job run flow.
SYNOPSIS
AdhocMatch_Prof.ps1 "NP_ROOT" "1"
Runs the Adhoc Match process for the input files and cleans up
the intermediate files created for processing each input file.
AdhocMatch_Prof.ps1 "NP_ROOT"
This also runs the Adhoc Match process for the input files and
cleans up the intermediate files created for processing the input file.
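For orientation, a hypothetical invocation from a Windows command prompt; the execution-policy switch and the assumption that the script is in the current directory are illustrative, not documented Nucleus defaults:
powershell.exe -ExecutionPolicy Bypass -File AdhocMatch_Prof.ps1 "NP_ROOT" "1"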
37.4 Troubleshooting
If the Adhoc Match process fails during any stage, the user can copy the input data files into the
<NP_ROOT>\third_party\np\in\AdhocMatch folder and restart the process. Refer to section 37.5,
Restarting the Adhoc Match Process, for further details. If the user is not sure that the process will
produce the expected output for the input data, it is recommended to run the Adhoc Match process in
Debug mode. Refer to the previous section for running the process in Debug mode.
37.4.1 Adhoc Match process completes but output files within .zip file are empty
37.4.1.1 Explanation
This problem occurs if the Adhoc Match configuration files are not modified properly.
37.4.1.2 Solution
Perform the following steps:
1. Check the configuration files for correctness.
2. Delete the intermediate configuration and output files.
3. Delete the entries in the NUC_ADHOC_HASH_KEYS table for the last run_id (see the sketch after this list).
4. Copy the input data file into the <NP_ROOT>\third_party\np\in\AdhocMatch folder.
5. Restart the Adhoc Match process (refer to section 37.5 for more details).
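A minimal SQL*Plus sketch for step 3, assuming NUC_ADHOC_HASH_KEYS keys its rows by a RUN_ID column and the last run carries the highest value:
-- Remove the hash-key entries left behind by the last (failed) run
DELETE FROM nuc_adhoc_hash_keys
 WHERE run_id = (SELECT MAX(run_id) FROM nuc_adhoc_hash_keys);
COMMIT;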
37.4.2.1 Explanation
Within the matchlog.log file, the user receives errors of the following kind, or the match completes
successfully but produces 0 KB output files:
SocketException: System.Net.Sockets.SocketException (0x80004005): No connection could be made
because the target machine actively refused it <Application Server IP>:<Address Standardization
application port number>
37.4.2.2 Solution
Perform the following steps:
1. Restart the address and name standardization daemon on the Nucleus 360 Application Server.
2. If the Adhoc Match process is still running, stop the process.
3. Delete the intermediate configuration and output files.
4. Delete the entries in the NUC_ADHOC_HASH_KEYS table for the last run_id.
5. Copy the input data file into the <NP_ROOT>\third_party\np\in\AdhocMatch folder.
6. Restart the Adhoc Match process (refer to section 37.5 for more details).
[NUCLEUSPHARMA]
POOLNAME=NUCLEUSPHARMA
MAXCON=100
LOGON=scott
PWD=pmcav
HOSTID=sample
RDBMS=ORACLE
LOGFILE=np_pool.log
LOGDIRECTORY=.
LOGENABLED=n
LOGSEPARATOR=------
APPMONITORENABLED=n
[NPMS]
LOGON=np_admin
PWD=xxxxxxxx
HOSTID=aenp01
PLUGIN=NPMsgServiceAQ.dll
Slow performance
Effective use of the Oracle Cost-based Optimizer (CBO)
An execution plan is the series of steps Oracle performs to retrieve data from the tables. A component of
the database known as the Optimizer decides the execution plan. Oracle supports two types of
optimizer: the cost-based optimizer (CBO) and the rule-based optimizer (RBO).
To determine the best execution path, the CBO uses database information such as table size, number of
rows, key spread, and so forth, rather than rigid rules. The statistics required for the CBO are available
once an object (table/index) has been analyzed. The frequency of analyzing is dependent on the rate of
change and on the size of the objects. When objects are analyzed, Oracle gathers statistics about the
object. If statistics are missing, then the CBO can only guess what data might be in the table. The
statistics allow the optimizer to determine the most likely size of the result set of rows queried from each
table.
Nucleus 360 processes are designed to take advantage of the CBO. Oracle recommends using the CBO,
and the role of the rule-based optimizer will probably diminish over time.
Refer to the process operations instructions for the command used to analyze the objects.
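As a generic Oracle illustration only (not the Nucleus-specific command; the schema owner and table name below are examples), statistics can be gathered from SQL*Plus with the DBMS_STATS package:
-- Gather statistics so the CBO can cost execution plans accurately
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => 'NP_ADMIN',  -- hypothetical schema owner
    tabname => 'NP_RULE');  -- example table cited elsewhere in this guide
END;
/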
[NUCLEUSPHARMA]
POOLNAME=NUCLEUSPHARMA
MAXCON=10
LOGON=COMMON_USER
PWD=<ENCRYPTED PASSWORD>
HOSTID=SAMPLE_DB
RDBMS=ORACLE
LOGFILE=np_pool.log
LOGENABLED=n
LOGSHOWDATETIME=y
LOGSEPARATOR=---
LOGDIRECTORY=c:\np\log
[NPMS]
LOGON=np32_1100_admin
PWD=<ENCRYPTED PASSWORD>
HOSTID=aenp01
PLUGIN=NPMsgServiceAQ.dll
Solution
Add the user to the Nucleus 360 database.
Solution
Add this variable to My Computer | Properties | Environment.
Exceptions
Following are some variations of the most common errors that you may receive; others may exist.
1. Incomplete forms; empty fields, grids, and dropdown lists.
2. Errors opening a screen: Object Required 424, Invalid property array index 381.
3. Screens showing item codes (e.g., 100001) where they should display item descriptions (e.g., Female).
4. Errors creating objects (mapping, merge, list, pattern, pattern set): Invalid Key 35603; Object
variable or With block variable not set.
Explanation
The MT servers shut down due to a timeout, but the Configuration Assistant application GUI was still
running. The GUI attempts to restart the MT servers but fails to initialize all objects.
Solution
Close the Configuration Assistant GUI, launch it again, and check whether the issue persists. If the issue
still exists, note all relevant information regarding the issue and contact IMS Health.
Explanation 1
The IACE directories have expired and you are in CERTIFIED mode.
Solution 1
Make sure the latest files from Business Objects (First Logic) have been applied. Refer to section 9.5.2.1
for an alternate solution.
Explanation 2
The path variable in the operating system environment is pointing to a version other than the latest
version (which is the officially supported version).
Solution 2
Check the PATH variable to confirm if the path set is pointing to the correct officially supported version.
For example, if the value is set as
PATH=c:\pw\v710\iacelib;c:\pw\v710\adm
but the supported version of the Firstlogic libraries is, for example, 7.70c (installed in the
c:\pw\v770C directory), update the environment variable to point to the correct version:
PATH=c:\pw\v770C\iacelib;c:\pw\v770C\adm
Explanation 3
The error is due to the wrong path in the ace.cfg file or country-specific configuration file.
Solution 3
Make sure the path information in the ace.cfg file and country-specific configuration file is correct. Also
make sure that the DCT file and PATH are specified properly.
Explanation
The IACE directory files have expired.
Solution
Make sure the latest files from Business Objects (First Logic) have been applied.
Solution
A copy of the ace.cfg file should be present in the defs directory. Create a soft link, or copy the file,
so that it can be used from the bin directory.
On Windows 2008:
%DRTE_NP_BIN%> copy %DRTE_NP_DEFS%\ace.cfg ace.cfg
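Where an actual link is preferred, newer Windows releases support symbolic links via mklink; this variant is an assumption, not a documented Nucleus step, and it requires administrative privileges:
%DRTE_NP_BIN%> mklink ace.cfg %DRTE_NP_DEFS%\ace.cfg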
Solution
If you need the address standardization option enabled, please contact your support team.
Ace.cfg
The ace.cfg file is required for all processes performing address standardization. This configuration
file allows the user to specify the base directory where IACE is installed, which configuration files
need to be used, and how the data should be standardized. It consists of:
1. [ACE_BASE_DIRECTORY] block: contains BASE_DIRECTORY, which specifies the
base directory where ACE/IACE is installed.
2. Other blocks: such as ACE_USA, IACE_USA, IACE_CANADA, IACE_UK,
IACE_BRAZIL, and IACE_ITALY. These blocks contain CONFIG_FILE_NAME, which specifies the
name and path of the configuration file to be used.
The contents of ace.cfg are as follows.
;###########################################################################
;## NUCLEUS Pharma(TM) ACE/IACE configuration file.
;## (c) 2003 Dendrite International(R) All rights reserved
;##
;## This file is generated automatically by the configure tool
;##
;## Version "%Z% Release=%R% Frozen=%G%:%U% copied out=%H%:%T% %Z%"
;###########################################################################
[ACE_BASE_DIRECTORY]
BASE_DIRECTORY=ACE_IACE_BASE_DIR
;[DEFAULT]
;CONFIG_FILE_NAME=..\defs\ace\iace_international.cfg
;[ACE_USA]
;CONFIG_FILE_NAME=..\defs\ace\ace_usa.cfg
[IACE_USA]
CONFIG_FILE_NAME=..\defs\ace\iace_usa.cfg
;[IACE_CANADA]
;CONFIG_FILE_NAME=..\defs\ace\iace_canada.cfg
;[IACE_UK]
;CONFIG_FILE_NAME=..\defs\ace\iace_uk.cfg
;[IACE_BRAZIL]
;CONFIG_FILE_NAME=..\defs\ace\iace_brazil.cfg
;[IACE_ITALY]
;CONFIG_FILE_NAME=..\defs\ace\iace_italy.cfg
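To enable another country, uncomment the corresponding block. For example, to enable Canadian address standardization alongside the US engine (the path shown is the one already present in the shipped file):
[IACE_CANADA]
CONFIG_FILE_NAME=..\defs\ace\iace_canada.cfg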
iace_usa.cfg
This file is described here as an example. It contains various blocks as given below:
ACE_FILE_STRUCTURE: This block specifies the directory and files used for standardization. All the
directories are relative to the base IACE installation directory.
ACE_COUNTRY_DETAILS: Country-specific information and the IACE engine to be used are
specified under this block.
DEFAULT_ACE_OPTIONS: This block contains options that control the behavior of IACE address
standardization and can be modified as per user requirements.
ACE_OUTPUT_COMPONENTS_RET_VAL_TYPE, ACE_OUTPUT_COMPONENTS_LENGTH:
These blocks specify the input and output line settings.
EXTERNAL_PARAMETERS_MAPPING: This is a mapping between constants used by the AceInterface
library and Postalsoft IACE constants. The user should not modify it.
1. Clear the Control Queue (NP_CTRL_Q).
Command Line: mqtest -oPURGE -dNP_CTRL_Q
Comments: This removes any unprocessed SHUTDOWN messages from the control queue before the
Data Import process is launched. An exit code of zero (0) is returned if the control queue is cleared
successfully; otherwise a non-zero value is returned.
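A minimal batch sketch, assuming mqtest is on the PATH, that purges the queue and acts on the documented exit code:
mqtest -oPURGE -dNP_CTRL_Q
if errorlevel 1 (
    echo Failed to purge NP_CTRL_Q -- do not launch Data Import.
    exit /b 1
)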
Dependency
Processes in this real-time configuration should not be run if a process defined in any of the below-listed
configurations is launched.
Manual Process Management (ID: 5000)
Prerequisite IDL Data Import (ID: 5100)
Manual Data Import (ID: 6000)
Manual DQT Index Maintenance (ID: 7000)
Manual Database Maintenance (ID: 8000)
Manual - Practitioner Verification Processes (ID: 13000)
1. Clear the Control Queue (NP_CTRL_Q).
Command Line: mqtest -oPURGE -dNP_CTRL_Q
Comments: This removes any unprocessed SHUTDOWN messages from the control queue before the
Data Import process is launched. An exit code of zero (0) is returned if the control queue is cleared
successfully; otherwise a non-zero value is returned.
9. Match Process.
Command Line: run_match.exe -c=3
Comments: The run script invokes the Match Pre-process first to generate candidates for
NP_MATCH_PENDING profiles. Only one copy of the pre-process will be run. If the pre-process
completes successfully, three instances of the match process (npmatch.exe) will be launched in
parallel. An exit code of zero (0) is returned if the Match process completes successfully; otherwise
a non-zero value is returned.
20. Post Reverse Data Monitor DB Maintenance.
Command Line: runstoredproc.exe -s"PKG_Db_Maintenance.Process_Maintenance(PROCESS_ID=>17)"
Comments: A log file of the format PKG_Db_Maintenance.Process_Maintenance_YYYYMMDD_HHMISS.log
will be created in the directory that is set in the DRTE_NP_LOGS environment variable. An exit code
of zero (0) is returned if the database maintenance process completes successfully; otherwise a
non-zero value is returned.
23. Unlock profiles that are locked in the STG_IMPORTED_PROFILE table.
Command Line: runstoredproc.exe -s"NPStgImpProfileMgr.DeleteLockedProfiles"
Comments: A log file of the format NPStgImpProfileMgr.DeleteLockedProfiles_YYYYMMDD_HHMISS.log
will be created in the directory that is set in the DRTE_NP_LOGS environment variable. An exit code
of zero (0) is returned if the staging table profiles are unlocked successfully; otherwise a non-zero
value is returned.
Dependency
Processes in this configuration should not be run in parallel. Processes in this configuration should not be
run if a process defined in any of the below-listed configurations is launched.
Real Time Data Import (ID: 1000)
Real Time Priority Data Import (ID: 1100)
Real Time Data Processing Services (ID: 3000)
Manual Data Import (ID: 6000)
Manual DQT Index Maintenance (ID: 7000)
Manual Database Maintenance (ID: 8000)
Manual - Practitioner Verification Processes (ID: 13000)
If the pre-process completes successfully, three instances of the match process (npmatch.exe) will be
launched in parallel. The number of copies to be launched can be controlled by changing the value set
in the -c option.
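For example, to launch five parallel copies of npmatch.exe instead of three (assuming the same flag syntax shown earlier):
run_match.exe -c=5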
16. Launch the Merge process to identify Customer merges.
Command Line: run_merge_extract.exe -r=<n>
Substitute <n> with the Merge Rule ID stored in the NP_RULE table.
Comments: A log file of the format npmerge_<MergeType>_<n>_YYYYMMDDHHMISS.log will be created
in the directory that is set in the DRTE_NP_LOGS environment variable. An exit code of zero (0) is
returned if the Merge process completes successfully; otherwise a non-zero value is returned.
17. Launch the Merge process to identify Address merges.
Command Line: run_merge_extract.exe -r=<n> -a
Substitute <n> with the Merge Rule ID stored in the NP_RULE table.
Comments: A log file of the format npmerge_<MergeType>_<n>_YYYYMMDDHHMISS.log will be created
in the directory that is set in the DRTE_NP_LOGS environment variable. An exit code of zero (0) is
returned if the Merge process completes successfully; otherwise a non-zero value is returned.
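For example, if the Merge Rule ID stored in the NP_RULE table is 12 (a hypothetical value), the Customer merge run becomes:
run_merge_extract.exe -r=12
and the corresponding Address merge run:
run_merge_extract.exe -r=12 -a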
19. Validation Status Export.
Command Line: runstoredproc.exe -s"NPCMCustVerStatusExport_Custom.main"
Comments: A log file of the format NPCMCustVerStatusExport_Custom.main_YYYYMMDD_HHMISS.log
will be created in the directory that is set in the DRTE_NP_LOGS environment variable. An exit code
of zero (0) is returned if the export process completes successfully; otherwise a non-zero value is
returned.
22. Unlock profiles that are locked in the STG_IMPORTED_PROFILE table.
Command Line: runstoredproc.exe -s"NPStgImpProfileMgr.DeleteLockedProfiles"
Comments: A log file of the format NPStgImpProfileMgr.DeleteLockedProfiles_YYYYMMDD_HHMISS.log
will be created in the directory that is set in the DRTE_NP_LOGS environment variable. An exit code
of zero (0) is returned if the staging table profiles are unlocked successfully; otherwise a non-zero
value is returned.
Old File: %DRTE_THIRD_PARTY%\np\in\dea\DEA<*>.old
Old File: %DRTE_THIRD_PARTY%\np\in\ama\AMA<*>.old
Old File: %DRTE_THIRD_PARTY%\np\in\aoa\AOA<*>.old
3. Updating the Xtelligent Export Adapter configuration file for changes in Output
Queue Name and WebSphere MQ server connection details
Locate the np_is_export_config.xml configuration file in the <np_root>\np\<Adapter Deployment
Directory>\<Deployment Environment>\DataInterfaces\Interfaces\Configuration folder.
Example: <np_root>\np\is\prod\DataInterfaces\Interfaces\Configuration
Shut down the nponepoint.exe application and any active jobs managed by it before making any changes.
The changes will take effect only after restarting the nponepoint.exe application.
Locate the <MQ> node in the file. Under the <MQ> node, change the value of:
the <Owner> element to the new QueueManager name
the <mqchanneldefinition> element to the new channel name and the WebSphere server IP address,
with the port number in parentheses
the <Queue qid="WSMQ_OUT_Q"> element for changes in the Output Queue Name.
The placeholders in the following node need to be updated with actual values in the
np_is_export_config.xml file.
Note: For the default queue manager, the port number does not need to be specified explicitly.
<MQ>
<QProcessor>MQListener.QProcessor</QProcessor>
<Owner>QueueManager</Owner>
<mqchanneldefinition>Channel Name/TCP/WebSphere MQ Server IP Address(Port Number)</mqchanneldefinition>
<MessageType/>
<QueueNames>
<Queue qid="WSMQ_OUT_Q">Output Queue</Queue>
</QueueNames>
</MQ>
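For orientation, a hypothetical filled-in node. The queue manager QM1, channel NP.CHANNEL, server 10.0.0.5, port 1414, and queue NP.EXPORT.OUT are all example values, not defaults:
<MQ>
<QProcessor>MQListener.QProcessor</QProcessor>
<Owner>QM1</Owner>
<mqchanneldefinition>NP.CHANNEL/TCP/10.0.0.5(1414)</mqchanneldefinition>
<MessageType/>
<QueueNames>
<Queue qid="WSMQ_OUT_Q">NP.EXPORT.OUT</Queue>
</QueueNames>
</MQ>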
4. Updating the Xtelligent Import Adapter configuration file for changes in Refresh
Request and Exception queue names and WebSphere MQ server connection details
Locate the np_is_import_config.xml configuration file in the
<np_root>\np\<Adapter Deployment Directory>\<Deployment Environment
>\DataInterfaces\Interfaces\Configuration folder.
Example: <np_root>\np\is\prod\DataInterfaces\Interfaces\Configuration
Shut down the nponepoint.exe application and any active jobs managed by it before making any changes.
The changes will take effect only after restarting the nponepoint.exe application.
Locate the <MQ> node in the file. Under the <MQ> node, change the value of:
the <Owner> element to the new QueueManager name
the <mqchanneldefinition> element to the new channel name and the WebSphere server IP address,
with the port number in parentheses
the <Queue qid="WSMQ_REF_REQ_QUEUE_ID"> element for changes in the Refresh Request Queue Name
the <Queue qid="WSMQ_X_QUEUE_ID"> element for changes in the Exception Queue Name.
The placeholders in the following node need to be updated with actual values in the
np_is_import_config.xml file.
Note: For the default queue manager, the port number does not need to be specified explicitly.
<MQ>
<QProcessor>MQListener.QProcessor</QProcessor>
<Owner>QueueManager</Owner>
<mqchanneldefinition>Channel Name/TCP/WebSphere MQ Server IP Address(Port Number)</mqchanneldefinition>
<MessageType/>
<QueueNames>
<Queue qid="WSMQ_REF_REQ_QUEUE_ID">Refresh Request Queue</Queue>
<Queue qid="WSMQ_X_QUEUE_ID">Exception Queue</Queue>
</QueueNames>
</MQ>
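Analogously, a hypothetical filled-in import node; all values below are examples, not defaults:
<MQ>
<QProcessor>MQListener.QProcessor</QProcessor>
<Owner>QM1</Owner>
<mqchanneldefinition>NP.CHANNEL/TCP/10.0.0.5(1414)</mqchanneldefinition>
<MessageType/>
<QueueNames>
<Queue qid="WSMQ_REF_REQ_QUEUE_ID">NP.REFRESH.REQ</Queue>
<Queue qid="WSMQ_X_QUEUE_ID">NP.EXCEPTION</Queue>
</QueueNames>
</MQ>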
APPROVALS
Your signature indicates this document was reviewed and verified for accuracy and thoroughness and
that you approve the contents of the document.
Your electronic record of approval indicates this document was reviewed and verified for accuracy and
thoroughness and that you approve the contents of the document.
Title and Name
Senior Development Manager, Technology Solutions
Shankar Kanagaraj