
Technical Best Practices

Oracle Utilities Application Framework


ORACLE WHITE PAPER | DECEMBER 2016 (4.3.0.3.0)
Table of Contents

Introduction 1

Caveat 1

Conventions used in this whitepaper 1

Introduction 2

Background of Oracle Utilities Application Framework 2

Architectural Changes 3

Installation Best Practices 4

Read the Installation Guide 4

Ensure the prerequisites are installed 4

Environment Practices 4

Using multiple administrators 5

Checking Java Installation 6

Location of Installation Logs 7

XML Parser Errors in installation 7

AppViewer cannot Co-Exist in Archive Mode 8

Implementing Secure Protocols (https/t3s) 8

Disabling protocols 10

Using Oracle WebLogic Domain Templates 11

General Best Practices 11

Limiting production Access 11

Regular Collection and Reporting of Performance Metrics 12

Respecting Record Ownership 12



Backup of Logs 13

Post Process Logs 13

Check Logs For Errors 14

Optimize Operating System Settings 14

Optimize connection pools 14

Read the available manuals 15

Technical Documentation Set Whitepapers available 16

Using Automated Test Tools 19

Custom Environment Variables or JAR files 19

Secure default userids 20

Consider different userids for different modes of access 20

Don’t double audit 20

Use Identical Machines 21

Regularly restart machines 21

Avoid using direct SQL to manipulate data 21

Minimize Portal Zones not used 22

Routine Tasks for Operations 22

Typical Business Day 23

Login Id versus Userid 23

Hardware Architecture Best Practices 24

Single Server Architecture 24

Simple Multi-tier Architecture 25

Multiple Web Application Servers 26

High Availability Architecture 27



Failover Best Practices 28

Online and Batch tracing and Support Utilities 29

General Troubleshooting Techniques 29

Overriding System Date 31

System Wide 31

User Specific Date 32

Transaction Timeouts 32

Configuration of timeouts 33

Implementing guidelines 34

Java Garbage Collection Guidelines 35

Security Configuration 36

Shortcuts When Processing Templates 36

Using Oracle JRockit 37

Accessing JMX for Oracle WebLogic 38

UCP JMX Interface 39

Using Java Mission Control with Oracle Utilities Application Framework 41

Overload Protection 42

Resource Management 43

Data Management Best Practices 43

Respecting Data Diversity 44

Information Lifecycle Management 44

Data Retention Guidelines 45

Removal of Staging Records 46



Partitioning 47

Compression 48

Database Clustering 48

Backup and Recovery 49

Client Computer Best Practices 49

Make sure the machine meets at least the minimum specification 49

Internet Explorer Compatibility Mode Settings 50

Popup Blockers 50

Internet Explorer Caching Settings 50

Clearing Internet Explorer Cache 51

Optimal Network Card Settings 51

Network Best Practices 51

Network bandwidth 51

Ensure legitimate Network Traffic 52

Regularly check network latency 52

General Networking Guidelines 53

Web Application Server Best Practices 53

Make sure that the access.log is being created 53

Examine Memory Footprint 54

Turn off Debug 54

Load balancers 55

Preload or Not? 56

Native or Product provided utilities? 57



Hardware or software proxy 57

What is the number of Web Application instances do I need? 57

Configuring the Client Thread Pool Size 58

Defining external LDAP to the Web Application Server 60

Appropriate use of AppViewer 61

Fine Grained JVM Options 61

Customizing the server context 63

Clustering or Managed? 63

Clustering and Environmental configuration settings 65

Allocate port numbers appropriately 65

Monitoring and Managing the Web Application Server using JMX 66

Enabling autodeployment for Oracle WebLogic console 67

Password Management solution for Oracle WebLogic 68

Error configuring Oracle WebLogic credentials 68

Corrupted SPLApp.war 69

Web Application Server Logs 69

Enabling additional Java options 70

Using setUserOverrides.sh for Oracle WebLogic 12c 70

Native vs Embedded Oracle WebLogic Mode 71

CLIENT-CERT Support 73

Implementing Work Managers 75

Implementing Multiple Environments In A Single Domain 76

Business Application Server Best Practices 78



Cache Management 78

Using JMX with the Business Application Server 78

Replicating the txrpt statistics 79

Database Connection Management 81

XPath Memory Management 81

Enabling Service Timings 82

Oracle WebLogic Datasource Support 83

Database Best Practices 88

Regularly Calculate Database Statistics 88

Ensure I/O is spread evenly across available devices 89

Use the Correct NLS settings (Oracle) 89

Monitoring database connections 90

Consider changing Bit Map Tree parameter 92

OraGenSec command line Parameters 92

Building the Data Model 92

Why is there no referential integrity built into the database? 93

Building the Data Model 93

Function to join 93

Script to Create Constraints 94

Configuring Real Application Cluster Support 97

Using Database Compression 98

Monitoring Best Practices 98

Product Monitoring Capabilities 98



Using Oracle Enterprise for Monitoring 100

Appendix 100

Checking COBOL Installation 100

COBOL License Errors in Batch 101

Writing Files Greater than 4GB 102

Number of Child JVMS 102

COBOL Memory management 103

Killing Stuck Child JVM's 104



Introduction
This white paper outlines common and best practices, drawn from IT groups at sites around the world using Oracle Utilities Application Framework based products and from Oracle internal studies, that have benefited those sites in some positive way. This information is provided to guide other sites in implementing or maintaining the product.


Note:
For publishing purposes, the word product will be used to indicate all Oracle Utilities Application Framework based products.

Note:
This whitepaper has been updated to include advice for all versions of Oracle Utilities Application
Framework V4.x. This document does not include any advice for Oracle Utilities Application Framework
V2.x.

Caveat
While all care has been taken in providing this information, implementation of the practices outlined in this document
may NOT guarantee the same level of (or any) improvement. Not all practices outlined in this document will be
appropriate for your site. It is recommended that each practice be examined in light of your particular organizational
policies and use of the product. If the practice is deemed beneficial to your site, then consider implementing it. If the
practice is not appropriate (e.g. for cost and other reasons), then it should not be considered.

Conventions used in this whitepaper


The advice in this document applies to any product based upon Oracle Utilities Application Framework versions 2.1
and above. Refer to the installation documentation to verify which version of the framework applies to your version
of the product. For publishing purposes the specific facilities and instructions for specific framework versions will be
indicated with icons:

Advice or instructions marked with this icon apply to Oracle Utilities Application Framework
V4.0 based products and above.

Advice or instructions marked with this icon apply to Oracle Utilities Application Framework
V4.1 based products and above.

Advice or instructions marked with this icon apply to Oracle Utilities Application Framework
V4.2.0.0.0 based products and above.

Advice or instructions marked with this icon apply to Oracle Utilities Application Framework
V4.3.0.0.0 based products and above.



Note:
Advice in this document is primarily applicable to the latest version of the Oracle Utilities Application
Framework at time of publication. Some of this advice may apply to other versions of the Oracle Utilities
Application Framework and may be applied at site discretion.

Note:
In some sections of this document the environment variable $SPLEBASE (or %SPLEBASE% on Windows) is used. This denotes the root location of the product installation. Substitute the appropriate value for the environment used at your site.

Introduction
Implementation of the product at any site introduces new practices into the IT group to maintain the health of the
system and provide the expected service levels demanded by the business. While configuration of the product is
important to the success of the implementation (and subsequent maintenance), adopting new practices can help
ensure that the system will operate within acceptable tolerances and support the business goals.

This white paper outlines some common practices that have been implemented at sites, around the globe, that have
proven beneficial to that site. They are documented here so that other sites may consider adopting similar practices
and potentially deriving benefit from them as well.

The recommendations in this document are based upon experiences from various sites and internal studies, which
have benefited from implementing the practices outlined in the document.

Background of Oracle Utilities Application Framework


The Oracle Utilities Application Framework is a reusable, scalable and flexible java based framework which allows
other products to be built, configured and implemented in a standard way.

When Oracle Utilities Customer Care & Billing was migrated from V1 to V2, it was decided that the technical aspects of that product be separated to allow for reuse and independence from technical issues. The idea was that all the technical aspects would be concentrated in this separate product (i.e. a framework) and allow all products using the framework to concentrate on delivering superior functionality. The product was named the Oracle Utilities Application Framework (oufw is the product code). The framework was then used across existing and new products to support a common technology platform across Oracle Utilities.

The technical components are contained in the Oracle Utilities Application Framework which can be summarized as
follows:
» Metadata – The Oracle Utilities Application Framework is responsible for defining and using the metadata to
define the runtime behavior of the product. All the metadata definition and management is contained within the
Oracle Utilities Application Framework.
» UI Management – The Oracle Utilities Application Framework is responsible for defining and rendering the pages
and responsible for ensuring the pages are in the appropriate format for the locale.
» Integration – The Oracle Utilities Application Framework is responsible for providing the integration points to the
architecture. Refer to the Oracle Utilities Application Framework Integration Overview (Doc Id: 789060.1)
whitepaper available from My Oracle Support for more details.



» Tools – The Oracle Utilities Application Framework provides a common set of facilities and tools that can be used
across all products.
» Technology – The Oracle Utilities Application Framework is responsible for all technology standards compliance,
platform support and integration.
The figure below summarizes some of the facilities that the Oracle Utilities Application Framework provides:

» Meta Data: Layout, Personalization, Scripting, Roles, Rules, Language, Localization, Business Services, Business Objects, Maintenance Objects, DB Structure
» UI Management: Zones, Portal, Language, Locale, BPA Scripting, UI Maps
» Integration: Inbound Web Services, REST, Outbound Messages, Staging, Web Services Import, OSB Integration, SOA Integration
» Tools: Dictionary, To Do (Exceptions), Security, Auditing, ConfigTools, Algorithms, Scripting
» Technology: J2EE, JAX-WS, Database Connectivity, SOA, XPath, XQuery

There are a number of products from the Utilities Global Business Unit as well as from the Public Services Unit and
Financial Services Global Business Unit that are built upon the Oracle Utilities Application Framework. These
products require the Oracle Utilities Application Framework to be installed first and then the product itself installed
onto the framework to complete the installation process.

There are a number of key benefits that the Oracle Utilities Application Framework provides to these products:
» Common facilities – The Oracle Utilities Application Framework provides a standard set of technical facilities that allow products to concentrate on the unique aspects of their markets rather than on making technical decisions.
» Common methods of configuration – The Oracle Utilities Application Framework standardizes the technical
configuration process for a product. Customers can effectively reuse the configuration process across products.
» Common methods of implementation - The Oracle Utilities Application Framework standardizes the technical
aspects of a product implementation. Customers can effectively reuse the technical implementation process
across products.
» Quicker adoption of new technologies – As new technologies and standards are identified as being important
for the product line, they can be integrated centrally benefiting multiple products.
» Multi-lingual and Multi-platform - The Oracle Utilities Application Framework allows the products to be offered
in more markets and across multiple platforms for maximized flexibility
» Cross product reuse – As enhancements to the Oracle Utilities Application Framework are identified by a
particular product, all products can potentially benefit from the enhancement.

Note:
Use of the Oracle Utilities Application Framework does not preclude the introduction of product specific
technologies or facilities to satisfy markets. The framework minimizes the need and assists in the quick
integration of a new product specific piece of technology (if necessary).

Architectural Changes
Over the last few releases of the Oracle Utilities Application Framework the architecture has been optimized to take
advantage of the latest technological advances, provide flexibility and support varying deployment models. The
architectural changes over the last few releases include:



» IWS replaces XAI – The Inbound Web Services capability houses web services in the Oracle WebLogic domain rather than within the product itself, which was the architecture of XML Application Integration (XAI). This
allows greater flexibility in configuration and support for advanced security using Oracle Web Services Manager.
For more information refer to Migrating from XAI to IWS (Doc Id: 1644914.1) available from My Oracle Support.
» OSB replaces MPL – The Multi-purpose Listener capability was a limited service bus which has since been
replaced with an adapter to Oracle Service Bus. This allows greater implementation flexibility, increased
performance and a wide range of source and target technology implementations. For more information refer to
Oracle Service Bus Integration (Doc Id: 1558279.1) available from My Oracle Support
» Simpler Architecture – In past releases, it was possible to separate the Web Application Server and Business
Application Server. This technique was not popular with implementations due to the increased configuration
complexity and complications when patching. Based upon feedback from customers and partners the Web
Application Server and Business Application Server have been architecturally combined to support a more
streamlined and less costly architecture emphasizing different channels representing different users or modes of
access to the product.
» Channel Based Architecture – Over the last few releases, the architecture has been moving towards a channel
based architecture where the channels into the product are able to be separated from a configuration and
architectural point of view. This allows each channel to be segregated, managed, monitored and optimized for the
traffic within that channel. At the moment, online, web services and batch are on separate channels. More
channels will be added in progressive releases.

Note:
The advice in this whitepaper will cover the architectural principles outlined above.

Installation Best Practices


During the initial phases of an implementation, a copy of the product will need to be installed. During the implementation a number of additional copies will be installed, including production. This section outlines some practices that customers have used to make this process smooth.

Read the Installation Guide


One of the most important pieces of advice in this document is to read the installation guide that is supplied with the product. It provides valuable information about what needs to be installed and configured, as well as the order of installation. Failure to follow the instructions can cause unnecessary delays to the installation.

If you are upgrading to a new version, read the new installation guide as well; it contains instructions on how to upgrade as well as details of what has changed in the new version.

Ensure the prerequisites are installed


When installing, there is a number of third party prerequisite software packages that must be obtained (i.e. downloaded) before the actual installation of the product software can commence. Read the Installation Guide and Quick Installation Guide to download and install the prerequisite software prior to installing the product.

Note:
For customers who are upgrading, the installation of product and its related third party software is designed
so that more than one version of product can co-exist.

Environment Practices



Note:
There is a more detailed discussion of effective Environment Management in the Environment Management
document of the Software Configuration Management Series (Doc Id: 560401.1) whitepapers available from
My Oracle Support. Refer to that document for further advice.

When installing product at a site, each copy of product is regarded as an environment to perform a particular task or group of tasks. Typically, without planning, this can lead to a larger than anticipated number of environments. This can have a negative flow-on effect by increasing overall maintenance effort and resource usage (hardware and people), which may in turn cause delays in implementations. To minimize the impact of environments on their implementations, customers have used the following advice:

» At the start of the implementation decide the number of environments to use. Keep this to a minimum and
consider sharing environments between tasks. Another technique associated with this is to specify an end date
for each environment. This is the date the environment can be removed from the implementation. This can force
rethinks on the number of environments that are to be used at an implementation and may force sharing.
» For each environment, consider the impact on the hardware and maintenance effort including the following:
» The time and resources it takes to install the environment.
» The time and resources it takes to keep the environment up to date including application of single fixes,
rollups/service packs and upgrades. Do not forget application and management of customization builds.
» The time and resources to maintain the configuration migration and information lifecycle management
facilities for multiple environments, if used at an implementation. This includes the setup and regular
migrations that will be performed.
» The time and resources it takes to backup and restore environments on a regular basis. In some
implementations, having different backup schemes for environments based upon tasks and update
frequency for that environment, i.e. more updated = more frequent backup, may provide some savings.
» The time and resources to manage the disk space for each environment including regular cleanups.
» Environments may be set up so that the database can be reduced to a single database instance with each environment having a different schema/owner. This will reduce the memory footprint of the DBMS on the machine but may reduce availability if the database instance is shut down (all environments are affected). For non-production, most customers create a database instance for each environment or use pluggable databases in Oracle 12c and above.

Using multiple administrators


By default, when installing product a single administrator account (usually referred to as splsys) is used to install and own the product. This is the default behavior of the installation and, apart from specifying a different userid than the default splsys, it is possible to use other userids to own all or individual environments.

For example, if the conversion team wishes to have the ability to start, stop and monitor their own environments, you can create another administrator account and install their copies of product using that userid. This allows the conversion team to control their own environments. If you did not have the ability to use multiple administrators then they may have access to all environments (as you would have to give them access to the splsys account).

One of the advantages of this approach is that you can delegate management of a copy of product to other teams without compromising other environments. Another advantage is that you can quickly identify UNIX resource ownership by user rather than relying on other methods.

The only disadvantage is that to manage all copies of product you will need to log on to the additional administration accounts that own the various copies.
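If a separate administrator account is used, a hedged example of creating one on Linux follows; the account name and description are examples only and your site's user provisioning standards should take precedence:

# Hypothetical example: a second OS account to own the conversion team's environments
useradd -m -c "Conversion environments administrator" splconv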



Checking Java Installation
Note:
For Oracle Utilities Application Framework V4.1 and V4.2, it is possible to use two differing
Java Virtual Machine versions if COBOL is used as it is possible to configure the CHILD_JVM_JAVA_HOME
separately. If this is the case then repeat this process for the CHILD_JVM_JAVA_HOME JVM.

Note:
For Oracle Utilities Application Framework V4.3 and above, only one JDK is supported across the
architecture. Refer to the Installation Guide supplied with your product for versions supported.

When the product is installed, one of the first prerequisites to be verified is the version of Java installed and referenced using the environment variable $JAVA_HOME (or %JAVA_HOME% on Windows). Whilst the product checks this version, it can be checked manually prior to installation (and at any time) using the following commands:

$JAVA_HOME/bin/java -version

Or (on Windows):

%JAVA_HOME%\bin\java -version
For example:

Linux:

#> $JAVA_HOME/bin/java -version

java version "1.X.X_xx"


Java(TM) SE Runtime Environment (build 1.X.0_xx-xxx)
Java HotSpot(TM) 64-Bit Server VM (build xx.xxxxxx, mixed mode)

AIX:

#> $JAVA_HOME/bin/java -version

java version "1.X.0"


Java(TM) SE Runtime Environment (build xxxxxxxx)
IBM J9 VM (build X.X, JRE 1.X.0 IBM J9 X.X AIX ppc64-64 XXXXX (JIT enabled, AOT
enabled)
J9VM - XXXXXXX
JIT - XXXXXXX
GC - XXXXXXX
JCL - XXXXXXX

Windows:

C:\> %JAVA_HOME%\bin\java -version

java version "1.X.X"



Java(TM) SE Runtime Environment (build 1.X.0_XXXX)

Java HotSpot(TM) 64-Bit Server VM (build XX.XX-XXX, mixed mode)

Note:
Verify the java version number and operating mode (32/64 bit) against the Quick Installation Guide provided
with the product.

Location of Installation Logs


When installing the product a log file is written for each component installed. Oracle Utilities Application Framework is one component of the installation; the product itself is a separate installation component.

The log contains all the messages pertaining to the installation process including any error messages for installation
errors encountered. The log is located in the directory the installation was initiated from and the name is in the
format:

install_<product>_<environment>.log
Where:

<product> Product code of the product component you are installing. For example, FW = Oracle
Utilities Application Framework

<environment> Name of the environment that is being installed.
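For example, assuming a hypothetical environment named DEV01, the Oracle Utilities Application Framework component would produce a log named:

install_FW_DEV01.log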

Check this log for any error messages during the installation process.

XML Parser Errors in installation


The Oracle Client is used by the installers and utilities to provide access to the Perl runtime and associated libraries. This is the first configuration question in the installation process.

The Oracle Client can be installed separately (if the product is not installed on a machine containing the Oracle Database software) or an existing ORACLE_HOME can be specified if the Oracle Database software is already installed on the machine (as it contains the Oracle Client in the installation). The value is stored in ENVIRON.INI as the value for the parameter ORACLE_CLIENT_HOME.

Note:
In some versions of Oracle Utilities Application Framework, the 32 bit client MUST also be installed for use with the database installation utilities. In Oracle Utilities Application Framework 4.3.0.4.0 and above, the 64 bit database client is used.

If the Oracle Client or ORACLE_HOME is invalid then the following error will be returned by the installation utilities
(and other installs):

Can't locate XML/Parser.pm in @INC (@INC contains: …
BEGIN failed--compilation aborted at data/bin/perllib/SPL/splXMLParser.pm line 3.
Compilation failed in require at data/bin/perllib/SPL/splExternal.pm line 10.
BEGIN failed--compilation aborted at data/bin/perllib/SPL/splExternal.pm line 10.
Compilation failed in require at install.plx line 25.
BEGIN failed--compilation aborted at install.plx line 25.
Error: install.plx didn't finish successfully. Exiting.


Ensure that the ORACLE_CLIENT_HOME includes the perl subdirectory to rectify this issue.
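A quick hedged check that the perl subdirectory is present, assuming ORACLE_CLIENT_HOME is set in the current shell to the value recorded in ENVIRON.INI (the directory layout is based on a standard Oracle Client installation):

ls -d $ORACLE_CLIENT_HOME/perl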

AppViewer cannot Co-Exist in Archive Mode


The Application Viewer is an optional component that provides a meta data viewer for data dictionary, batch
controls, to do types, javadoc etc. It is primarily designed for use by the developers and key architects at your site1.

If the site decides to move from expanded mode to archive mode (or vice versa) on Oracle WebLogic installations, then when executing initialSetup[.sh] the product may report the following error:

AppViewer.war cannot co exist with AppViewer directory

For archive mode the AppViewer.war is required and for expanded mode the AppViewer directory is used. The error message indicates both exist. This can occur when the expanded mode setting is changed and the initialSetup[.sh] utility is then executed. To resolve this issue, depending on the value of the WEB_ISEXPANDED parameter, the following is recommended:

WEB_ISEXPANDED Value Comments

true Remove or rename AppViewer.war

false Remove or rename AppViewer directory.

Implementing Secure Protocols (https/t3s)


Note:
For customers using Oracle Utilities Application Framework V4.1 and above the use of secure
protocol can be enabled by specifying a HTTPS port using the configureEnv[.sh] -a utility and
specifying a port number under WebLogic SSL Port Number.

Note:
Some of the instructions below recommend changes to individual configuration files. These manual
changes may be overridden by executions of the initialSetup[.sh] utility back to the product defaults.
To retain the changes across invocations of the initialSetup[.sh] utility it is recommended to use
custom templates and/or configuration file user exits. Refer to the Server Administration Guide for more
details of implementing custom templates and/or configuration file user exits.

1 Generally customers do not implement the AppViewer in production.


2 Expanded mode is only available for Oracle WebLogic and Oracle Utilities Application Framework V4.0 and above.



By default, all transmission of data between the various tiers of the product uses the http and/or t3 protocols. Whilst this default is sufficient for the vast majority of customers, some sites wish to implement the secure versions of these protocols for use with the product. The reason for their use is typically to encrypt all transmission of data from the client to the server and within the server tiers themselves.

Note:
Enabling https or t3s may result in higher resource usage due to the resource requirements to encrypt and decrypt data. The extent of the resource usage will vary from platform to platform. It is advised that customers compare performance between secure and non-secure protocols before committing to secure protocols.

To implement the more secure protocol requires a number of changes and additional facilities to be enabled. The
process below outlines the generic process for implementing the secure protocol:

» Obtain a digital certificate for your organization from a trusted certificate authority, or generate a certificate using the JDK keytool utility (a hedged example appears at the end of this process). This is used for the encryption/decryption of data using the protocol.

Note:
The certificate provided with the J2EE Web Application Server installation is to be used for demonstration purposes only. It is highly recommended that an alternative certificate be used for production environments.

» Configure the J2EE Web Application Server SSL support to use the certificate as outlined in the documentation referenced below:

Web Application Server Reference

Oracle WebLogic 10.3.6 Configuring SSL

Oracle WebLogic 12.1.x Overview of Configuring SSL in WebLogic Server

Oracle WebLogic 12.2.x Overview of Configuring SSL in WebLogic Server

» Enable the HTTPS port on your environment using the console provided with your J2EE Web Application Server.
Remember to reference the certificate you processed in the previous step.

Note:
For customers using Oracle WebLogic on Oracle Utilities Application Framework V4.1 and
above the setting for WebLogic SSL Port Number will enable this facility without the need of the console.

Note:
If changes are made to the console then to retain the change across upgrades and service packs it is
recommended to use custom templates or user exits to retain the setting. Refer to the Server
Administration Guide for more details of implementing custom templates.

» Examine the $SPLEBASE/etc/conf directory (or %SPLEBASE%\etc\conf on Windows), unless otherwise indicated, for configuration files that use the protocol:
3 The t3 protocol is only used for sites that have separated the Web Application and Business Application tiers using the Oracle WebLogic platform on
selected versions of the Oracle Utilities Application Framework. The iiop protocol is used for the same scenario but for IBM WebSphere platforms.
4 For Oracle WebLogic customers, refer to the section Configuring Identity and Trust and Roadmap for Securing WebLogic Server for the additional
steps and facilities.



Configuration File Changes

spl.properties Change references to the t3 protocol to t3s, if they exist. Change references to the http protocol to https with the SSL port replacing the HTTP ports.

web.xml Change references to the http protocol to https with the SSL port replacing the HTTP ports.

web.xml.<channel> Change references to the http protocol to https with the SSL port replacing the HTTP ports.

ejb-jar.xml Change references to the http protocol to https with the SSL port replacing the HTTP ports. This file is located under $SPLEBASE/splapp/businessapp/config/META-INF (or %SPLEBASE%\splapp\businessapp\config\META-INF on Windows).

Note:
If these files are changed they may revert to the product template versions across service packs and
upgrades. To retain change across service packs and upgrades it is advised to use custom templates
and/or user exits. Refer to the Server Administration Guide supplied with your product for more details.

» Shutdown the J2EE Web Application Server to prepare to reflect the changes.
» Run the initialSetup[.sh] -w command to reflect the changes into the server files.
» Restart the J2EE Web Application Server.
» Ensure that any Feature Configuration options using the product browser that use the HTTP protocol as part of
their options are also converted to HTTPS and the appropriate port number. Use the Feature Configuration menu
option to check each of them. The Features will vary from product to product and version to version.
» Ensure that any Message JNDI Server provider URLS using the product browser that use the http/t3 protocol as
part of their options are also converted to https/t3s and the appropriate port number.
» Any customization that refers to the HTTP protocol, such as custom algorithms or service scripts, must also be converted from HTTP to HTTPS. Refer to the Java Secure Socket Extension Reference Guide for more information.
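A minimal sketch of generating a self-signed certificate with the JDK keytool utility, suitable for testing only; the alias, keystore name and validity period are placeholders, and production environments should use a certificate issued by a trusted certificate authority as noted above:

# Generates a 2048-bit RSA key pair and self-signed certificate in identity.jks
# (keytool prompts for the keystore password and certificate details)
keytool -genkeypair -alias ouaf_demo -keyalg RSA -keysize 2048 -validity 365 -keystore identity.jks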

Disabling protocols
Note:
For general advice on securing you production system, refer to Fusion Middleware Securing a Production
Environment for Oracle WebLogic Server.

If you are considering using secure protocols then you may want to disable the non-secure protocols (as by default both can be used). This requires configuration on the J2EE Web Application Server to disable the protocols that should not be used.

Note:
The following instructions outline changes to configuration files used by the J2EE Web Application Server
that can be made manually to the configuration files supplied with the product or via the relevant
administration console supplied with the J2EE Web Application Server.

For Oracle WebLogic the following changes need to be made:

» Refer to the Configuring SSL section of the Oracle WebLogic documentation to be familiar with SSL. SSL needs
to be configured and verified for all access modes (including the Administration console) before disabling HTTP.



» Disable the Listen Port (non-SSL) using the facilities provided by the version of the J2EE Web Application Server.
» If the HTTP protocol support allows individual methods to be disabled, it is recommended to disable PUT, DELETE and CONNECT at a minimum.

Note:
The HTTP methods described above are disabled automatically in Oracle Utilities Application Framework V4.1 Group Fix 4 and above.

Note:
Please check that the administration console used at your site does NOT require the POST method for HTTP before also disabling POST.
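Once the non-secure listen port and unwanted HTTP methods have been disabled, a hedged way to verify the behavior from the command line is shown below; the host, port and server context values are placeholders for your environment, and a rejection style response (rather than normal processing) indicates the method is blocked:

# Placeholders: substitute your server, WebLogic SSL port and server context
HOST=myserver.example.com
PORT=6500
CONTEXT=ouaf
# -k skips certificate verification (test use only), -i shows the response headers
curl -k -i -X PUT "https://$HOST:$PORT/$CONTEXT/"
curl -k -i -X DELETE "https://$HOST:$PORT/$CONTEXT/"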

Using Oracle WebLogic Domain Templates


Note:
The domain templates described in this section are only available for the Linux platform at the current
release.

In Oracle Utilities Application Framework 4.3.x and above, a number of Oracle WebLogic domain templates have
been provided to simplify the creation of the Oracle WebLogic domain for the installation. These templates can be
used with the Domain Creation wizard supplied with Oracle WebLogic. The templates are located in the
$SPLEBASE/tools/domaintempates directory. The following templates are supplied:

Template Usage

Simple A simple template with a single server housing administration and the product. This template is suitable for non-production environments.

Complex A more complex template with a simple cluster for the product and separate administration server. This template is
suitable for an initial setup for a production system. Once established it is recommended to use the Oracle WebLogic
console to extend the domain according to your individual requirements.

The names of the domain templates adhere to the following naming convention:

Oracle-Utilities-<template>-Linux-<version>.jar
Where
<template> Domain Template type (Simple or Complex)

<version> The version of Oracle WebLogic the template is optimized for. Ensure the correct version is used to ensure
optimization.
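For example, the Simple template built for a hypothetical Oracle WebLogic 12.2.1 installation would be named Oracle-Utilities-Simple-Linux-12.2.1.jar.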

The use of domain templates simplifies the native installation process as outlined in Native Installation Oracle
Utilities (Doc Id: 1544969.1) available from My Oracle Support and the Installation Guide supplied with the product.

General Best Practices


This section outlines some general practices that have been successfully implemented at various product sites.

Limiting production Access



One of the guiding principles at successful sites is that production access is restricted to the processing necessary to run their business. This means that other non-mainstream work, such as ad-hoc queries, is either very limited or NOT performed on production at all. This may sound logical, but a few sites have allowed access to production from inappropriate sources, which has had an adverse impact on performance.

For example, it is not appropriate to allow people access to the production database through ad-hoc query tools (e.g. SQL Developer, SQL*Plus etc.). The freestyle nature of these tools can allow a single user to wreak havoc on performance with a single inefficient SQL statement.

The database is not optimized for such unexpected traffic. Removal of this potentially inefficient access can typically improve performance.

Regular Collection and Reporting of Performance Metrics


One of the major practices that successful customers perform is the regular collection of performance statistics,
analysis of the statistics and reporting pertinent information to relevant parties within the organization as well as
Oracle. Collection of such information can help identify bottlenecks and badly performing transactions, as well as
help understand how the product is being used at your site. They offer proof of both good and bad performance and
typically allow sites to gauge the extent of any issue.

The product contains a number of collection points in the architecture that are useful for real time and offline
collection of performance related data. Information on the collection points are documented in the Performance
Troubleshooting Guideline Series (Doc Id: 560382.1) whitepapers available from My Oracle Support. Using the
guide, decide which statistics are important to the various stakeholders at your site, decide the frequency of
collection and format of any output to be provided. Use your sites Service Level Agreement (SLA), if it exists, for
guidance on what to report.

Refer to Product Monitoring Capabilities for more information.

Respecting Record Ownership


In Oracle Utilities Application Framework, the concept of ownership of records is used on each row in the database for configuration tables. A data element was added to indicate the owner of the object and is used to protect key data supplied with the product from alteration or deletion. It is used by the online system to prevent online users accidentally causing critical data failures. The owner is also used by the upgrade tools to protect the data from deletion.

The ownership of the record determines what you can do with that record:
» Framework - If the record is owned by Framework then implementation teams cannot alter or delete the record
from the database as it is deemed critical to the running of the Framework. This is usually meta-data deemed
important by the Framework team. For example the user SYSUSER is owned by the Framework.
» Product - If the record is owned by the product (denoted by the product name or Base) then some changes are permitted but deletion is not permitted, as the record is necessary for the operation of the product. The amount of change will vary according to the object definition.
» Customer Modification - If the record is owned by Customer Modification then the implementation has added
the record. The implementation can change and delete the record (if it is allowed by the business rules).
Basically you can only delete records that are owned by Customer Modification. All other records are maintained by
various utilities supplied with the product as part of upgrade and patch deployments.

It is possible to alter or delete the records at the database level, if permitted by database permissions, but doing this
will produce unexpected results so respect the ownership of the records.



Backup of Logs
By default, product removes existing log files from $SPLSYSTEMLOGS (or %SPLSYSTEMLOGS% for Windows platforms) upon restart. This is the default behavior of the product but may not be desirable for effective analysis as the logs disappear.

To override this behavior the following needs to be done:

» A directory needs to be created to house the log files. Most sites create a common directory for all environments on a machine. The size allocation of that directory will depend on how long you wish to retain the log files. It is generally recommended that logs be retained for post analysis and then archived (according to site standards) after processing to keep this directory relevant. Typically customers create a subdirectory under $SPLAPP (or %SPLAPP% for Windows platforms) to hold the files.
» Set the SPLBCKLOGDIR environment variable in the .profile (for all environments) or $SPLEBASE/scripts/cmenv.sh (for individual environments) to the location you specified in the first step (see the sketch after this list). For Windows platforms the environment variable can be set in your Windows profile or using %SPLEBASE%\scripts\cmenv.cmd.
» Logs will be backed up at the location specified in the format <datetime>.<environment>.<filename>
where <datetime> is the date and time of the restart, <environment> is the id of the environment (taken from
the SPLENVIRON environment variable) and <filename> is the original filename of the log.
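For example, a minimal sketch for retaining logs, assuming a shared backup directory under $SPLAPP (the directory name is an example only):

# One-off: create a common backup location under $SPLAPP
mkdir -p $SPLAPP/logbackup
# Added to .profile or $SPLEBASE/scripts/cmenv.sh so it is set before each restart
export SPLBCKLOGDIR=$SPLAPP/logbackup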
Once the logs have been saved you must use log retention principles to manage the logs under SPLBCKLOGDIR to meet your site's standards. Most sites archive the logs to tape or simply compress them after post processing the log files (see Post Process Logs for more details on post processing).

Post Process Logs


The logs written by the various components of product provide valuable performance and diagnostic information.
Some sites have designed and developed methods to post process those logs to extract important information and
then report on it to relevant parties.

If the logs are retained by your site (see Backup of Logs for details on this process), then consider post processing the logs on a regular basis before they are archived or deleted permanently. One approach is to extract the important information from the logs and load the extracted data into an analysis repository for regular and trend reporting. The diagram below illustrates the process.

Figure 1 – Post Processing Logs

Details of the logs written by the product are documented in the Performance Troubleshooting Guideline Series (Doc
Id: 560382.1) whitepapers available from My Oracle Support. Use these guides to determine what data to extract
from the logs for post processing.
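A minimal post-processing sketch, assuming logs have been backed up under $SPLBCKLOGDIR as described in Backup of Logs; the error count produced and the compression step are examples only, and the glob assumes the original log filenames end in .log:

# Summarize error counts per archived log, then compress the processed files
for f in $SPLBCKLOGDIR/*.log; do
  echo "$f errors: $(grep -c -i error "$f")" >> $SPLBCKLOGDIR/error_summary.txt
  gzip "$f"
done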



Note:
Customers using the Application Management Pack for Oracle Utilities can use the log viewing and filtering
capability within Oracle Enterprise Manager to query log entries.

Check Logs For Errors


One of the most important tasks for a site is to regularly track errors output into logs. Whenever an error occurs in product, an error record is written to the appropriate log for analysis. Some sites regularly check these logs for errors and, using the information in the log, address the error condition.

Figure 2 – Filtering Logs

Viewing and checking for errors on a regular basis can detect trends and common problems and quickly reduce the number of errors that occur. The Performance Troubleshooting Guideline Series (Doc Id: 560382.1) whitepapers available from My Oracle Support outline the logs and the error conditions contained within those logs.
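A minimal sketch of such a check, assuming the logs reside under $SPLSYSTEMLOGS as described earlier; the case-insensitive match is deliberately broad and may need refining for your product's log format:

# Reports the number of error entries per log file
grep -i -c error $SPLSYSTEMLOGS/*.log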

Optimize Operating System Settings


One of the most important configuration settings for product is the operating system itself. The Installation Guide provided with your product highlights the fact that the operating system parameters MUST be set to optimal values for the product to perform optimally. Some sites have experienced large improvements in performance by heeding this advice. Sites that have decided to ignore this advice have experienced poor performance until the settings were corrected.

Typically, the optimization of the operating system is performed during the implementation and uses the following
principles:
» The value of an individual operating system setting is the maximum value of any product on that machine. For example, typically if Oracle is installed on the same machine, the values for those products are used. The settings used in this way are usually sufficient for the other products on that machine.
» If the machine is dedicated for a particular product or tier, then refer to the documentation in the installation guide
and the particular vendor's site for further advice on setting up the operating system in an optimal state.
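A hedged example of checking two commonly tuned limits on Linux/UNIX for the product administrator account; the values required are site and product specific, so compare the output against the Installation Guide rather than treating any particular number as correct:

ulimit -n   # maximum open file descriptors for the current session
ulimit -u   # maximum user processes for the current session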

Optimize connection pools


One of the settings that will affect performance is the size of the connection pools at each layer in the architecture. Insufficient pool sizes can cause transaction queues, resulting in unnecessary delays. Conversely, setting the pool sizes too high can cause higher than usual resource usage on a machine, also adversely affecting performance. A balance therefore needs to be struck for optimization.

During the implementation the size of the connection pools is determined and configured (with relevant growth
tolerances) depending on the usage patterns and expected peak/normal traffic levels. The goal, typically, is to have
enough connections available at normal traffic levels to minimize queuing and also have the right tolerances to cater
for any expected peak periods. Therefore, it is recommended:



» Set the number of initial connections to the normal number of connections expected. Remember this is not the
number of users that are connecting but the expected number of concurrent connections under normal load.
» Set the tolerances for pool growth (usually a maximum pool size and a connection increment) to the peak load
expected at any time. This tolerance will have to be tracked to determine the optimal level. Do not be tempted to
set this to a very large value as memory and network bandwidth calculations are usually dependent on the values
specified and wastage of resources needs to be minimized.
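For example, if monitoring shows roughly 40 concurrently active transactions under normal load with peaks of around 100, a hypothetical starting point would be an initial pool size of 40 and a maximum of 100 with a small growth increment, refined over time from the pool statistics.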
The product has up to three connection pools to configure:
» Client connections – These are the number of active connections supported on the Web Application Server from
the client machines. Remember that in an OLTP product the number of connections allocated is always less than
the number of users on the system. It needs to be sufficient to cater for the number of actively running
transactions at any given point of time. Refer to Configuring the Client Thread Pool Size for more information
about pool sizing.

Note:
Remember it is possible to set different client connection pools per channel. For example, using Work Managers you can limit online and/or web service calls.

» Database connections – These are the number of pooled connections to the database. The Framework holds these connections open so that the overhead of opening and closing connections is minimized. For Version 2.x of the product, the number of connections allocated is dictated in each individual web application's hibernate.properties file using UCP or JDBC managed connection pools.
The figure below illustrates the connection pools available for each version of the Oracle Utilities Application
Framework:

Figure 3 – Connection Pools by version of the Oracle Utilities Application Framework (Client connections between the client and the Web Application Server/Business Application Server, and Database connections, via Hibernate/JDBC/UCP, to the Database Server)

Refer to the Server Administration Guide provided with your product for advice on the configuration and monitoring
of the connection pools.

Read the available manuals


Note:
Due to the ISV licensing of Web Application Servers, there may not be as much detail as for other platforms.
Refer to the vendor’s site for more detailed information.



The Oracle Utilities Application Framework product includes a set of documentation that should be downloaded with
the software and read as part of the implementation and support of product. The following technical documentation
is available on the distribution web:
» Installation Guide – Installation documentation for the base product including supported platforms and required
patches.
» Server Administration Guide – Documentation on how to configure and operate the server components of the
product.
» Batch Server Administration Guide5 - Details of the configuration settings and common operations for the batch
component of product.
» Developer documentation – This is detailed documentation on the customization aspects of the implementation
including standards for implementations. This is supplied with the online documentation and with the Oracle
Utilities SDK. The ConfigTools documentation is included in the online documentation.

Technical Documentation Set Whitepapers available


Apart from the product based documentation there are a number of whitepapers that provide specialist and supplemental information for use during and after implementation. The table below lists the currently available technical documentation as well as the Knowledge Base Id within My Oracle Support where the documentation resides:

Doc Id Document Title Contents

560382.1 Performance Troubleshooting Guideline Series A series of whitepapers outlining the tracking points available in the
architecture for performance and a troubleshooting guide based
upon common problems.

560401.1 Software Configuration Management Series This series of documents outlines a set of generic processes (that
can be used as part of the site processes) for managing code and
data changes. This series includes documents that cover concepts,
change management, defect management, release management,
version management, distribution of code and data, management of
environments and auditing configuration.
The individual whitepapers are as follows:
» Concepts - General concepts and introduction.
» Environment Management - Principles and techniques for
creating and managing environments.
» Version Management - Integration of Version control and version
management of configuration items.
» Release Management - Packaging configuration items into a
release.
» Distribution - Distribution and installation of releases across
environments
» Change Management - Generic change management processes
for product implementations.
» Status Accounting - Status reporting techniques using product
facilities.
» Defect Management - Generic defect management processes for
product implementations.
» Implementing Single Fixes - Discussion on the single fix
architecture and how to use it in an implementation.
» Implementing Service Packs - Discussion on the service packs
and how to use them in an implementation.
» Implementing Upgrades - Discussion on the upgrade process

5 In Oracle Utilities Application Framework V4.3.x and above, this guide has been merged with the Server Administration Guide

16 - Technical Best Practices - Oracle Utilities Application Framework


Doc Id Document Title Contents

and common techniques for minimizing the impact of upgrades.

773473.1 Oracle Utilities Application Framework Security A whitepaper outlining the security facilities in the Oracle Utilities
Overview Application Framework.

774783.1 LDAP Integration for Oracle Utilities Application A whitepaper outlining the common process for integrating an
Framework based products external LDAP based security repository with the framework.

789060.1 Oracle Utilities Application Framework A whitepaper outlining all the various common integration techniques
Integration Overview used with the product (with case studies).

799912.1 Single Sign On Integration for Oracle Utilities A whitepaper outlining a generic process for integrating an SSO
Application Framework based products product with the Oracle Utilities Application Framework.

807068.1 Oracle Utilities Application Framework A whitepaper outlining the different variations of architecture that can
Architecture Guidelines be considered. Each variation will include advice on configuration
and other considerations.

836362.1 Batch Best Practices for Oracle Utilities A whitepaper outlining the common and best practices implemented
Application Framework based products by sites all over the world relating to batch.

970785.1 Oracle Identity Manager Integration Overview This whitepaper outlines the principals of the prebuilt integration
between Oracle Utilities Application Framework Based Products and
Oracle Identity Manager used to provision user and user group
security information.

1068958.1 Production Environment Configuration This whitepaper outlines common production level settings for
Guidelines Oracle Utilities Application Framework products.

1177265.1 What's New in Oracle Utilities Application This whitepaper outlines the changes since the V2.2 release of
Framework V4? Oracle Utilities Application Framework.

1290700.1 Database Vault Integration This whitepaper outlines the Database Vault integration available
with Oracle Utilities Application Framework V4.1 and above.

1299732.1 BI Publisher Integration Guidelines This whitepaper outlines some guidelines for integration available
with Oracle BI Publisher for reporting.

1308161.1 Oracle SOA Suite Integration This whitepaper outlines the integration between Oracle SOA Suite
and the Oracle Utilities Application Framework.

1308181.1 Oracle WebLogic JMS Integration This whitepaper outlines the inbuilt integration between Oracle
WebLogic JMS and the Oracle Utilities Application Framework.

1334558.1 Implementing Oracle ExaLogic and/or Oracle This whitepaper outlines how to cluster an Oracle Utilities Application
WebLogic Clustering Framework based product using Oracle WebLogic based clustering
including specific instructions for Oracle ExaLogic.

1375600.1 Oracle Identity Management Suite Integration This whitepaper outlines integration between the product and
components of Oracle Identity Management Suite.

1375615.1 Oracle Utilities Application Framework This whitepaper outlines common security requirements and outlines
Advanced Security how the security within the product and components of Oracle
Identity Management Suite can be used to implement that
requirement.

1474435.1 Oracle Application Management Pack for Oracle This whitepaper outlines the features and functions of the Oracle
Utilities Overview Application Management packs available for Oracle Enterprise
Manager.

17 - Technical Best Practices - Oracle Utilities Application Framework


Doc Id Document Title Contents

1506855.1 Integration Reference Solutions This whitepaper outlines all the integrations with Oracle technology
possible with Oracle Utilities Application Framework with solution
strategies.

1544969.1 Installing OUAF natively on Oracle WebLogic A step by step guide to installing products within Oracle WebLogic
natively.

1558279.1 Oracle Service Bus Integration This whitepaper describes direct integration with Oracle Service Bus
including the new Oracle Service Bus protocol adapters available.
Customers using the MPL should read this whitepaper as the Oracle
Service Bus replaces MPL in the future and this whitepaper outlines
how to manually migrate your MPL configuration into Oracle Service Bus6.

1561930.1 Using Oracle Text for Fuzzy Searching This whitepaper describes how to use the Name Matching and fuzzy
operator facilities in Oracle Text to implement fuzzy searching using
the @fuzzy helper function available in Oracle Utilities Application
Framework V4.2.0.0.0 and above

1606764.1 Audit Vault Integration This whitepaper describes the integration with Oracle Audit Vault to
centralize and separate Audit information from Oracle Utilities
Application Framework products. Audit Vault integration is available
in Oracle Utilities Application Framework 4.2.0.1.0 and above only.

1643845.1 Private Cloud Planning Guide This whitepaper outlines the recommended architecture for
implementing Oracle Utilities applications on Oracle's Private Cloud.

1644914.1 Migrating XAI to IWS This whitepaper outlines the features of Inbound Web Services
(IWS) which replaces the XML Application Integration (XAI)
functionality. It covers how to configure and migrate from XAI to IWS.

1682436.1 ILM Planning Guide This whitepaper outlines the Information Lifecycle Management
based product data management solution to help minimize storage
costs for Oracle Utilities Application Framework based products.

1929040.1 ConfigTools Best Practices This whitepaper outlines techniques to implement customizations
using the ConfigTools functionality of Oracle Utilities Application
Framework.

2014161.1 Keystore Configuration This whitepaper outlines how to use the keystore functionality of the
Oracle Utilities Application Framework including processes for
changing key values and maintaining the keystore.

2014163.1 Oracle Functional/Load Testing Advanced Pack This whitepaper outlines the Oracle Application Testing Suite based
for Oracle Utilities Overview testing solution for Functional and Load Testing available for Oracle
Utilities Application Framework based products.

2132081.1 Migrating From On Premise To Oracle Platform This whitepaper outlines the process of moving an Oracle Utilities
As A Service product from on-premise to Oracle Cloud Platform As A Service
(PaaS).

2196486.1 Batch Scheduler Integration This whitepaper outlines the Oracle Utilities Application Framework
based integration with Oracle's DBMS_SCHEDULER to build, manage
and execute complex batch schedules.

2211363.1 Enterprise Manager for Oracle Utilities This whitepaper outlines the process of converting service packs to
Whitepaper: Service Pack Compliance allow the Application Management Pack for Oracle Utilities to install
service packs using the patch management capabilities.

6 In Oracle Utilities Application Framework V4.2.0.1.0, Oracle Service Bus Adapters for Outbound Messages and Notification/Workflow are available

2214375.1 Web Services Best Practices This whitepaper outlines the best practices of the web services
capabilities available for integration.

This documentation is updated regularly with each release of product with new and improved information and
advice. Announcements of updates to whitepapers may be tracked via The Shorten Spot.

Using Automated Test Tools


Some sites around the world use third party testing tools for performance and regression testing. While the product is
open in terms of the standards it uses, not all test tools are able to simulate the exact expected traffic. In choosing
an automated testing tool to use with the product, the following must be supported:
» Support for HTTP – The automated test tool must be able to trap HTTP traffic, as this is the traffic used by the
product. If the tool supports HTTPS, and you intend to use the HTTPS protocol, be careful as support for HTTPS
varies greatly with most testing tools.
» JSP Support – product uses JSP coding to perform most functions. A tool that can leverage this technology will
enable screens to be recognized.
» Support simulation of IE caching – The product client utilizes the Internet Explorer cache to locally hold an
image of the screen for performance reasons. The automated test tool needs to be able to simulate this behavior,
otherwise results will not reflect reality.
» Support Pop up screens – The product utilizes pop up windows for some lists and some searches as well as
confirmation and error messages. The automated test tool needs to be able to support the use of these to
adequately simulate product transactions.
» Valid calls – Ensure that the test tools simulate valid calls to the product. A valid call is a call that the browser
user interface issues to the web server or a call that the Web Service component will accept. An invalid call that is
sent by a test tool to the product may result in unpredictable results. Check EVERY call is valid (try them with
browser user interface to verify the call) and fix any invalid calls.
Oracle Utilities offers a set of Oracle Application Testing Suite components for functional, regression and load
testing. Refer to Oracle Functional/Load Testing Advanced Pack for Oracle Utilities Overview (Doc Id: 2014163.1)
from My Oracle Support for more information.

Custom Environment Variables or JAR files


Implementations of the product sometimes use third party java classes or third party tools to perform specialist
functions. Sometimes these tools require additional configuration settings that can be integrated into the
infrastructure provided by the product. For example, if you use a third party jar file to be called by the product then
you will need to add it to the CLASSPATH to ensure it is picked up by the runtime.

Luckily, there is a feature that allows custom environment variables settings and other commands to be run after the
splenviron.sh script (or splenviron.cmd on Windows) has been executed.

To do this, create a cmenv.sh script (or cmenv.cmd on Windows) in the $SPLEBASE/scripts directory
(%SPLEBASE%\scripts on Windows) with the commands you want to execute. For example, if an implementation
uses AXIS2 jar files to call web services, place the AXIS2 jar files in a central location (e.g. /axis/lib in
this example) and create the cmenv.sh/cmenv.cmd script with the lines:

export CLASSPATH=/axis/lib/axis.jar:$CLASSPATH
or

set CLASSPATH=c:\axis\lib\axis.jar;%CLASSPATH%
When splenviron.sh script (or splenviron.cmd on Windows) runs it will look in the scripts directory for the
existence of the cmenv.sh script (or cmenv.cmd on Windows) and executes it.

In addition, it is possible to do this WITHOUT adding the cmenv.sh script (or cmenv.cmd on Windows). Set the
CMENV environment variable to the location of a script containing the above commands BEFORE running any
product command or the splenviron.sh script (or splenviron.cmd on Windows).

The CMENV facility is for global changes as it applies across all environments and the cmenv.sh/cmenv.cmd
solution is per environment. You can use both as CMENV is run first and then cmenv.sh/cmenv.cmd.
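
For instance, a global settings script referenced via CMENV could be used as follows. This is an illustrative sketch
only; the script location and environment name shown are hypothetical:

# Point CMENV at a (hypothetical) global settings script before invoking
# splenviron.sh or any other product command.
export CMENV=/u01/custom/global_env.sh
splenviron.sh -e DEV01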

Note:
It is possible, using this technique, to manipulate any environment variable used by the product but this is
not recommended.

Secure default userids


There are a number of default users (and associated default passwords) that are supplied with the installation of
product. It is recommended that the default users and their passwords be altered according to the site security
standards. Refer to the Security Guide provided with the product for the list of supplied default users.

Consider different userids for different modes of access


Note:
It is not possible to configure product to use different database accounts for access. All modes of access
will share the relevant pool of database connections as a single database user.

In Oracle Utilities Application Framework version 4.0.1 and above the authorization userid is
available as the CLIENT_IDENTIFIER on Oracle database sessions.

By default the application is configured to use either SYSUSER, SPL or IWS to access the product for online, web
services and background processing. This means any audit or staging records are associated with a common
userid. Some implementations have created additional userids to use as a filter for reporting, traceability and
auditing purposes. The following guidelines may be used in this area:

» Create a different userid for integration transactions. This allows tracking of integration within the architecture. It is
also possible to assign each transaction a different userid for integration, as it is passed as part of the transaction,
but most customers consider this overkill.
» Create a different userid for each background interface. This allows security and traceability to be tracked at a
lower level.
» Create a generic userid for mainstream background processes. This allows tracking of online versus batch
initiation of processes (especially To Do, Case and Customer Contact processing).

Note:
Remember that any product user must be defined to the product as well as the authentication repository.
In Oracle Utilities Application Framework 4.2.x and above, there is a separation between authentication
and authorization identifiers.

Don’t double audit

The product has an auditing facility that is soft configured. The facility can be enabled by configuring the auditing
parameters (location of the audit data, audit rules etc) against the meta data definitions of the tables. This ensures
that any online or web service based updates are audited according to those rules. Auditing is used to track online
changes to critical entities.

The financial component of product already has a separate auditing facility, as all customers generally require it. Any
changes to financial information such as payments, adjustments, bills etc are registered in the Financial Transaction
tables. Therefore enabling auditing on those entities is not required and constitutes double auditing (i.e. auditing
information is stored in two places).

While the impact of the double auditing may be storage related, enabling auditing on bills, for example, can have a
performance hit on online bills. Customers with large numbers of bill segments per bill (i.e. several hundred) have
experienced negative performance impact during online billing when double auditing is enabled on financial entities.
This does not affect batch performance as auditing is not used in batch.

Use Identical Machines


The flexibility of the technology used by the product allows the ability to mix-and-match different hardware for a
configuration. While this may be attractive and allow for some innovative solutions, it makes overall manageability
and operations harder. Hence, it should be avoided.

Having identical hardware allows for ease of stocking spare parts, better reproducibility of problems (both software
and hardware), and reduces the per platform testing cost. This cost, in many cases, will surpass the savings from
reusing existing disparate hardware.

Regularly restart machines


It is generally a good practice to restart servers periodically. This recovers slow memory leaks, temporary disk space
build-up, or other hidden problems that may manifest themselves only when the server is up for such a long
duration. This is a simple way to avoid unexpected or unexplained failures.

Most hardware vendors have recommendations on optimal time intervals to restart machines. Some vendors
strongly encourage regular restarts for maintenance reasons. Check with your vendor for specifics for your platform.

Avoid using direct SQL to manipulate data


Note:
Issuing SQL data manipulation language (DML) statements other than SELECT statements directly
against base tables can cause data integrity to be compromised and can invalidate your product warranty.
All data update access should be through maintenance objects that ensure data integrity is maintained.

Unless the outcome can be verified as correct, you should not use ANY direct SQL statement against the product
database as you may corrupt the data and prevent the product from operating correctly.

All the data maintenance and data access in the product is located in the Maintenance Objects. The Maintenance
Objects validate ALL changes against your site's business rules and the rules built into the product. If you are using
the objects to manipulate the data then integrity is guaranteed as:

» All the validations including business rules, calculations and referential integrity are contained within the
Maintenance Objects.
» The Maintenance Object performs a commit when all validations are successful. If any validation fails, the
whole object is rolled back to a consistent state. In background processing, a commit is performed after a number
of Maintenance Objects have been processed (known as the Commit Interval). At that point the last commit point
is registered on the Batch Control for restarting purposes. If the background process fails between commit points,
the database is rolled back to the last commit point.
» All access modes (online, web services, batch/background processing) from code supplied with the product use
the Maintenance Objects for processing. This means that integrity is guaranteed across all modes.
» Any customizations (algorithms etc) using the Oracle Utilities SDK should be using the Maintenance Objects.
» Using incorrect SQL may violate any of the validations and even make the system unusable.
» If you have to manipulate data within the product, use one or more of the following provided methods:
» The browser user interface.
» Inbound Web Services.
» Conversion Toolkit.
» Software Development Kit.

Minimize Portal Zones not used


In the Oracle Utilities Application Framework, portals were introduced to allow sites to decide which zones, and in
what sequence, should appear for different user groups. For performance reasons, it is recommended that you configure portal
preferences to collapse zones that are not needed every time a portal is displayed.

The system does not perform the processing necessary to build collapsed zones until a user expands the zone, so
configuring them as initially collapsed improves response times. This is especially relevant for the To Do zones that
may take a while if the number of To Do records is excessive.

Routine Tasks for Operations


After the implementation of the product has been completed there is a common set of tasks that IT groups perform
to maintain the system. The table below lists these tasks:

Task Comments

Perform Backups Perform the backup of the database and file system using the site procedures and the tools
designated for your site.

Post Process Logs Check the log files for any error conditions that may need to be addressed. Refer to Post Process
Logs and Check Logs For Errors for more details.

Process Performance Data Collate and process the day's performance data to assess against any Service Level targets. Identify
any badly performing transactions.

Perform Batch Schedule Execute the batch schedule agreed for your site. This will include overnight, daily, hourly and ad-
hoc background processes.

Rebuild Statistics Oracle recommends that the database statistics for the product schemas be rebuilt on a regular basis
so that SQL access is optimized.

File Cleanup On a regular basis, the output files from the background processes and logs will need to be
archived and removed to minimize disk space usage.

Storage Manage Data not required The Oracle Utilities Application Framework features can use Information Lifecycle Management to
minimize storage. Refer to Information Lifecycle Management for more details.

Run Cleanup Batch Jobs There are a number of background processes that remove staging records that have been already
successfully processed. Refer to Removal of Staging Records for more details.

Note:
The tasks listed above do not constitute a comprehensive list of what needs to be performed. During the
implementation you will decide what additionally needs to be done for your site.

Typical Business Day


One of the patterns experienced at sites is the notion of a common definition of a business day. Typically during the
implementation the business day is defined for planning purposes. It defines when the call center is at peak or off-
peak, when background processing can be performed and when backups are performed during the business day.

The figure below illustrates a simplified model of a typical customer business day:

[Diagram: Monitoring, Batch (Overnight / Daily, Ad-hoc and Hourly / Overnight), Backups and Online (Off Peak / Peak / Off Peak) activity bands across a 24 hour day]
Figure 4 – Example Typical Business Day

Note:
The above diagram is for illustrative purposes only and could vary for your site.

Typically a business day contains the following elements:

» There is a peak online period where the majority of call center business is performed. Typically this is performed
in business hours varying according to local custom.
» There is a call center off peak period where the volume of call center traffic is greatly reduced compared to the
peak period. Typically in call centers, which operate 24x7, this represents overnight and weekends. At this time
the call center is reduced in size (usually a skeleton shift). Some sites do not operate in non-peak periods and rely
on automated technology (e.g. IVR) to process transactions such as payments etc.
» Backups are either performed at the start of the peak period or the end of the peak period. The decision is based
upon the risk of failure of the background processing and its impact on online processing. The product
specific background processes can be run anytime but avoiding them during peak time will maximize the available
computing resources to the successful processing of call center transactions. The backup at the end of the peak
period is the most common pattern amongst product customers.
» Background processes are run at both peak and off peak times. The majority of the background processing is
performed at off-peak times to maximize the computing resources to the successful completion of the background
processing. The background processing that is run at off peak times is usually to check ongoing call center
transactions for adherence to business rules and process interface transactions ready for overnight processing.
» Monitoring is performed throughout both peak and off peak times. The monitoring regime used may use manual
as well as automated tools and utilities to monitor compliance against agreed service levels. Any non-compliance
is tracked and resolved.
The definition of the business day for your site is crucial to schedule background processing and set monitoring
regimes appropriate for the traffic levels expected.

Login Id versus Userid

Note:
This facility is available in Oracle Utilities Application Framework V4 and above only.

In the past releases of the Oracle Utilities Application Framework the userid that could be used to login was
restricted to 8 characters in length. In Oracle Utilities Application Framework V4 and above, it is possible to use a
user identifier of up to 256 characters in length.

In Oracle Utilities Application Framework V4 and above the concept of a Login Id is supported. This attribute is
used by the framework to authenticate the user. For backward compatibility the 8 character userid field is still used
for auditing purposes internally. Therefore both Userid and Login Id should be populated. They can be different or
the same values.

Note:
The Login Id can be changed after creating the user identity to support name changes, acquisitions etc.
The short User Id is not changeable as records in the product already use this value.

The Login Id can be set manually, via Oracle Identity Manager or set in a class extension to auto generate a value.

Figure 5 – Login Id

Hardware Architecture Best Practices


Note:
There is a more detailed discussion of effective architectures in the Oracle Utilities Application Framework
Architecture Guidelines (Doc Id: 807068.1) whitepaper available from My Oracle Support. Refer to that
document for further advice.

The product can be run on various combinations of hardware based architectures. When choosing an architecture
that is best suited to a site there are a number of key factors that must be considered:
» Cost – When deciding a preferred architecture, the total cost of the machine(s) and infrastructure needs to be
taken into consideration. This should include the ongoing costs of maintenance as well as power costs.
» IT Maintenance Effort – When deciding a preferred architecture, the manual or automated effort in maintaining
the hardware in that architecture needs to be factored into the solution.
» Availability – One of the chief motivations for settling on a multi-machine architecture is requiring the architecture
to support high availability. When deciding a preferred architecture, the tolerance and cost of availability needs to
be factored into the solution.

Single Server Architecture


If the site is cost sensitive and/or the availability requirements allow it, then having all the architecture on a single
machine is appropriate. This is known as the single server architecture. This configuration is popular with some sites
as:
» The cost of the hardware can be minimal (or at least very cost effective).

» Maintenance costs can be minimized with the minimal hardware.
» Virtualization software (typically part of the operating system or third party virtualization software) can be used to
partition the machine into virtual machines.
The one issue that makes this solution less than ideal is the risk of unavailability due to hardware failure. Customers
that choose this solution, typically address this shortcoming by buying a second machine of similar size and using it
for failover, disaster recovery as well as non-production. In essence, if the primary hardware fails then the backup
machine assumes the responsibility for production till the hardware fault is resolved. In this case, additional effort is
required to keep the secondary machine in synchronization with the primary.

The diagram below illustrates the single server architecture:

Figure 6 – Example Single Server Architecture

Simple Multi-tier Architecture


One of the variations on the single server architecture is the simple multi-tier architecture. In this hardware
architecture, the database server and Web Application Server/Business Application Server are separated on
different machines. For product V1.x customers, you can also separate the Web Application and Business
Application Servers.

This is chosen by customers who want to optimize the hardware for the particular tier (settings and size of machine)
and therefore separate the maintenance efforts for each server. For example, Database Administrators need only
access the Database Server to perform their duties and set the operating system parameters optimized for the
database.

Unfortunately the solution can have a higher cost than the single server solution and still does not address the
unavailability of any machine in the architecture. Customers that have used this model adopt a similar solution to the
single server architecture (duplicate secondary machines at a secondary site) but also have the option of having
both machines in the architecture being the same size and shifting the roles when availability is compromised. For
example, if the database server fails, the Web Application Server can be configured to act as a combination of the
Database Server and Web Application Server.

The figure below illustrates the Simple Multi-Tier Architecture:

Figure 7 – Simple Multi-Tier Architecture

Machines in this architecture can be the same size or different sizes depending on the cost/benefits of the various
variations. Typically customers use a smaller machine for the Web Application Server as compared with the
database server.

Multiple Web Application Servers


To support higher availability for the product, some sites consider having multiple Web Application servers. This
allows online users to be spread across machines and in the case of a failure be diverted to the machine that is
available. To achieve this, the site must use a load balancer (see Load balancers discussion later in this document).
At the time of failover the load balancer will redirect traffic to the available server. This is made possible as the
product is stateless.

The Web Applications Servers are either clustered or managed. Refer to the discussion in the Clustering or
Managed? section of this document for advice.

This architecture is quite common as it represents flexibility as one of the Web Application Servers can be dedicated
to batch processing in non-business hours making the architecture more cost effective. Typically the Web
Application Server software is shutdown to allow batch processing to use the full resources of the machine while
allowing users (usually a small subset) to process online transactions.

The only drawbacks with this solution are a potential higher cost than a multi-tier solution and the potential impact of
database unavailability. Customers that use this architecture overcome the potential unavailability of the database
by either using a secondary site to act as the failover or using one of the Web Applications in a failover database
server role. The latter is less common, as most customers find it more complex to configure, but it is
a possibility with this architecture.

The figure below illustrates the Multiple Web Application Server Architecture:

Figure 8 – Example Multi-Application Server Architecture

Machines in this architecture can be the same size or different sizes depending on the cost/benefits of the various
variations. Typically customers use a smaller machine for the Web Application Server as compared with the
database server.

High Availability Architecture


The most hardware intensive solution is where all the tiers in the architecture have multiple machines for high
availability and distribution of traffic. The solution can vary (number of machines etc) but have the following common
attributes:
» There is no single point of failure. There is redundancy at all levels of the architecture. This excludes redundancy
in the network itself, though this is typically out of scope for most implementations.
» The number of servers will depend on segmentation of the traffic between call centers, non-call centers,
interfaces and batch processing. It is possible to reuse existing servers or setup dedicated servers for different
types of traffic.
» Availability can be managed with either hardware based solutions, software based solutions or a valid
combination of both.
» The number of users will dictate the number of machines to some extent. Experience has shown that a large
number of users tend to be better served, from a performance and availability point of view, by multiple machines.
Refer to the What is the number of Web Application instances do I need? section for a discussion on this topic.
» The Web Applications Servers are either clustered or managed. Refer to the discussion in the Clustering or
Managed? section of this document for advice.
» Database clustering is typically handled by the clustering or grid support supplied with the database management
system.
This solution represents the highest cost from both a hardware and a maintenance perspective. Historically
customers with large volumes of data or specific high availability requirements have used this solution successfully.

The figure below illustrates the High Availability Architecture:

Figure 9 – Example High Availability Server Architecture

Failover Best Practices


Failover occurs when a server in your architecture becomes unavailable due to hardware or software failure.
Immediately after the failure, the active components of the architecture route the transactions around the unavailable
component to an alternative or secondary component (on another site) to maintain a level of availability. This routing
can be done automatically through the use of high availability software/hardware or manually by operators.

The Oracle Utilities product architecture supports failover at all tiers of the architecture, using either hardware or
software based solutions. Failover solutions can be varied but a few principles have been adopted successfully by
existing customers:

» Failover solutions that are automated are preferable to manual intervention. Depending on the hardware
architecture used the failover capability can be automated.
» Availability goals play a big part in the extent of a failover solution. Sites with high availability targets tend to favor
more expensive, comprehensive hardware and software solutions. Sites with lower availability (or no goals) tend
to use manual processes to handle failures.
» Failover is built into the software used by the products (though it may entail an additional license from the relevant
vendor). For example, Web Application Server vendors have inbuilt failover capabilities including load balancing,
which is popular with customers.
» Hardware vendors will have failover capabilities at the hardware or operating system level. In some cases, it is an
option offered as part of the hardware. Sites use the hardware solution in combination with a software based
solution to offer protection at the hardware level. In this case, the hardware solution will detect the failure of the
hardware and work in conjunction with the software solution to route the traffic around the unavailable component.
» Failover is made easier to implement for the product as the Web Application is stateless. The users only need
connection to the server while they are actively sending or receiving data from the server. While they are inputting
data and talking on the phone they are not consuming resources on the machine. For each transaction the
infrastructure routes the calls across the active components of the architecture.
» At the database level the common failover facility used is the facility provided by the database vendor. For
example, Oracle database customers typically implement RAC. Failover configuration at the database is the least
used by existing sites, as the cost of having additional hardware is usually prohibitive (or at least not cost
effective).

» Sites wanting to have failover and disaster recovery but cannot afford both consider a solution which combines
both. In this case, the disaster recovery configuration is used as a failover for non-disasters.
For any failover solution to be effective, the site typically analyses all the potential areas of failure in their
architecture and configures the hardware and software to cover that eventuality. In some cases, sites have chosen
NOT to cover eventualities of extremely low probability. Using hardware Mean Time between Failure (MTBF) values
from hardware vendors can assist in this decision.

When designing a failover solution then the following considerations are important:
» Determine what the availability goals are for your site.
» Determine the inbuilt failover capabilities of the hardware and software that your site is using. This may reduce
the cost of implementing a failover solution if it is already in place.
» List all the components that need to be covered by a failover solution. Review the list to ensure all aspects of
"what can fail?" are covered.
» Design your failover solution with all the above information in mind so that you can automate it (within reason) for your
site. Ensure the solution is simple and reuses already available infrastructure to save costs.
Commonly sites use the following failover techniques in the architecture:

Tier Common failover Solution

Network Load Balancer (hardware for large numbers of users; software based for others). Consider
redundant load balancers for "no single point of failure" requirements.

Web Application Server/ Business Use inbuilt clustering/failover facilities unless load balancer is doing this. Consider hardware
Application Server solutions for batch or interface servers.

Database Server Use inbuilt failover facilities in database unless hardware solution is more cost effective.

Online and Batch tracing and Support Utilities


The Oracle Utilities Application Framework provides a set of utilities to aid in capturing information for support. Refer
to My Oracle Support Doc ID 1206793.1 (Master Note for Oracle Utilities Framework Products - Online and Batch
tracing and Support Utilities) for details and training on using these utilities to provide critical information to help
expedite support requests.

General Troubleshooting Techniques


Whilst the troubleshooting features of the product are documented in detail in the online help, Performance
Troubleshooting Guideline Series (Doc Id: 560382.1) whitepapers available from My Oracle Support and other
manuals, there are a number of techniques and guidelines that can be used to help identify problems:
» Check the logs in the right order – The log files are usually the best spot to look for errors as any error is
automatically logged to them by the product. The most efficient method is to look at the logs from the bottom up,
as if the error appears in the lower ranks of the architecture that is more likely where the error occurred7. Typically
you look for records of type ERROR in the following logs (located in $SPLEBASE/logs/system); a simple
command line scan is illustrated after the table:

Log files Comments

spl_service.log Business Application Server log. In some versions of the Oracle Utilities Application Framework this
log does not exist as it is included in the spl_web.log. Errors in here can be service or database
related.

7 The theory is that the first place the error occurs is the most likely candidate tier.


spl_xai.log/spl_iws.log Web Services Integration also known as XML Application Integration (XAI) or Inbound Web
Services log. This log file is exclusively used for the XAI Servlet. More detail can exist in the
xai.trc file if tracing is enabled.

spl_web.log Web Application Server log. This is typically where errors from the browser interface are logged. If
errors are repeated from the spl_service.log then the issue is not in the Web Application
Server software but in the Business Application or below.

Note:
There are other logs that are related to the J2EE Web Application Server used that exist in this directory
or under the location specified in the J2EE Web Application Server.
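
For instance, a quick way to scan these logs for ERROR records from the command line is shown below. This is
illustrative only and assumes a Linux/Unix installation:

# Scan the product system logs for errors and show the most recent matches
grep ERROR $SPLEBASE/logs/system/*.log | tail -50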

» First error message is usually the right one – When an error occurs in the product, it can cause other errors.
The first occurrence of any error is usually the root cause. This is more apparent when a low level error
occurs which ripples across other processes. For example, if the database credentials are incorrect then the first
error will be that the product cannot connect to the database but other errors in the product will appear as meta
data cannot be loaded into various components. In this case fixing the database error will correct the other errors
as well.
» Not all errors are in fact errors – The product will issue errors if components are missing but it is able to
overcome this issue. For example, if meta-data is missing the system may resort to using default values. In most
cases this means the product can operate without incident but the cause should be resolved to ensure correct
behavior.

Note:
In some versions, such errors are reported as a WARNING rather than an ERROR.

» Tracing can help find the issue – The product includes trace facilities that can be enabled to help resolve the
error. This information is logged to the logs above (and other server logs) that can be used for diagnosis as well
as for support calls. Refer to Online and Batch tracing and Support Utilities for more information about these tools.
» There are usually a common set of candidates – When an error occurs there are a number of typical
candidates for causing issues:
» Running out of resources – The product uses the resources allocated to it that are available on the machine. If
some capacity is reached in the physical machine (memory or disk space are typical resource constraints) or
logical, via configuration, such as JVM memory allocations, then the product will report a resource issue. In some
cases, the product will directly report the problem in the logs but in other cases it will be reported indirectly. For example, if
the disk space is limited then a log may not be written which can cause issues.
» Incorrect configuration – If the product configuration files or internal configuration are incorrect for any reason,
they can cause errors. A common example of this is passwords which either are wrong or have expired. File
paths are also typical settings to check.
» Missing metadata – The product is meta-data driven. If the metadata is incorrect or missing then the behavior of
the product may not be as expected. This can be hard to detect using the usual methods and typically requires
functionality testing rather than technical detective work.
» Out of date software – All the software used in the solution, whether part of the product or infrastructure, has
updates, patches and upgrades to contend with. Upgrading to the latest patch level typically can address most
issues.
Refer to the Performance Troubleshooting Guideline Series (Doc Id: 560382.1) whitepapers available from My
Oracle Support for more techniques and additional advice.

Overriding System Date
By default, the product obtains the date from the database server using an internal call. This is to ensure a
consistent recording of the date and time across the system, irrespective of the time zone of the user. It is possible
in some products, to adjust this time to the local time using the user time zone feature of Oracle Utilities Application
Framework V4.0 and above.

It is possible to set a date for testing purposes at the system level as well as at user level.

Note:
This facility is not recommended for use in production environments. It is only recommended in non-
production environments such as testing and development.

Note:
To avoid issues with data values, after configuring the override at the system or user level, the
setting spl.runtime.options.allowSystemDateOverride must also be set to true in the
spl.properties file for the online, business application server and/or batch components.
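
For example, the relevant entry in the spl.properties file is simply the following (shown here as an illustration):

# Allow the System Override Date (and user level override) to take effect
spl.runtime.options.allowSystemDateOverride=true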

System Wide
To set a specific date for an environment, for testing purposes, a Feature may be added using the Feature
Configuration menu option. The feature to use is a General System Configuration feature. You may create a
General System Configuration feature if one does not exist on your environment. The System Override Date option
may be added to this feature and the date specified in international ISO format (e.g. YYYY-MM-DD). An example of
this feature is shown in the figure below.

Note:
Only one General System Configuration feature should exist per environment.

Figure 10 – Adding a feature for System Override Date

Once saved, this date will be used, across ALL users on that environment, instead of the date for online and Web
Service operations8. You may need to flush the online cache to reflect the change across the system. Refer to the

8 In batch there is a standard parameter for all jobs, the Batch Business Date, that performs the same function for that mode.

Server Administration Guides supplied with the product for techniques on how to flush the cache. To reverse the
configuration, remove the System Override Date option from the Feature.

User Specific Date

Note:
This feature was added in Oracle Utilities Application Framework V4.1 and consequently is only
available for that version and above.

Whilst the system override is not appropriate where some users require specific dates, especially for testing, it is
also possible to override the system date per user. This will force the online to use the override value for the user, if
it exists, instead of any system override or system date.

To achieve this, an Override System Date characteristic can be added to the individual user record using the User
menu option as shown in the figure below. As with the System Override, the Characteristic value should be in ISO
format (e.g. YYYY-MM-DD).

Figure 11 – Adding a System Override Date per user

As with the System wide override, the user should refresh the cache to reflect the change. To reverse the
configuration, remove the characteristic from the user record.

Transaction Timeouts
By default, transactions are subject to time limits imposed at the infrastructure level (at a network or database level).
In most cases, sites do not impose any explicit time limits.

The idea of time limits on transactions is to prevent any long running online or web service transaction from causing
inefficiencies in traffic volumes across your configuration. To use time limits effectively, the site would want to set
limits on a number of key common transactions to keep a cap on resource usage across the enterprise.

In Oracle Utilities Application Framework V4.1 and above, a set of optional configuration settings has been
added to allow sites to specify global time limits and transaction level time limits on individual calls within a
transaction.

Note:
For Oracle Utilities Application Framework V4.1 it is enabled using patch 10356853.

The concepts of timeouts contained in the product are as follows:

» Timeouts may be set globally or overridden on individual services, business objects, business services or service
scripts.
» Timeouts are tracked throughout the transaction execution but the timeout is explicitly checked prior to any
database access to ensure the timeout has not been reached. If the transaction has multiple database access
statements, the current cumulative transaction time is checked at each statement. If no database access is made
by the transaction then timeouts are not checked and therefore not enforced.

» Timeout tolerances are taken from the parent object. For example, if a service script calls business objects
and/or business services then the time allocated to the service script applies across all the calls not the timeouts
of the individual business objects and/or business services.
» The timeout values only apply to online transaction and web services calls, therefore batch transactions are not
subject to using this timeout facility.
» When a timeout limit is exceeded, an appropriate error message is returned and the transaction is rolled back to
the last commit or savepoint.
» Timeout values are not precise. The transaction will not be terminated at the exact timeout value. When timeouts
occur, additional processing for transaction rollback and networking9 may need to be executed after the timeout
occurs, before returning control back to the originating user.
Most of the transactions in the product are subject to one timeout. If the transaction is part of a portal with multiple
transactions, each individual zone is subject to its own timeout. If timeouts are not consistent across a portal then
some zones may be timed out whilst others continue processing.

Configuration of timeouts
To configure the use of timeouts for online or Web Service traffic a number of configuration settings need to be
specified in configuration files within the Web Application Server and Business Application Server. The parameters
that control timeouts are as follows:

Setting Comment

ouaf.timeout.business_object.<bocode> Maximum amount of time (in seconds) business object <bocode> can
execute before timeout. This timeout will override
ouaf.timeout.business_object.default when executing this specific
business object. The values for <bocode> may be any valid business object.

ouaf.timeout.business_object.default Maximum amount of time (in seconds) an invokeBO call can last. All queries
issued by the business object will have a time limit equal to the remaining
execution time of this business object call. This is a general timeout and can be
overridden for an individual business object, if desired.

ouaf.timeout.business_service.<bscode> Maximum amount of time (in seconds) business service <bscode> can
execute before timeout. This timeout will override
ouaf.timeout.business_service.default when executing this specific
business service. The values for <bscode> may be any valid business service.

ouaf.timeout.business_service.default Maximum amount of time (in seconds) an invokeBS call can execute before
timeout. All queries issued by the business service will have a time limit equal to
the remaining execution time of this business service call. This is a general
timeout and can be overridden for an individual business service, if desired.

ouaf.timeout.query.default Maximum amount of time (in seconds) an individual query can run if it is not
restricted by a service or some other timeout. For instance, if the online
application is issuing a query which is not part of a service call, a script or a
Business Object read, the query will be affected by this timeout. Otherwise, the
timeout will be set to the remaining time of the logical transaction it belongs to
(service call, script, Business Object execution).

ouaf.timeout.script.<scriptname> Maximum amount of time (in seconds) service script <scriptname> can execute
before timeout. This timeout will override ouaf.timeout.script.default
when executing this specific service script. The values for <scriptname> may
be any valid service script.

9 Such as load balancing


ouaf.timeout.script.default Maximum amount of time (in seconds) a service script call can execute before
timeout. All queries issued by the script will have a time limit equal to the
remaining execution time of this script call. This is a general timeout and can be
overridden for an individual service script, if desired.

ouaf.timeout.service.<service> Maximum amount of time (in seconds) service <service> can execute
before timeout. This timeout will override ouaf.timeout.service.default
when executing this specific service. The values for <service> may be any
valid application service.

ouaf.timeout.service.default Maximum amount of time (in seconds) a service call can execute before timeout.
All queries issued by the service will have a time limit equal to the remaining
execution time of this service call. This is a general timeout and can be
overridden for an individual service, if desired.

For example, in general:

ouaf.timeout.service.default=300
For example, to specify values for logical transactions that can be overridden with values specific for a service, script
or business object operation:

ouaf.timeout.service.default=300

#specified a value for individual service

ouaf.timeout.service.CILTPD=600
In the example above a value specific for service CILTPD was specified.

To implement the transaction timeouts for your site then the following files need to be updated:

Specify the value of ouaf.timeout.query.default in the Web Application server spl.properties file:

$SPLEBASE/etc/conf/root/WEB-INF/classes/spl.properties (Linux/Unix)
or

%SPLEBASE%\etc\conf\root\WEB-INF\classes\spl.properties (Windows)
Specify the other timeout parameters in the Business Application Server spl.properties file:

$SPLEBASE/etc/conf/service/spl.properties (Linux/Unix)

or

%SPLEBASE%\etc\conf\service\spl.properties (Windows)

Note:
Changing this file manually may lose changes across upgrades. If the changes need to be preserved
across upgrades then it is recommended to implement a custom template for this file. Refer to the Server
Administration Guide supplied with your product for details of this process.

Implementing guidelines

The timeout specification is flexible and powerful but can be over-configured. The following guidelines
should be taken into account when using this facility (a consolidated example follows the list):
» Specify global defaults (*.default parameters) according to your site expectations and any service level
agreements.
» Consider the same common value for global defaults unless the customizations at your site dictate different
values for each object type default.
» Only set overrides on specific objects/services/scripts on the subset of these objects that are critical to your site
and require explicit different timeout values from the relevant defaults. In other words, avoid setting overrides on
all services individually.
» In portals, each zone may have different timeout values. Take this into account when designing timeout values.
» Review timeout values with the business on a regular basis to see if the value currently set is appropriate for the
site.
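
Putting these guidelines together, a site might end up with a small consolidated set of entries similar to the sketch
below. The values and the CILTPD override are illustrative only and should be derived from your own service level
agreements:

# Global defaults (illustrative values, in seconds)
ouaf.timeout.service.default=300
ouaf.timeout.script.default=300
ouaf.timeout.business_object.default=300
ouaf.timeout.business_service.default=300
ouaf.timeout.query.default=300
# Override only the small subset of objects that genuinely needs a different limit
ouaf.timeout.service.CILTPD=600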

Java Garbage Collection Guidelines


One of the major features of java is its ability to perform garbage collection. In the past programmers would have to
manage their memory allocation, and more importantly, memory de-allocation to manage the memory footprints of
their products. If this was not performed then the program could run out of physical or logical memory and stop
executing (i.e. crash). Java helps developers by automatically clearing up as much memory as it can when it hits a
critical threshold. It basically looks for inactive bits of code or objects and cleans them up to free up memory for
active processes. Hence the term garbage collection, which implies that java collects (i.e. removes) garbage (i.e.
inactive) code and objects.

Java performs garbage collection automatically when it detects a memory tolerance has been reached. This has the
advantage that it can help prevent out of memory conditions within the java virtual machine. Note that this is not a
guarantee: if the java virtual machine is extremely active then garbage collection may not be able to free enough
memory to prevent an issue. This is usually a rare occurrence but can happen.

Whilst garbage collection is an advantage in terms of memory management it has a darker side. When garbage
collection is triggered, all activity within the java virtual machine freezes for the garbage collection to do its work
efficiently. In most cases, the amount of time it freezes is short but if the garbage collection activity is frequent then
the garbage collection time can impact performance of a java application.

Whilst the default garbage collection regime shipped with the java virtual machine is adequate for
most sites, it is possible to tweak the garbage collection tolerances and algorithms that are used to speed up the
garbage collection process or make sure that garbage collection is not called as often as it may be using the
defaults.

The key here is that, whilst you cannot avoid garbage collection happening, you want to minimize its
impact by minimizing the frequency with which it occurs and, when it does occur, minimizing its duration.

There are a few guidelines to consider when tuning java garbage collection for applications:
» Consider using Parallel Garbage Collection – In later versions of java, the ability to garbage collect using
multiple CPU's in parallel was introduced to minimize the time garbage collection was executing.
» Tweaking Garbage Collection tolerances – By default the java virtual machine has a specific set of tolerances
for initiating garbage collection. This may be able to be tweaked to decrease the frequency and duration of the
garbage collection. The java virtual machine documentation supplied by your virtual machine vendor will outline
the options to set this.
» Tweaking memory parameters – By default java allocates regions of memory within a java virtual machine to
manage the lifecycle of classes and objects. If a product is using more of one region over another, this can force
garbage collection to occur. Each vendor of the java virtual machine offers a set of options to tweak the garbage
collection algorithm and allocation of memory regions. Refer to Memory Management in Java for more information
about java memory.
» Watch your CPU usage – When tweaking the garbage collection parameters or memory settings you need to
watch the amount of CPU used at the time of garbage collection. This will ensure that solving one problem, the
impact of garbage collection on performance, does not introduce a new issue, excessive CPU usage.
For more information about garbage collection and tuning it, refer to Java Garbage Collection Tuning Guide. Also
review the Garbage collection information supplied by the vendor of the java virtual machine for specific advice for
your platform.
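
As an illustration only, the kinds of options discussed above might be added to the JVM startup options as follows.
The heap sizes and collector choice are examples rather than recommendations, and the exact flags depend on
your JVM vendor and version:

# Illustrative HotSpot settings: a fixed heap plus the parallel collector and
# GC logging so the frequency and duration of collections can be reviewed
export JAVA_OPTIONS="$JAVA_OPTIONS -Xms2048m -Xmx2048m -XX:+UseParallelGC -verbose:gc"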

Security Configuration
One of the features of the product is the ability to configure the authentication component of security. As with other
J2EE based applications the product supports the standard set of settings and configurations inherent in the J2EE
standard. The web.xml file controls the behavior of the authentication method used in the login-config section
of the configuration file.

There are a number of settings and they have additional configuration settings that must be adhered to:

Setting Comment

BASIC This setting uses the operating system login dialog as the product authentication dialog. The setting must also
indicate the realm-name used. This setting is useful for basic environments and also can be used by some Single
Sign On solutions that detect this setting.

FORM This is the default setting where the product (or implementation) supplies a JSP/HTML based login dialog (and
error dialog) in the form-login-config section. This is the most common option for the product. Implementers
can implement their own forms according to site standards if desired.

CLIENT-CERT This is more advanced two way SSL based authentication. This is typically used for Single Sign On
implementations and additional settings are typically required, including setting up SSL, to implement secure
authentication using certificates. Refer to CLIENT-CERT support for more details.

For a more in-depth discussion of this topic refer to the Oracle WebLogic security documentation.

Typically most customers use the default FORM based login option.
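
By way of illustration, a FORM based login-config section of the web.xml file typically looks something like the
following sketch. The page locations shown are placeholders rather than the values shipped with the product:

<login-config>
  <auth-method>FORM</auth-method>
  <form-login-config>
    <!-- Placeholder pages; the product supplies its own login and error pages -->
    <form-login-page>/loginPage.jsp</form-login-page>
    <form-error-page>/formLoginError.jsp</form-error-page>
  </form-login-config>
</login-config>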

Shortcuts When Processing Templates


When updating the configuration settings on a server via the ENVIRON.INI, the initialSetup[.sh] utility is
needed to reflect all the changes into configuration files via templates. This will automatically update ALL files. It is
possible to shortcut this process and update individual configuration files if the changes are isolated to specific files.

The following commands can be used:

perl processTemplate.plx -t <list of templates>


where
<list of templates> - List of templates10 with comma delimiters to process
For example:

perl processTemplate.plx -t jarservice.xml.template,spl.properties.template

10 A full list of templates is listed in the Server Administration and Batch Administration Guides for the product.

Note:
Use of processTemplate.plx can lead to mismatched configuration files with EAR/WAR files. If any
change affects the EAR/WAR file it is recommended to run the initialSetup[.sh] utility to apply the
configuration changes.

Using Oracle JRockit


Note:
For products supporting COBOL extensions, Oracle JRockit can only be used for the online. The
CHILD_JVM_JAVA_HOME must be set to Oracle/Sun JDK.

Note:
Oracle JRockit is only available on a subset of Oracle Utilities Application Framework platforms. Refer to
the JRockit OTN web site for more information.

Note:
These instructions are for Oracle Utilities Application Framework V4.0.x and Oracle Utilities Application
Framework V4.1.x only.

Note:
Oracle JRockit has been replaced by Oracle HotSpot JDK for Java 7 and above. Oracle JRockit is not
supported in Oracle Utilities Application Framework V4.2.x and above.

It is possible to use the Oracle JRockit for the product using the following configuration process:

» Install the latest version of JRockit JDK Real Time and optionally, JRockit Mission Control as per the JRockit
installation guide on the machine running the environment.
» Logon to the machine running the environment and execute the splenviron command to set the environment
variables.
» Shut down the environment to make the changes.
» Execute the configureEnv[.sh] -i option to set the installation options.
» In option 1, change the Web Java Home Directory (JAVA_HOME) to the location of the JRockit installation.
» Execute the initialSetup[.sh] to reflect the changes in the various files.
» If native installation is being used, the EAR files for SPLService and SPLWeb need to be redeployed.
» Change the startWebLogic[.sh] script (in embedded installations it is located in splapp; in native installations it is located under the domain location) to add the management options shown below (the -Xmanagement line). For example:
set JAVA_OPTIONS=%JAVA_OPTIONS% -Dweblogic.system.BootIdentityFile=…\splapp\security\boot.properties

set JAVA_OPTIONS=%JAVA_OPTIONS% -Xmanagement:authenticate=false,autodiscovery=true,ssl=false

Note:
The -Xmanagement line above is an example for testing purposes only. Alter the options depending on your security requirements.

» Restart the server.


Oracle JRockit is now enabled and Oracle JRockit Mission Control can be used to monitor the JVM.

Accessing JMX for Oracle WebLogic


Note:
Refer to Accessing WebLogic MBeans with JMX for information about accessing JMX.

Oracle WebLogic supports an extensive JMX interface to expose runtime statistics. To enable this facility the
following configuration process should be performed:

» Enable the JMX Management Server in the Oracle WebLogic console at the splapp → Configuration → General → Advanced Settings option. Enable both Compatibility Mbean Server Enabled and Management EJB Enabled (this enables the legacy and new JMX interfaces). Save the changes and restart the server to reflect the change.

Note:
For Oracle Utilities Application Framework V4.2.0.0.0 and above, this facility is enabled by default.

In the startup of the Oracle WebLogic server, in the $SPLSYSTEMLOGS/myserver.log (or %SPLSYSTEMLOGS%\myserver.log on Windows), you will see the BEA-149512 message indicating the Mbean servers have been started. The message will indicate the JMX URL that can be used to access the JMX Mbeans.
The URL is in the format:

service:jmx:iiop://<host>:<port>/jndi/<mbeanserver>
where:

<host> Oracle WebLogic host name

<port> Oracle WebLogic port number

<mbeanserver> Valid Values:

weblogic.management.mbeanservers.runtime

weblogic.management.mbeanservers.edit

weblogic.management.mbeanservers.domainruntime
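
For example, for a hypothetical host and port the runtime Mbean server URL would be of the form below (use the actual values reported in the BEA-149512 message for your environment):

service:jmx:iiop://ouafhost01:6500/jndi/weblogic.management.mbeanservers.runtime
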
Ensure that you execute the splenviron[.sh] utility to set the appropriate environment variables for the desired
environment.

Execute the following jconsole command to initiate the connection to the JMX Mbean server

Windows:



jconsole -J-Djava.class.path=%JAVA_HOME%\lib\jconsole.jar;%WL_HOME%\server\lib\wljmxclient.jar -J-Djmx.remote.protocol.provider.pkgs=weblogic.management.remote
Linux/Unix:

jconsole -J-Djava.class.path=$JAVA_HOME/lib/jconsole.jar:$WL_HOME/server/lib/wljmxclient.jar -J-Djmx.remote.protocol.provider.pkgs=weblogic.management.remote
To connect to the JMX classes, specify the Remote Process URL from the previous steps (i.e. service:jmx:iiop...) using the credentials specified for the Oracle WebLogic console.

The JMX classes are now available.

For example:

Figure 12 – Example Oracle WebLogic metrics

Refer to the Oracle WebLogic JMX MBean Reference for more information.

UCP JMX Interface



Note:
This facility is available in Oracle Utilities Application Framework V4.2.0.0.0 and above.

Note:
For backward compatibility purposes, this setting is disabled by default.

Note:
If your site is using Data Sources, then this section is not applicable.

The JMX interface for the product can be extended even further by exposing the UCP connection pooling metrics to
track statistics for database connections. To implement this facility, the following process should be implemented:

» In the environment to implement this change, copy the hibernate.properties.web.template to cm.hibernate.properties.web.template within the templates directory.
» Edit the cm.hibernate.properties.web.template and change the hibernate.ucp.jmx_enabled from
false to true. For example:
hibernate.ucp.jmx_enabled=true
» Execute the initialSetup[.sh] -t option to apply the custom template and replace the hibernate.properties within $SPLEBASE/etc/config/service (or %SPLEBASE%\etc\config\service on Windows). A command sketch is shown after this list.
» Restarting the online product will enable the statistics.
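
A minimal Linux/Unix command sketch of these steps is shown below. The environment name is hypothetical, the template edit is normally done with a text editor, and the exact utilities and paths should be confirmed against the Server Administration Guide for your release:

splenviron.sh -e DEV01
cd $SPLEBASE/templates
cp hibernate.properties.web.template cm.hibernate.properties.web.template
(edit cm.hibernate.properties.web.template and set hibernate.ucp.jmx_enabled=true)
initialSetup.sh -t
(restart the online product, for example via spl.sh or the native Oracle WebLogic facilities)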
As the UCP is in the server path for the J2EE Web Application Server, the UCP metrics are accessible from the J2EE JMX capability. Once connected, the UCP JMX statistics are available. For example:



Figure 13 – Example UCP metrics

Refer to the UCP Admin Java documentation for more information on the statistics tracked.

Using Java Mission Control with Oracle Utilities Application Framework


Note:
This facility is only available in Oracle HotSpot SDK version 7u40 and above and is only compatible with Oracle Utilities Application Framework V4.2.0.2.0 and above.

Note:
This facility is not appropriate for Web Services tracking.

Note:
JMX for online and batch MUST be enabled for this facility to work. Refer to the Server Administration Guide for more details.

The Java Mission Control provides developers with low level diagnostics for Java programs. This facility is useful for
diagnosing performance and coding issues in the implementation process. It is possible to use this facility, with the
right version of Java, for online and batch tracking at the Java level. It also can be used with Oracle Enterprise
Manager to perform Live JVM Thread Analysis using the Oracle WebLogic Enterprise Edition Management Pack for
Oracle Enterprise Manager.

To use this facility there are two basic options11 that must be added to the command line for Oracle WebLogic and
the Oracle Coherence startup. These options are:

-XX:+UnlockCommercialFeatures -XX:+FlightRecorder
These flags can be added to the product configuration using any of the following techniques:

Scope Installation Instructions

Online Embedded Add the above options to the Web Application Additional Options command line as documented above using configureEnv[.sh] -a in option 51.

Online Native Either:


Add the above options to the setUserOverrides[.sh] script within Oracle WebLogic in the
USER_MEM_ARGS variable (as documented in Using setUserOverrides).
Or
Add the above options to the server java options definition as outlined in the Native Installation
(Doc Id: 1544969.1) or Implementing Oracle ExaLogic and/or Oracle WebLogic Clustering (Doc
Id: 1334558.1).

Batch - Add the above options to the threadpool.*.be templates used by bedit using the
com.ouaf.batch.jvmoptions variable as outlined in the Server Administration Guide.

Once connected, the Flight Recorder and Java Mission Control features are available via the JMX URLs outlined in the Server Administration Guide, and via Eclipse or Oracle Enterprise Manager.
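
For illustration, in a native online installation the options can be appended through setUserOverrides[.sh], and for batch they can be added to the thread pool template property used by bedit. The fragments below are a minimal sketch only; the existing option content is a placeholder and the exact property syntax is described in the Server Administration Guide:

# setUserOverrides.sh (native online installation)
USER_MEM_ARGS="${USER_MEM_ARGS} -XX:+UnlockCommercialFeatures -XX:+FlightRecorder"
export USER_MEM_ARGS

# threadpool.*.be template (batch, maintained via bedit)
com.ouaf.batch.jvmoptions=-XX:+UnlockCommercialFeatures -XX:+FlightRecorder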

Overload Protection
Note:
It is recommended to use the Oracle WebLogic console or WLST to maintain work managers or use the
overload protection features of Oracle WebLogic. Customers using embedded installations can set the
overload protection for the product server in the Installation of the default domain. Refer to the Installation
Guide supplied with the product for details.

By default in Oracle WebLogic the domain uses a global work manager to manage connections. The issue with the default global work manager is that the setting effectively allows unlimited connections. In non-production this value is ordinarily never reached, but it can cause issues on production platforms. If the global default is used then the server may experience an out of memory condition before hitting the global connection limit. In Oracle Utilities Application Framework implementations there are a number of ways of addressing this:
» Overload Protection – Oracle WebLogic contains an overload protection setting which tells the server what to do in an overload situation. Typically there is a setting to handle out of memory conditions with two values: no-action (the default) or system-exit. In production, where high availability is typically configured, it is recommended to set this value to system-exit. For more information about overload protection refer to Avoiding and Managing Overload.

11 There are other options that are available that can be used to further filter the result sets. Refer to Running Flight Recorder for options.
» Using application scoped Work Managers – It is possible to implement an application scoped work manager configuration, allocated to product servers, to implement tolerances that avoid out of memory conditions by setting constraints low enough. The value of the constraints will depend on how busy your end user traffic is in relation to the individual servers. The application scoped work manager configuration may use a Max Threads Constraint or a Capacity Constraint12 (see the sketch after this list). For more information about work managers refer to Using Work Managers to Optimize Scheduled Work and Implementing Work Managers.

Note:
It is recommended not to use Execute Queue functionality with Oracle Utilities Application Framework as
that is designed for legacy support. Use of work managers is recommended as an alternative to the
Execute Queue functionality.

» Stuck Thread Handling – As part of the work manager definition it is also possible to specify server specific
stuck thread handling, which directs how WebLogic should handle stuck threads and the tolerances affecting the
condition. It is possible to reuse the server definitions of stuck thread tolerances (default), specify whether stuck
threads are ignored or specify work manager specific tolerances. For more information about stuck thread
handling using work managers refer to Using Work Managers to Optimize Scheduled Work.
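
As an illustration of the constraint concept, a minimal application scoped work manager fragment in the weblogic.xml style is sketched below. The work manager name, constraint name and count are hypothetical and must be sized for your own traffic; as noted above, Oracle recommends maintaining work managers via the Oracle WebLogic console or WLST rather than hand editing descriptors:

<work-manager>
  <name>OUAFOnlineWorkManager</name>
  <max-threads-constraint>
    <name>OUAFMaxThreadsConstraint</name>
    <count>50</count>
  </max-threads-constraint>
</work-manager>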

Resource Management
By default, resources in the product are set to the default tolerances supplied with Oracle WebLogic and the Oracle Database. Whilst these defaults, usually unlimited access, may be appropriate for non-production environments, they may not be appropriate for production environments.

There are a few resource management capabilities that can be used by the product to set appropriate resource
limits:
» Work Manager Support – It is possible to setup Application Scoped Work Managers to specify constraints to
prevent out of memory or overload issues on product servers. These control client connections to the servers to
ensure optimal resource usage on the product servers. Refer to Overload Protection for more information about
this capability.
» Database Resource Plan Support – It is possible to set and manage database resources at various levels using
the Oracle Database Resource Manager. This allows finite control at the database level to resources. For more
information about resource management, refer to Using the Database Resource Manager to Manage Database
Server Resources (Doc Id: 2067783.1) available from My Oracle Support and Managing Resources with Oracle
Database Resource Manager.
» Transaction Timeout Support – The Oracle Utilities Application Framework has a facility that allows global and
service transaction timeouts to be set to help limit resource usage. These provide a method of policing
transactions to operate within acceptable tolerance. Refer to Enabling Service Timings for more information about
this facility.
Setting these tolerances to match your business performance expectations and/or service level agreements will
depend on the traffic usage experienced for your site. Using the available monitoring facilities can help determine
tolerances.

Data Management Best Practices

12 These are the only two constraint types supported in the current release. Theoretically Minimum Threads Constraint is also supported but tends not
to be used by the majority of implementations.



Once a product has been put into production, one of the issues that needs to be managed is the quantity of data that accumulates over time. While storage is relatively cheap compared to the past, maintaining an optimal amount of storage is both cost effective and helps maintain a stable level of performance.

Data management techniques used with products vary according to the types of data stored within the product.

Respecting Data Diversity


One of the most important considerations for a site is to respect the diversity of the data contained in the product being managed. Different types of data require different types of management. Requirements for managing data are typically driven by business practices, industry practices or even government legislation (typically driven by tax requirements).

Product data is typically divided into a number of data types, and each of these data types needs to be managed in the database for a varying length of time as the product typically has different uses for them. In most products the data types can be categorized as follows:

Data Type Typical Composition Typical Management

Configuration Data (a.k.a. Administration data) Data driving the configuration of the product (e.g. menus, rates, security, reference data etc). Maintained by a subset of individuals. Kept indefinitely and only represents a small part of any database.

Master Data Data pertaining to customers/taxpayers such as personal records, addresses, account information, contracts, etc. Maintained by end users. Kept indefinitely but can be driven by government legislation such as privacy laws or industry rules.

Transactional Data Day to day data relating to any interaction or activity against the Master Data (e.g. bills, cases, payments, contacts etc). Data that is still active is retained for operational reasons. Historical data is deleted or archived according to business rules or government legislation.

The table above illustrates the various differences between the types of data and their usual data retention rules.
During an implementation and post implementation, you must be aware of the data types and then plan the data
retention rules accordingly.

Information Lifecycle Management


Note:
The Information Lifecycle Management capability is only available for selected Oracle Utilities Application
Framework based products. Refer to your product documentation to verify its validity.

Note:
This section is an introduction only. Refer to the ILM Planning Guide (Doc Id: 1682436.1) available from
My Oracle Support.

One of the most used techniques for managing data is Information Lifecycle Management. The goal of Information Lifecycle Management is to minimize the storage costs of holding data while still making sure that it is appropriately accessible to the business for business processes.

The fundamental concept behind information lifecycle management is that transactional data has an implied lifecycle
where it goes through a number of stages:



» Data is born when it is created within the product either directly, via an interface or as part of some process.
» Data is active within the business when it needs to be updated or at least available for update for business
reasons using a business process.
» At some stage the data becomes less active with occasional read only access needed by business processes.
» At a later stage the data becomes dormant where the business does not need access to that data any longer for
business processes.
The key to Information Lifecycle Management then is to design a cost effective storage solution around that lifecycle
that is appropriate for the stage the transaction data is in. There are a number of alternatives:
» Compression - Using the various compression options available in the database including Advanced
Compression and Hybrid Columnar Compression13.
» Partitioning - Splitting the transaction table into individual partitions to take advantage of tiered storage based
solutions and physically separating data by usage patterns.
» Dormancy - Removing data using transportable tablespaces and purge processes after it is no longer needed by
the business.

Data Retention Guidelines


One of the most common requirements that must be considered during an implementation and post implementation
is how long to retain data in the active production database. Even though disk space is becoming cheaper over time,
there is always a cost based limit to how much should be stored.

Typically, it is the customer's business practices that dictate the amount of historical data stored in the database at any time. There are therefore a number of key factors that govern data retention:
» Government legislation – Most countries have a legal requirement to have information available in a computer
system. Typically this requirement separates how much should be active and how much should be retained in a
passive medium (e.g. archive or available in a backup format).
» Business requirements - There is usually a business requirement to work on historical data. For example the
business may need to be able to process financial data over a number of years. This requirement typically
dictates the amount of historical data kept.
» Physical capacity of the hardware – At the end of the day any machine used for any software has a physical
limit. This limit is usually based upon business requirement and cost to the business.
» Table Identifiers – All tables in the Oracle Utilities Application Framework based products have identifiers (some
have multiple). The physical key size can be an indicator of the limit of the records that can be kept. It should be
noted that most of the Oracle Utilities Application Framework based products have designed their key sizes to
cover the majority of expected data cases in the field.
» Audit requirements – Typically, each site will have some sort of auditing function, within the company or performed by an independent auditing firm. This auditing capability will expect a certain amount of historical data, directly or indirectly in the product, to adequately operate an audit. This requirement is usually forgotten by most sites until they need it. During an implementation, or soon after, the audit requirements should be clarified and factored into any data retention policy.
It should be noted that the products themselves do not impose any particular data retention policy.

Data Retention tends to apply to specific data types only:


» Transactional data is subject to Information Lifecycle Management as it is the data that grows over time.
» Master data tends to remain in the database for the life of the system, even in a deregulated market, for fraud
prevention purposes. It is rarely managed using Information Lifecycle Management due to its low growth.

13 This option is only available for Oracle ExaData customers.



» Meta-Data is not covered by a data retention policy, as it needs to be present for the product to operate, so it is rarely storage managed.
» Configuration data will vary, as it is wide ranging, but generally it is also rarely storage managed.
In terms of their platform, customers should monitor data growth to reach a decision about archiving, if they wish to do so, or simply removing the data.

Typically, once the status of a record in the staging tables used for interfaces becomes Complete, it becomes redundant data. The data will have been reflected in the main product tables and is not required in the staging tables anymore. Removal of completed records on a regular basis can have storage benefits as well as performance benefits.

Removal of Staging Records


The product uses a staging concept for most of the major interfaces. This involves a process, known as Process X,
to load the staging tables and then a base product background process is run to validate and copy the valid staging
data into the relevant main tables. When records are loaded initially, the status of the records is set to Pending
indicating they are ready to process. Once the relevant base product background process processes them, then the
status is changed to either Completed (for valid records) or Error (for invalid records). Invalid records can be
corrected using the relevant staging online query to manually resolve the error.

This is summarized in the figure below:

Figure 14 – Staging Process Overview

It is assumed that completed staging records are no longer required after a period of time, as the data they contain has been reflected in the main tables. There is no business reason to keep completed staging records for long periods of time.

Regular cleanups of the staging tables to remove completed records will have great performance benefits on
interfaces. Successful sites run the provided purge jobs to improve performance and reduce disk space usage.

To decide when to run these purge jobs and what parameters to pass to them the following is recommended:
» Work out with the business at the site how long they wish to retain completed records. You can stress to them that NO important data is lost in purging completed records as their data is reflected in the main tables.



The value is used for the NO-OF-DAYS batch parameter passed to the job. The value is the number of days, not the number of business days (e.g. a value of 14 for NO-OF-DAYS means 2 weeks).
» For the To Do Purge job, there are additional parameters to decide the specific To Do type to purge, or ALL (DEL-TD-TYPE-CD and DEL-ALL-TD-SW). Work with the business to decide if this job is to be run once (for all To Do types) or multiple times for each To Do type. Successful customers run it to delete all To Do types to reduce the number of jobs to run.
» Decide the frequency based upon the data growth of each table. Ideally these purge processes should be run each business day at the end of the nightly batch schedule to keep the tables at their optimum size, but they should be run at least once a week.

Partitioning
One of the most popular data management techniques is the use of partitioning on tables. Partitioning enables
tables and indexes to be split into smaller, more manageable components.

Partitioning allows a table, index or index-organized table to be subdivided into smaller pieces. Each piece of a database object is called a partition. Each partition has its own name, and may optionally have its own storage characteristics, such as having table compression enabled or being stored in different tablespaces. From the perspective of a database administrator, a partitioned object has multiple pieces which can be managed either collectively or individually. This gives the administrator considerable flexibility in managing partitioned objects. However, from the perspective of the product, a partitioned table is identical to a non-partitioned table; no modifications are necessary when accessing a partitioned table using SQL.

Partitioning has known benefits:


» Divide and Conquer - With partitioning, maintenance operations can be focused on particular portions of tables.
For example, a database administrator could back up a single partition of a table, rather than backing up the
entire table. For maintenance operations across an entire database object, it is possible to perform these
operations on a per-partition basis, thus dividing the maintenance process into more manageable chunks.
» Parallel Execution of SQL – Most databases will sense that the table is partitioned and run SQL statements (including SELECT and INSERT statements) in multiple threads. Each of the partitions can be thought of as an individual table and the database takes advantage of this to parallelize work.
» Pruning – Queries operating on one partition can run substantially faster due to reduced size of the data to
search.
» Partition Availability - Partitioned database objects provide partition independence. This characteristic of
partition independence can be an important part of a high-availability strategy. For example, if one partition of a
partitioned table is unavailable, all of the other partitions of the table remain online and available; the product can
continue to execute queries and transactions against this partitioned table, and these database operations will run
successfully if they do not need to access the unavailable partition.
When using partitioning you should ensure that major processes accessing the table do not cross partition
boundaries. Crossing from one partition to another can cause slight delays as physically the table has been
separated into individual files per partition. This situation may be avoided when designing the partitioning regime for
the table.

The key to success to partitioning is recognizing which tables are candidates for partitioning and what partitioning
scheme to use. Partitioning must be planned and designed into a database to ensure that the partitioning regime is
optimal for your products.

The ideal candidates for partitioning are large tables with a small number of indexes. The benefits of partitioning are
optimal for large tables rather than applying the principles across all tables. The minimal number of indexes is a
criterion to minimize the likelihood of crossing partition boundaries in SQL.



Once the tables are chosen to be partitioned then the next step is to decide the number of partitions to implement.
The rule of thumb is to choose the number of partitions so that any SQL that accesses the table using the indexes
will minimize crossing partition boundaries. If your product is multi-threaded then each thread of the process needs
to remain within a partition. In this case the number of partitions should be equal to the number of threads (or a
divisor). For example, if a major process runs in 10 threads then the number of partitions could be 10, 5 or 2. Each
of the numbers ensures that each thread stays within a partition.

Once the number of partitions is chosen the next step is to decide which partition scheme you can use. Database
vendors have implemented numerous ways of dividing a table into partitions. Each of these schemes (and sometimes combinations of schemes) tells the database how to split the data into the various partitions as well as how to access
the partitions. The most common partitioning scheme used is known as range partitioning where a range of values
(index based) is used to designate the partition a record is placed within. Refer to the partitioning documentation
provided by your database vendor for details of all the different schemes that can be used to partition your table
data.

Table partitioning represents the easiest method of data management and is usually the first data management
technique used before other techniques are considered.

Compression
Note:
Database level compression varies from one database version to another. In some cases, it is included as
an optional component of the database and in other cases, it is a separate option that must be obtained
from Oracle.

A technique that is starting to emerge from the database vendors is compression of data. This can be done at a
database level (global) or a table level and typically requires no changes to a product to implement.

As the data is stored and retrieved it is compressed and decompressed before being passed back to the product. As far as the product is concerned, it is unaware of whether the data is compressed or not. This appeals to database administrators as they can experiment with compression without the need to involve the product developers.

Database systems have not heavily utilized compression techniques on data stored in tables. One reason is that the
trade-off between time and space for compression is not always attractive for databases. A typical compression
technique may offer space savings, but only at a cost of much increased query time against the data. Furthermore,
many of the standard techniques do not even guarantee that data size does not increase after compression.

Over time, database vendors have addressed the trade-off by implementing unique compression techniques. It has come to a stage where there is virtually no negative impact on the performance of queries against compressed data; in fact, compression may have a significant positive impact on queries accessing large amounts of data, as well as on data management operations like backup and recovery. Each database vendor will supply guidelines on the effective use of compression to minimize any overhead for all SQL statements (including INSERTs, UPDATEs etc) and on which tables are the best candidates for compression.

Database Clustering
One of the more advanced features that have emerged as a valid data management technique is the ability for
databases to be clustered. This is a relatively new technique for data management, as most people associate
clustering with availability rather than management of data volumes.



Database clustering provides the ability for a database to be spread across more than one machine while appearing to the product as a single database. The database management system manages all the synchronization and load balancing of transactions automatically. It was designed to support the availability of the database in case of a hardware failure in one of the nodes of the cluster.

Experience within the industry has shown that using the clustering capabilities can also improve performance when large amounts of data are involved. Logically, clustering enables the database to access more processing power by spreading the workload across machines.

This technique is applicable where the volume of the data is impacting database performance. One of the major symptoms is that CPU usage on the database server is consistently high, no matter what tuning is performed at the database and product level. This implies that the database is CPU bound and, while there may be an option to add more CPUs to the server, clustering the data becomes a viable alternative.

While implementing clustering has been made progressively easier with each release of the database management
system, implementing clustering must be planned using the guidelines outlined by the database vendor. Refer to the
documentation provided on clustering by your database vendor.

Backup and Recovery


One of the most critical components of the implementation and ongoing support of the product at a site is the ability to back up the data and software to ensure business continuity. Equally important is the ability to easily restore that data if the need arises.

Typically a site will have a preferred regime and set of tools that is used to achieve backup and recovery of all systems at the site. When implementing the product, this regime and set of tools is typically reused to cater for the product's and the business' needs.

When considering a backup regime for the product, the following should be considered:

» There is nothing within the product technically that warrants a particular approach to backup and recovery. Most customers continue to use their existing approaches.
» There is nothing within the product technically that warrants a particular backup and recovery tool. Most customers use the native tools provided with their platforms, for cost savings, but some customers have purchased additional infrastructure to take advantage of faster backups/recoveries or additional features provided by such tools.
» If your site does not have a backup regime already the following can be considered default industry practice:
» Use Hot Incremental backups on production during the business week to reduce outage times.
» Do a FULL backup (Hot or Cold) once a week at least to reduce recovery times.
» Verify backups after they are taken to reduce risk of delayed recoveries.
» On non-production, consider either the same regime as production or consider regular FULL backups at peak
periods in an implementation.

Client Computer Best Practices


Even though the product is browser based, there are some practices on the client machine that affect performance. This section outlines the client machine practices that have proved beneficial.

Make sure the machine meets at least the minimum specification


As part of the installation documentation for each release of the product, the minimum and recommended hardware for the client is specified. Typically Oracle takes the following into consideration when specifying this information:



» The minimum and recommended hardware as specified by Microsoft for the operating system used for the client.
» A typical set of other applications running on the machine, typically Office style applications.
While all care is taken in specifying the hardware with cost in mind, experience has shown that customers need to review the specification in light of their internal standards.

Internet Explorer Compatibility Mode Settings


The Oracle Utilities Application Framework is compatible with a wide range of versions of Internet Explorer. For Oracle Utilities Application Framework V4.2.x releases and below, the URL used for the product must be defined as using compatibility mode for backward compatibility. There are a number of ways to define the URL as using compatibility mode:

» At runtime the user can add the URL at first login using the browser compatibility mode settings. This is explained
in an article on the Microsoft site.
» If using Internet Explorer 11 then it is possible to set the compatibility from the menu as explained in an article on
the Microsoft site.
» If sites want to implement automatic group policies to define the product URL's using compatibility mode then
refer to the Enterprise Mode article on the Microsoft site.
In Oracle Utilities Application Framework V4.3.x and above, compatibility mode is no longer required.

Popup Blockers
The browser interface to the product uses popup windows for initial searches on some transactions. Commercial and inbuilt popup blockers may interfere with the display of these windows. It is recommended to provide overrides in these blockers for the relevant URLs used for the environments on site.

The popup blocker may block the initial popup search windows on some transactions but may not affect subsequent
searches that are explicitly requested by the end user.

Internet Explorer Caching Settings


Note:
The Oracle Utilities Application Framework supports a wide range of browsers and their respective
versions. Refer to the Installation Guide for details of browser support.

The Internet Explorer settings used must match the recommended settings as outlined in the product Installation Guide, which include:

» Internet Explorer cache settings should be set to Automatically, NOT Every visit to every page, for production use. Certain elements on the browser user interface pages are cached on the client for performance reasons. Incorrect setting of the cache settings in Internet Explorer will increase bandwidth usage significantly and degrade performance, as screen elements will be retrieved on each visit rather than from the cache. The correct setting is shown below:



Figure 15 – Example Cache Setting

» JavaScript must be enabled. The product framework uses JavaScript to implement the browser user interface.
» HTTP 1.1 support must be enabled. If you use a proxy to get to the server, then also check Use HTTP 1.1 through proxy connections.

Figure 16 – HTTP 1.1 Settings

Clearing Internet Explorer Cache


Between upgrades, it is advisable to manually clear the Internet Explorer cache to remove any elements that may still be in the cache but are not applicable to the new version. This is a rare situation, but clearing the cache can prevent stale or inappropriate elements left over from previous versions from being incorrectly displayed.

Optimal Network Card Settings


Typically the manufacturers of NIC devices provide a number of configuration settings to allow further optimization of
network transmit and receive settings. Typically the defaults provided with the card are sufficient for the needs of the
network traffic transmitted and received by the machine.

It may be worthwhile to investigate whether changing the settings can improve performance at your site (particularly the number of network buffers used). Altering the settings may improve performance but may also adversely affect performance (due to higher CPU usage). Typically the majority of customers use the default settings provided by the manufacturer.

Network Best Practices


The product ships data across a network between the clients and the various components of the architecture. This section outlines some of the practices to optimize the network elements of a configuration.

Network bandwidth
One of the most common questions asked about the product is the network footprint of the Oracle Utilities
Application Framework based product. This question is difficult to answer precisely for a number of reasons:
» The amount of data sent up and down the network is dependent on how much change is done by an individual
user at the front end of the product. Only the elements changed by the end user are transmitted back to the
server. The more the user changes the more the data is transmitted. Given the numerous possible permutations
and combinations for data changes at any given time, this can be hard to estimate.
» The Oracle Utilities Application Framework supports partial object faulting. This means the framework only sends data to the client that is being displayed. In screens with more than one tab, the framework only sends the data for the tabs that are accessed by the end user. This means only part of the overall object required by the screen is transmitted. Most users tend to operate on a small number of tabs but this can vary from transaction to transaction.
» All transmission between the client and server are compressed using HTTP 1.1 natively supported compression.
This can reduce the actual size of the data transmission considerably depending on the content of the changes.



» Screen data that can be reused is cached on the client machine. The product takes advantage of the caching
facilities in the HTTP 1.1 protocol and the browser caching functionality. For example, screen definitions and
graphics are stored on the client machine to reduce network footprint. Upon every transmission of a screen
element the data in the cache is tagged with an expiry date to indicate the life of the element in the cache. Use of
client side caching can reduce the network traffic considerably with some customers reporting up to 90%
reduction in network traffic when this caching is enabled.
» To provide an estimate of the network footprint, a range of 10-200k per transaction, on average, is quoted to adequately cover all the aspects outlined above. This value is based upon experience with customers.
It is possible to track network bandwidth using a log analyzer against the W3C standard access.log produced by
your Web Application Server. Refer to the Performance Troubleshooting Guideline Series (Doc Id: 560382.1)
whitepapers available from My Oracle Support for more information about this log.
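
For example, a rough estimate of the total bytes served can be produced from the access.log with a simple script. This is a minimal sketch that assumes the common log format where the response size is the tenth field; adjust the field position to match the log format configured at your site:

awk '{ bytes += $10 } END { printf "Total: %.1f MB\n", bytes/1048576 }' access.log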

Ensure legitimate Network Traffic


One of the major factors in performance is the amount of legitimate traffic on the network. The traffic to and from the product shares the bandwidth with all other traffic on the network. If there is any network congestion then all transactions from all network-based applications will be adversely affected.

Some customer sites have found that traffic that is not legitimate can adversely affect network performance. Traffic
that is considered not legitimate includes:
» Traffic generated from viruses and Trojans – There are a plentiful number of viruses and Trojans on the general Internet that can cause bandwidth issues. Most, but not all, sites have regular virus protection to minimize the impact to their network. While it is not a requirement within the product to have such protection, the industry in general recognizes the need for it.
» Unauthorized large transfers – Large transfers of data can adversely affect performance as it can soak up
bandwidth if the transfer is not configured correctly. There have been instances of large FTP transfers slowing
down traffic on lower bandwidth networks.
Ensuring that only legitimate traffic is on a network can provide greater bandwidth for all applications (including
product) and improve consistency.

Regularly check network latency


In a network, latency, a synonym for delay, is an expression of how much time it takes for a packet of data to get
from one designated point to another. In some usages, latency is measured by sending a packet that is returned to
the sender and the round-trip time is considered the latency. The greatest impact on performance is inconsistent latency.

The latency assumption seems to be that data should be transmitted instantly between one point and another (that
is, with no delay at all). The contributors to network latency include:
» Propagation - This is simply the time it takes for a packet to travel between one place and another at the speed
of light.
» Transmission - The medium itself (whether optical fiber, wireless, or some other) introduces some delay. The
size of the packet introduces delay in a round trip since a larger packet will take longer to receive and return than
a short one.
» Router and other processing - Each gateway node takes time to examine and possibly change the header in a
packet (for example, changing the hop count in the time-to-live field). This is a common cause of network latency.
» Other computer and storage delays - Within networks at each end of the journey, a packet may be subject to
storage and hard disk access delays at intermediate devices such as switches and bridges.



Minimizing latency, or at least ensuring consistent latency, is the goal of most product sites. A discussion of latency and how to measure it is contained in the Performance Troubleshooting Guideline Series (Doc Id: 560382.1) whitepapers available from My Oracle Support.
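
A simple way to spot check latency and its consistency between a client and the application server is an extended ping, looking at the spread between the minimum, average and maximum round-trip times rather than the average alone. The host name below is illustrative only:

ping -c 50 ouafapp01.example.com

(On Windows, use ping -n 50 ouafapp01.example.com.)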

General Networking Guidelines


One of the common areas of configuration issues is setting up incorrect networking for the product. In the past,
networking was simple with host resolution being straightforward. Over the last few years, more complex domain
resolution technologies, proxies, firewalls, demarcation and virtualization have complicated networking to the stage
that simple configuration will not always guarantee successful networking. To help minimize issues with the product
the following guidelines need to be followed:

The product infrastructure, such as the J2EE Web Application Server and Java itself, needs access to the hosts and ports named in the configuration. When specifying the host name in configuration files, ensure the host housing that component can connect (directly or via name resolution) to the host specified (even if it is the local host). The table below outlines what each component needs access to:

Component Needs access to via Networking

Web Application Server (including Web Services) and Business Application Server Database Server14

Batch Database Server

Inbound Web Services Database Server

Ensure ports are available and unique for the host as defined in your firewall. Inability to connect to ports will result
in a failed startup.

If there are issues then consider using localhost as your hostname in the configuration for the relevant components (mainly the Web Application Server and Business Application Server). If using localhost, consider installing the loopback adapter for your operating system. Use of the loopback adapter and localhost is highly recommended if using dynamic server addresses and/or dynamic server names (such as in virtualization)15.

When using multiple network connections, ensure the product uses the correct network connection(s) to operate.

If using CLUSTERED mode in batch, ensure that the multicast protocol is enabled and that the configured multicast address and port are available through your firewall and networking configuration.

Web Application Server Best Practices


The Web Application Server is used by the product to serve the pages to the client and contains a control data cache. There are a number of practices that sites find useful for maintaining the health of the Web Application Server.

Make sure that the access.log is being created


The access.log contains useful information that can be used for tracking bandwidth and usage patterns to make
changes to configuration.

14 The database connection was used to load cache data quickly at startup. In Oracle Utilities Application Framework V4.1, this is not required as cache
loading is performed via the Business Application Server (via Patch 11900153 ).
15 This is recommended for most Oracle products.



One of the key log files for traffic analysis is the access.log. This is a log generated by every hit on the system from the end users. Every element of the screen is logged, asynchronously, including the time and userid. This log must be enabled in the configuration. Refer to the Web Application Server's documentation on how to enable this log.

The log is generated in the W3C common log format and can be analyzed by third party log analyzers for further analysis. A full description of the log, its usefulness and the log analyzers that can read it is documented in the Performance Troubleshooting Guideline Series (Doc Id: 560382.1) whitepapers available from My Oracle Support.

Some customers use the log for various purposes:

» It is possible to track errors and trends from the log using the log analyzers.
» It is possible to parse the log at a low level and determine the number of concurrent users and the users that have
used the system (and interestingly conversely who has NOT used the system).
» It is possible to track flows of individual sessions, known as click streaming, to track the screens and data used for
the screens.
» It is possible to determine the criteria used by users for searches. This is useful for detecting wildcard searching.
This log is useful but it is large so needs to be managed as suggested in Backup of Logs.

Examine Memory Footprint


One of the common experiences for ALL the J2EE Web Application Servers that the product runs upon is that there seems to be a Java Virtual Machine (JVM) limit on exactly how many concurrent online users a server will support. Typically the experience has been that between 300-400 concurrent users are served by each instance of a JVM.

There are a number of techniques that are available to maximize this:


» Increasing the java memory parameters used for the JVM – This can be a configuration setting change or a script
change. Typically customers change the default settings to either 512MB, 1GB or 2GB per JVM.

Note:
In Oracle Utilities Application Framework V4.0 and above the JVM Options can be configured using
parameters. Refer to the Server Administration Guide provided with your product for more details.

» Creating additional servers within the instance to cater for the load.
Customers implement the latter suggestion in the following ways:
» Oracle WebLogic – A server entry for each new server is set up in the same Oracle WebLogic instance. The port number can be the same (if the server is housed on a separate machine, known as clustering) or a different port
number (i.e. managed servers). A proxy is required to have a common connection point and to implement load
balancing. The memory footprint will be the same size for each server.
» IBM WebSphere – A new server is created within the WebSphere instance. The port number can be the same (if
the server is housed on a separate machine, known as clustering) or a different port number (i.e. managed
servers). A proxy is required to have a common connection point and to implement load balancing. The memory
footprint can be different for each server as that is held against the server entry within WebSphere.
Refer to Production Environment Configuration Guidelines (Doc Id: 1068958.1) whitepaper available from My Oracle
Support for more guidelines for production systems for JVM memory settings.

Turn off Debug



One of the development features of the product is the ability to output useful debugging information as part of the running of the application. While this information is useful for development environments, it is not useful for production or other performance sensitive environments.

Most customers change the debug setting to false to disable global debug information. It is possible to debug
individual transactions using the interactive debug facility.

Note:
This requires the Application Descriptors for all applications to be updated.

Load balancers
Oracle Utilities product customers who have more than one Application Server (physical or logical) must use a load
balancer to route the traffic evenly across the available servers. This load balancer can be either software based,
such as a web server with the appropriate plugin from the Application Server vendor, or a hardware based load
balancer (such as BigIp or other Layer 7 switches). Experience has shown that customers with a large number of
users (typically greater than 1500) tend to use hardware load balancers and smaller customers use software based
load balancers.

Using load balancers with the product may not guarantee that load is evenly distributed, as the transactions do not have a consistent resource load factor. The resource load factor for any product depends on the transaction type and the data used in that transaction. For example, search transactions are different from maintenance transactions and the resource usage of any search is dependent on the criteria used. Two executions of the same search will have different response and resource usage profiles. Factored on top of that is the fact that the load on a server is a summation of all the transactions sent to it and that transactions vary from second to second, minute to minute, hour to hour and so on. The best you can do is distribute incoming transactions as evenly as possible and accept that actual resource usage will vary over time.

When installing a load balancer there are a number of algorithms for load balancing offered:

Algorithm Processing Comments

Round Robin Traffic is routed to each server on a rotating basis. This is the most commonly used by implementations and the recommended setting.

Random Traffic is routed randomly to the servers. Not commonly used but may be used if traffic is random enough.

Weighted Round-Robin Allocation Variation on Round Robin that allows support for clusters where all servers are not the same size. Not generally used by implementations.

IP Address Traffic is routed using the client IP address as the identifier, where servers are assigned IP address ranges. Has been used by customers but found to have limitations if used with virtual servers such as Terminal Services or Citrix.

Load Load factors of transactions are measured and used to determine which server is best suited. Not used with the product, as most load factors are inconsistent across transaction invocations.

Typically most customers use Round Robin as it is simple and, given that load is unpredictable, can yield the best results. Most customers understand that for some periods of time the load will not be balanced, but on average the load is relatively balanced. Remember that each transaction's response time is a function of how much data is changed.

If using load balancing the following additional advice is applicable:



» Ensure that the load balancer does not interfere with Internet Explorer caching. This may result in a low cache hit
rate and increase bandwidth used.
» Ensure that the load balancer supports HTTP 1.1 headers to support compression.
» Ensure that the balancer supports Passive and Active Cookie persistence for session cookies. The Web
Application Server uses session cookies, for passing security credentials between the client and the server. The
load balancer must not compromise this facility.
» Ensure that the load balancer supports SSL persistence, if SSL is used, to ensure that encryption and decryption
are not compromised.

Preload or Not?
One of the startup features of the product, from V1.5 and above, is the preloading of pages to save time. This preloading process dynamically rebuilds the screen definitions from the XML meta data on startup. While this setting (by default) enables the startup to pre-build them (instead of building them on first invocation), the startup of the Web Application Server is delayed while the preload process is executing. The startup of the server is delayed until the last of the screens is preloaded.

While the preloading of individual screens is very quick (measured in milliseconds) building all screens (1000+) can
cause significant delays to initial availability AFTER a restart. It is possible to influence the amount of preloading
using two parameters in the Web Application Descriptor called:

» preloadAllPages – This parameter affects how much preloading takes place if preloading is enabled. A value of true preloads every screen in the product. A value of false preloads screens off the Main menu only (the screens the end users will be using).
» disablePreload – This parameter controls whether preload is performed at all. This parameter effectively overrides the preloadAllPages parameter.
The effect of changing the parameters is outlined in the following table:

preloadAllPages disablePreload Effect

true true Pages are not preloaded at all. First invocation of a screen by the first user of that screen loads the screen for all users. Can cause a slight delay in initial screen load for a single user but application startup is quicker.

true false All pages are preloaded including the administration and utilities menus. This setting is not recommended for production as it delays Web Application Server startup unnecessarily.

false true Pages are not preloaded at all. First invocation of a screen by the first user of that screen loads the screen for all users. Can cause a slight delay in initial screen load for a single initial user but application startup is quicker.

false false Default. Pages on the Main menu are preloaded. This delays the startup of each managed server but ensures screens are loaded quicker for ALL users.

Changing these parameters affects availability rather than performance, but they should be considered if availability is critical or you are not using all the screens in the product.

It is recommended that the following settings be implemented if you do not use the entire product or you want
startup to be quicker:

preloadAllPages false

disablePreload true



Note:
This requires the Application Descriptors for all applications to be updated.

Native or Product provided utilities?


The Oracle Utilities Application Framework provides a set of basic utilities to manage (i.e. start and stop) the product. While these utilities are fully functional, they are not mandatory, and some sites prefer to use the native utilities provided by the Web Application Server vendors to start and stop the product.

The reasons that sites use the native utilities are that operations staff are more familiar with them, they offer more options and they typically have a number of interfaces (not just the command line). The utilities provided with the Oracle Utilities Application Framework call the native utilities but use only a subset of their options.

If the native utilities are used then the spl[.sh] utility should only be used to start and stop non-Web Application
Server components.

Hardware or software proxy


You will need to proxy connections if you use clustering or a number of managed servers. The choice of software or
hardware proxy is site specific. Large customers prefer hardware proxies and smaller ones software proxies.

If the implementation uses multiple servers then a proxy is needed to group the servers into a cluster or managed
configuration for load balancing purposes. There are two alternatives for such a proxy:
» Software – Each of the Web Application Servers supported by the product provides a plugin to use an HTTP server such as Oracle Traffic Director, Apache, Oracle HTTP Server, Netscape or IIS as a proxy. Typically the plugin is installed within the HTTP server and configured to define the server addresses and the load balancing scheme.
» Hardware – Increasingly the network router manufacturers are making hardware products that act as network proxies or load balancers (known as Layer 7 load balancers). Hardware such as BigIP, WebSwitch, NetScaler etc are increasingly performing load balancing within intelligent hardware. In this case, you simply configure the servers and ports to a virtual address in the hardware and the load balancing scheme to use.
Customers with multiple servers use either a hardware or software proxy, with larger scale customers favoring hardware based solutions. The only thing to remember with a proxy is to make sure the following are taken into account:

» The proxy server must support the Internet Explorer caching scheme and not disable it or adversely affect its operation. This will increase network throughput.
» The proxy server must support session cookies. It must be configured to support the passing and processing of session cookies as they are used for security tokens in the product. Failure on this point will result in the security dialog being displayed before EVERY screen.

What is the number of Web Application instances do I need?


One of the most common questions for an implementation is: how many Web Application Servers do I need to support the number of users planned to be attached to the product in production? The answer depends on the JVM you are using and its limitations.

Tests and experience have shown that the Java Virtual Machine has an internal limitation on the number of threads that can be safely supported for transactions. This is not a severe limitation but represents the number of active transactions (i.e. users) that can be supported on a Web Application Server at any time.



Tests have shown that this number varies between 300 and 500 users on a single Web Application Server JVM instance. The number varies according to the JVM version used and the vendor that supplies the JVM. This number represents the maximum number of simultaneously active users hitting the Web Application Server at peak time.

The easiest method for determining the number of instances is to divide the number of users expected on the system, at worst case, by 300 and then round up to the next integer. For example, to support 750 users you would specify 3 instances, to support 500 users you would specify 2 instances, and so on. This method assumes the worst case; regular monitoring of the actual number of connections will reveal whether this needs to be altered.
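As a minimal illustration of this rule of thumb, assuming a hypothetical peak of 750 users and the conservative figure of 300 active users per JVM instance, the calculation can be scripted as follows:

# Hypothetical sizing sketch; PEAK_USERS and the divisor of 300 are assumptions
# taken from the rule of thumb above, not fixed product limits.
PEAK_USERS=750
INSTANCES=$(( (PEAK_USERS + 299) / 300 ))   # integer ceiling of PEAK_USERS / 300
echo "Suggested Web Application Server instances: $INSTANCES"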

Configuring the Client Thread Pool Size


One of the first settings that will need to be configured for the product is the Client Pool Size on the Web Application
Server.

The thread pool manages the number of active connections to the Web Server (see figure below). A pool is used as
it saves resources by allowing reuse of connection threads instead of constantly creating and destroying threads.

Figure 17 – Client Connection Pooling

Each Web Server calls it a different name:

» Oracle WebLogic Server - Default Execute Queue/Threads


» IBM WebSphere Server - Thread Pool

Note:
For newer versions of Oracle WebLogic the thread pool is automatically managed by the Web Application
Server itself so the settings explained in this section may not apply. If you choose to manually manage the
connections in Oracle WebLogic then the advice does apply.

For purposes of this article we will call it thread pool.

The number of connections allocated in the pool is not the same as the number of users logged on. As product is a
stateless application the thread pool represents the number of users actually hitting the web server, not idle users.
Idle users in a stateless application consume little or no resources (actually the only resource an inactive user holds
is an open socket to the web server).

Therefore the size of the thread pool at any time is the number of ACTIVE users using the product. This is the peak
concurrent users from any channel. For the product, the number of users for the Web Server is dictated by this
formula



Number of Active "Users" = Number of expected peak active concurrent online users using the product + Number of
expected peak active concurrent calls to the product for Web Services using IWS via SOAP or REST + Number of
expected peak concurrent active calls to product from OSB16.

Web Services threads should be treated as users as well. This is because they typically share the same thread pool.

Figure 18 – Shared connection pooling

Thread pools are not static in size, they can grow and shrink in size depending on the traffic volumes experienced.
For product, thread pools have three attributes that need to be considered for sizing:
» Minimum Size - This is the size of the thread pool at Web Application Server startup time and the absolute minimum if the pool is shrunk due to inactivity. For the product, this typically represents the typical load on the Web Application Server, in other words the typical number of active users on the system at any time. Most customers either use the typical load for the day period or the typical load for after business hours. The latter is used where sites want to minimize resource usage, as the pool size is directly related to the amount of memory used by the Web Application Server. The higher the minimum, the higher the memory usage for the server (even at rest).
» Maximum Size - This is the maximum size the thread pool can grow to within the Web Application Server when responding to the peak load of traffic. For the product this typically represents the largest amount of traffic expected at any point in time. If the maximum is set too low for the load then end users will experience delays even getting a connection to the Web Application Server. Again, the value here is tied to memory usage: the higher the value, the higher the memory footprint at peak.
» Inactivity Tolerance - This value (usually in seconds) is the amount of time that a thread is not allocated to a user before it is destroyed. This value is used to reduce the pool size when it has grown above the minimum and there is a drop in traffic. Each Web Application Server will have a default and even a different name for it. Typically customers leave the default, but it is worth noting in case it needs changing in the future.
How do you work out the pool sizes? The product does not have a specific recommendation as it varies according to
the volume of transactions but the following has been observed at customer sites:

16 Oracle Service Bus is only applicable when using the Oracle Utilities adapters for Oracle Service Bus.



For the minimum pool size, set the value to the minimum number of active users for your site. This may be deduced from testing, but be aware that each transaction has a different duration depending on the transaction type (Maintenance, List and Search) and the actual data used in the transaction. Experience has shown that dividing the number of defined users by three (3) can be a good rule of thumb; several product customers have noticed that only about a third of their users are active at any time. This rule of thumb may not apply to your site, but at least it may be used as a guide.

As for the maximum, the only advice that is applicable is that the value should NOT equal the number of users you have defined to the system. The value will vary according to the expected peak traffic experienced at the site. Customers have used between 33-70% of the number of defined users as the setting for the maximum pool size. To determine the optimum value for your site, it may be necessary to use trial and error.
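A minimal sketch applying these rules of thumb, assuming a hypothetical site with 900 defined users (the one third and 33-70% figures are the customer observations quoted above, not product limits):

# Hypothetical pool sizing sketch; DEFINED_USERS and both ratios are assumptions.
DEFINED_USERS=900
MIN_POOL=$(( DEFINED_USERS / 3 ))          # roughly one third active at any time
MAX_POOL=$(( DEFINED_USERS * 70 / 100 ))   # upper end of the observed 33-70% range
echo "Candidate minimum thread pool size: $MIN_POOL"
echo "Candidate maximum thread pool size: $MAX_POOL"

The values produced this way are only a starting point and should be refined by the monitoring described below.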

Note:
Setting the minimum and maximum to higher than normal values may waste memory resources on the
Web Application Server and may cause performance degradation.

Once you have applied these settings in your configuration you will need to monitor them to see whether the minimums and maximums need adjusting.

Customers have determined their own rules of thumb and get to the sweet spot after a few weeks or months of
testing or production.

Defining external LDAP to the Web Application Server


Note:
A detailed discussion of LDAP integration is available in the LDAP Integration (Doc Id: 774783.1)
whitepaper available from My Oracle Support.

Lightweight Directory Access Protocol (LDAP) is promoted as a means to leverage an organizational directory as a principal registry for product user authentication. Therefore, as part of the security setup of the product, you may need to integrate with an onsite LDAP security repository. This is supported directly by the Web Application Server software and the product does not require additional configuration.

Each of the Web Application Server vendors has specific instructions for integrating LDAP, but the same process is followed:
» Determine LDAP Query - The LDAP query to find the users must be determined first. Even though LDAP is a standard protocol defined by the IETF, the repository structure itself will vary from vendor to vendor, and even the same vendor's repository structure will vary from customer to customer as it can be altered to suit the business model. This is the hardest part of the process, as the query needs to be correct or it will not return the right records; it is akin to submitting the wrong SQL statement. There are tools, like adfind (for Microsoft ADS, for example), to help you with this process (see the sketch after this list).
» Define LDAP settings to Web Application Server - Input the query and credentials to access the LDAP repository. This will vary between Web Application Servers but basically you need to define the following:
» The location (host) of the LDAP server(s)
» The port numbers for the LDAP server(s) (usually 389)
» The credentials used to read the LDAP server(s) (userid/password)
» The LDAP query to get the users (and sometimes groups for some Web Application Servers).
» (Optional) Cache settings to save data retrieved from the LDAP server for performance reasons.
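As an illustration only, the standard ldapsearch client can be used to verify a candidate query before it is configured in the Web Application Server. The host, bind credentials, base DN and filter below are hypothetical and must be replaced with values that match your directory structure:

# Hypothetical verification of an LDAP user query; all values are illustrative.
ldapsearch -H ldap://ldaphost:389 \
  -D "cn=ldapreader,dc=example,dc=com" -w readerpassword \
  -b "ou=people,dc=example,dc=com" \
  "(&(objectClass=person)(uid=jsmith))" cn mail

If the filter returns the expected user entries here, the same query should return the right records when defined to the Web Application Server.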



Note:
Ensure that the LDAP you have specified contains a definition of the administration account you use to
start/stop/administrate product, else if you have made a mistake it may not be possible to restart the Web
Application Server. To reduce the risk of this happening, some sites define two repositories, one to the
LDAP server and one to the default security repository provided by the Web Application Server vendor as
a precaution. The latter is used to house the administration accounts you do not want to store in the
company LDAP.

» Restart to reflect changes - Restart the Web Application Server.


For more information see the following sites for your Web Application Server:
» Oracle WebLogic 12.2.1 - Configuring LDAP Providers
» Oracle WebLogic 12.1.1 - Configuring LDAP Providers

Appropriate use of AppViewer


The AppViewer is a component of product that displays meta-data in a more useable format. In past versions of the
product, it was preloaded with every product environment. Typically the information is used for design and
development work only.

To make the best use of the AppViewer the following advice is offered:

» The AppViewer is provided blank intentionally. It must be primed using a predefined set of Batch jobs. This will
take data from the meta-data (including ANY customizations) and generate it. You will need to run the jobs
regularly if you update the meta-data regularly and want the information reflected in the Application Viewer.

Batch Control   Usage

F1-AVALG        Generate AppViewer XML file(s) for Algorithm data (includes javadocs). This is code generation as well.
F1-AVBT         Generate AppViewer XML file(s) for Batch Control. This is useful for run book information.
F1-AVMO         Generate AppViewer XML file(s) for Maintenance Object data
F1-AVTBL        Generate AppViewer XML file(s) for Table/Field data
F1-AVTD         Generate AppViewer XML file(s) for To Do Type

» The introduction of the batch jobs means you can decide which information is important for your site to display in the AppViewer. For example, if you do not wish to have To Do Types documented then you can omit that information by not running that job. If you wish to populate ALL the information then you can use the genappvieweritems command (or genappvieweritems.sh for UNIX), as shown below.
Consider only populating the information in design and development environments to save disk space. The AppViewer can extend to a number of gigabytes if fully loaded.
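A minimal usage sketch, assuming the command is run from within an initialized product environment:

# Regenerate all AppViewer content after meta-data changes.
# UNIX/Linux:
genappvieweritems.sh
# Windows:
# genappvieweritems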

Fine Grained JVM Options


The utilities provided with the Oracle Utilities Application Framework invoke a Java command line for the Web
Application Server, Business Application Server and batch components of the architecture. Whilst the memory
arguments and java options are standardized in the utilities, some sites have found that changing the defaults
provided allows for improvements in performance and stability.

In releases of the Oracle Utilities Application Framework prior to V4 this meant manually changing the scripts provided as utilities with the product, changes which can be overridden in upgrades. In Oracle Utilities Application Framework V4 and above it is possible to set the memory requirements and additional JVM options from configuration parameters. The following table lists the settings that can be altered using the configureEnv utility:

Configuration Setting Component Usage

ANT_ADDITIONAL_OPT ANT Additional java options for the ANT make tool.

ANT_OPT_MAX ANT Maximum memory size for ANT make tool.

ANT_OPT_MIN ANT Minimum memory size for ANT make tool.

BATCH_MEMORY_ADDITIONAL_OPT Batch Additional java options for Batch Threadpool workers.

BATCH_MEMORY_OPT_MAX Batch Maximum memory for Batch Threadpool workers.

BATCH_MEMORY_OPT_MAXPERMSIZE Batch Maximum permanent generation size for Batch Threadpool workers.

BATCH_MEMORY_OPT_MIN Batch Minimum memory for Batch Threadpool workers.

WEB_ADDITIONAL_OPT Web/Business Additional java options for J2EE Web Application Server.

WEB_MEMORY_OPT_MAX Web/Business Maximum memory for J2EE Web Application Server.

WEB_MEMORY_OPT_MAXPERMSIZE Web/Business Maximum permanent generation size for J2EE Web Application Server.

WEB_MEMORY_OPT_MIN Web/Business Minimum memory for J2EE Web Application Server.

The values for these settings will vary according to your site needs and the JVM vendor used at your site. The following guidelines should be considered when changing these values:
» The additional java options supported by each JVM vendor are slightly different to take advantage of specific platform requirements of the JVM. Refer to the JVM options documentation provided with your JVM. For Oracle/Sun based JVMs refer to JVM HotSpot VM Options.
» Ensure any options specified are within the constraints and restrictions of the JVM. For example, setting invalid values may result in failure or unexpected behavior.
» Do not specify the -Xms, -Xmx or -XX:PermSize parameters as additional options as these already have dedicated settings.
The following common settings have been used by customers:

Configuration Setting                Usage

-XX:+UseParallelGC                   Use Parallel Garbage Collection
-XX:+MaxFDLimit                      Bump the number of file descriptors to the maximum (Solaris only)
-XX:+UseGCOverheadLimit              Use a policy that limits the proportion of the VM's time that is spent in Garbage Collection before an OutOfMemory error is thrown
-XX:+UseLargePages                   Use large page memory. See Large Memory Pages for more details.
-XX:+HeapDumpOnOutOfMemoryError      Dump the heap to a file when java.lang.OutOfMemoryError is thrown. Commonly used by Oracle Support if necessary.
-XX:HeapDumpPath=<path and name>     Location and name of the dump file. Commonly used by Oracle Support if necessary.
-XX:+PrintGC                         Print a message when garbage collection occurs
-XX:+UnlockCommercialFeatures        Used with FlightRecorder to enable Java Mission Control
-XX:+FlightRecorder                  Allow Flight Recorder to be used on this JVM for Java Mission Control
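As a hedged example, a combination of the settings above could be supplied through the WEB_ADDITIONAL_OPT parameter via the configureEnv utility; the heap dump path shown is purely illustrative and should be replaced with a location appropriate to your site:

WEB_ADDITIONAL_OPT=-XX:+UseParallelGC -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/u01/dumps/web_heap.hprof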

Note:
The Production Environment Configuration Guidelines (Doc Id: 1068958.1) whitepaper available from My
Oracle Support contains advice for settings for all versions of the Oracle Utilities Application Framework
based products.

Customizing the server context


In past versions of the Oracle Utilities Application Framework the URL used by the product was fixed on certain
platforms. The URL included the context spl or splapp depending on the platform and the J2EE Web Application
server used. In Oracle Utilities Application Framework V4 and above, it is now possible to specify a custom context
as part of the installation process. This allows the following URL to be used:

http://<host>:<port>/<server>/cis.jsp

as the default URL with the following settings:

<host>    The hostname for the Web Application Server.

<port>    The port number allocated to WL_PORT at installation time. To avoid specifying the port number on the URL, a value of 80 may be used. This value can only be specified once per Web Application Server machine.

<server>  The server context, which can be set using WEB_CONTEXT_ROOT at installation time. This value must be valid for the J2EE Web Application Server and is restricted to a single text value without any embedded blanks or special characters (such as the directory character).
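As a purely illustrative example, if WEB_CONTEXT_ROOT were set to myapp on a host named ouafhost with WL_PORT set to 80 (all hypothetical values), the resulting URL would be:

http://ouafhost/myapp/cis.jsp

(the port is omitted from the URL because port 80 is the HTTP default).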

Clustering or Managed?
One of the decisions that must be made, when dealing with multiple web application servers, is to whether the
servers will be clustered or managed. The attributes of each style are outlined below:
» Clustered – A cluster is a group of servers running a Web application server simultaneously, appearing to the
users as if it were a single server (usually managed by a separate administration server). The advantages of
using a cluster are that you can manage the servers as a group and also the servers communicate to each other
to monitor availability. Clusters can load balance within themselves as they are in constant communication with
each other. The disadvantages are that there is an overhead in communication (usually each server uses
multicast to communicate to the other servers in a cluster) and each server must use a different IP address and
port number. This means clusters can only operate on one machine per server. The figure below summarizes a
cluster:



Figure 19 – Example clustered server architecture

» Managed – A managed set of Web Application servers that are independent of each other. They can be housed
on a single machine or multiple machines and can be housed on machines of differing size. The advantage of
managed servers is that each server can be targeted for specific user groups and can be managed
independently. There is no additional communication between the servers. A separate administration server can
manage the servers but that role can be taken by one of the managed servers if desired. The disadvantages are
that the load balancing software/hardware housed between the users and the managed servers performs the load
balancing and that deployment must be performed individually. The figure below summarizes managed servers:

Figure 20 – Example managed server architecture

There are no clear winners between clustering and managed Web Application Servers as the main factors in the
decision are:
» Amount of hardware – Clustering requires a hardware server per server. Sites where only a small number of servers are deployed cannot use clustering.
» Maintenance Effort – Clustering can reduce maintenance overhead if there are a large number of servers involved. Managed servers require individual maintenance.
» Tolerance for multi-casting – Some sites ban multi-casting as it can be perceived as an unacceptable overhead on the network. Deploying a private network between the servers can minimize this, though this is more expensive.
» Flexibility – Many sites use managed due to its flexibility in routing particular traffic to particular servers. For
example, setting up specific servers for non-call center traffic (e.g. XAI, interfaces, depots).
Whether your site uses clustering or managed servers does not factor into high availability solutions as customers
have deployed high availability solutions using either technique.

Note:
For more information refer to Setting Up WebLogic Clusters for information on clustering.

Clustering and Environmental configuration settings


The configuration files used by the Oracle Utilities Application Framework specify a number of environmental
focused settings (e.g. hostnames, ports, file paths etc). These are used by the runtime of the Oracle Utilities
Application Framework to orientate to the correct environment. Given these environment settings are embedded in
the configuration files, there may be an impact on sites using clustering.

To support clustering with embedded environmental settings the following guidelines are recommended:
» Host Name settings – In a clustered environment the hostname used for any configuration setting should be the
cluster host or the load balancing proxy used for the cluster. To access a cluster, the users (or servers) need to
access a single URL; the host component of that URL should be used for any host name configuration settings.
» Custom Context – In Oracle Utilities Application Framework V4 and above, it is possible to support a custom
URL context for use with the product at installation time. In a clustered environment, the context should be
common and therefore the setting of this value should be the same across all nodes of a cluster.
» Port Numbers – As part of the URL used for the product, a port number can be explicitly used. In most sites, Port
80 is used for production as it does not need to be specified on the URL by users. In a clustered environment this
port should be common and therefore the setting of this value should be the same across all nodes of a cluster.
Most J2EE Application Server vendors insist that all nodes of a cluster have the same port number (but different
hostnames).
» File Locations - The product requires some knowledge of where environmental specific information is stored.
This information is then configured to inform the product where specific configuration files and important
directories are located. Installing the software in a common location or on the same location on each node can
help allow the file locations to support clustering.

Allocate port numbers appropriately


When installing a copy of the product you need to allocate a number of port numbers for each environment. It is recommended to allocate a previously unused range of ports per environment to avoid port conflicts.

The following table outlines all the port numbers required by product at installation time:

Port                              P/I   Comments

BATCH_RMI_PORT                    I     Default JMX port for managing and monitoring the Batch threadpool
BSN_JMX_RMI_PORT_PERFORMANCE      I     Default JMX port used for Business App Server monitoring
BSN_RMIPORT                       I     JVM child process starting port number (COBOL products only)
COHERENCE_CLUSTER_PORT            P     Port used for the Coherence cluster (multi-cast only). May be overridden in configuration for unicast.
DBPORT                            P     Database connection port
OSB_PORT_NUMBER                   I     Port allocated to the Oracle Service Bus interface (if available)
SOA_PORT_NUMBER                   I     Port allocated to Oracle SOA (if available)
WEB_JMX_RMI_PORT_PERFORMANCE      I     Default JMX port used for Web App Server monitoring
WEB_WLPORT                        P     Oracle WebLogic Web Server port for the online channel (HTTP)
WEB_WLSSLPORT                     P     Oracle WebLogic Web Server secure port for the online channel (HTTPS)
WLS_ADMIN_PORT                    I     Oracle WebLogic Administration port

Legend: P – Port allocated prior to installation of the product, I – Port allocated during installation of the product.

Prior to installation of product, the database and Web Application Server need to be installed and the ports allocated
to these components recorded and provided for the installation of the product (they are indicated with a P in the
table). Each vendor will have the port definitions stored in different places. Refer to the vendor documentation for
more information.

When allocating ports (indicated with an I in the table) during the installation the following advice may be useful:

» Pick the same port numbering scheme per environment to save time allocating ports. Some sites find using the same last digit for each type of port is helpful, for example having 4 as the last digit allocated for BSN_RMIPORT (6504, 7914, 9724, 22034… etc).
» BSN_RMIPORT denotes a starting port. The number indicates the start of the port range and JVMCOUNT determines the number of ports allocated. Ensure that there are free ports in the range starting from that port number.

Note:
BSN_RMIPORT and JVMCOUNT only apply to products using COBOL support. These values are not supported in Oracle Utilities Application Framework V4.3.x and above.

» Document the ports used in your documentation or services file for future reference.
» Do not allocate ports that are already in use, as there will be port conflicts and potentially the applications will refuse to work (a simple check is shown below).
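A minimal check on a UNIX/Linux host, assuming a hypothetical candidate port of 6504; if the command produces no output, nothing on the host is currently using that port:

# Check whether a candidate port is already in use before allocating it.
netstat -an | grep 6504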

Monitoring and Managing the Web Application Server using JMX


In Oracle Utilities Application Framework Version 4.0 it is possible to enable JMX performance statistics to allow
collection, management and monitoring of JVM information for the Web Application Server. For backward
compatibility, the JMX enabled facilities are disabled by default. To use this facility you must execute the
configureEnv utility with the –a option (Advanced Menu) and specify the following settings:

Setting Contents

JMX Enablement System Userid Userid used for logging onto JMX Mbeans

JMX Enablement System Password Password to be used for JMX Enablement System Userid

RMI Port for JMX Web Port number to allocate to the JMX for the Web Application Server

This information is added to the spl.properties file in the etc/conf/root/WEB-INF/classes subdirectory for the environment, for the Web Application Server. An example of the applicable settings is shown below:

spl.runtime.management.rmi.port=..
spl.runtime.management.connector.url.default=service:jmx:rmi:///jndi/rmi://hostname:../oracle/ouaf/webAppConnector
jmx.remote.x.password.file=scripts/ouaf.jmx.password.file
jmx.remote.x.access.file=scripts/ouaf.jmx.access.file
ouaf.jmx.com.splwg.base.support.management.mbean.JVMInfo=enabled
ouaf.jmx.com.splwg.base.web.mbeans.FlushBean=enabled

The following settings are important to the JMX monitor:

» The spl.runtime.management.connector.url.default is the JMX URL to be used in the JMX console or JMX browser.
» The jmx.remote.x.password.file and jmx.remote.x.access.file are the default security setup for JMX. These provide a basic security setup. For more information about the files and alternative security setups refer to Monitoring and Management Using JMX Technology.
» The ouaf.jmx.* settings enable individual beans at startup time. These may also be enabled at runtime.
Once the Web Application Server component is started, the JMX MBeans defined in this configuration are started and a JSR160 compliant JMX console or JMX browser can be used to connect to them. The remote URL and credentials are as configured above.
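For example, the jconsole utility shipped with the JDK can connect using the service URL from the spl.properties settings above; the hostname and RMI port are placeholders and must match your configured values:

# Connect a JSR160 compliant JMX console to the Web Application Server MBeans.
jconsole service:jmx:rmi:///jndi/rmi://<hostname>:<rmi_port>/oracle/ouaf/webAppConnector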

Within the JMX console or JMX browser there are a number of specific facilities that are available:

» It is possible to manage the data within the Web Application Server cache from JMX. In past releases of the Oracle Utilities Application Framework this was possible using utility URLs, which required the IT group to log on to the product to issue commands. This is still possible but can be replaced with JMX console commands. This is controlled by the FlushBean MBean.
» It is possible to get environmental information about the Web Application Server Java Virtual Machine (JVM) for support purposes. In past releases of the Oracle Utilities Application Framework this was possible using utility URLs, which required the IT group to log on to the product to issue commands. This is still possible but can be replaced with JMX console commands. This is controlled by the JVMInfo MBean.
» It is possible to get internal JVM information about the Web Application Server using the JVMSystem MBean. This is an extension of the base Java MXBeans (package java.lang.management). By default these are disabled and can be enabled by executing the enableJVMSystemBeans operation from the BaseMasterBean. When enabled, the following additional areas can be monitored via JMX for the Web Application Server:
» Class Loading statistics
» Memory statistics
» Operating System statistics (statistics vary by platform)
» JVM Runtime information (additional to JVMInfo)
» Thread statistics – statistics on individual java threads.

Note:
No confirmation (i.e. Are You Sure?) dialog is provided with most JMX consoles or JMX browser so care
should be taken when issuing commands.

Enabling autodeployment for Oracle WebLogic console


Note:
The technique shown below applies to Oracle Utilities Application Framework V4.1 and above.
For other versions of the Oracle Utilities Application Framework custom templates or manual changes are
necessary from the Oracle WebLogic console. Refer to the Server Administration Guide for those
products for more information.



By default, the Oracle WebLogic administration console is deployed on demand, on first use, when using the default templates supplied by the product. This behavior can be altered to autodeploy the console at startup to avoid the initial delay when first using the console.

To autodeploy the console on startup add the following to the %SPLEBASE%\templates\CM_config.xml.win.exit_3.include user exit file (for Windows) or the $SPLEBASE/templates/CM_config.xml.exit_3.include user exit file (for Linux/UNIX):

<internal-apps-deploy-on-demand-enabled>false</internal-apps-deploy-on-demand-enabled>

Run the initialSetup[.sh] utility to reflect the change. This configuration will be added to the Oracle WebLogic configuration.
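For example, on Linux/UNIX the user exit file can be created and the change applied as follows (a hedged sketch using the file and utility names quoted above):

# Create (or append to) the user exit file and regenerate the configuration.
echo '<internal-apps-deploy-on-demand-enabled>false</internal-apps-deploy-on-demand-enabled>' \
  >> $SPLEBASE/templates/CM_config.xml.exit_3.include
initialSetup.sh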

Password Management solution for Oracle WebLogic


One of the common requests for an enhancement is the ability for users to change their application passwords from
within the product. Typically password management is scoped outside the product's domain as it is considered
infrastructure. This does not mean the product need not provide the interface to change the password, but it is the
infrastructure's responsibility to provide a mechanism to change the passwords used in the security store.

The issue becomes then if the infrastructure provides such an interface for the product to hook into. There are a
number of patterns in this area:

» Customers implement an identity management solution to manage the passwords, expiry and rules. In this case
the implementation needs to interface to the identity management solution by calling the appropriate facilities in
the identity management solution around passwords. Of course, the J2EE Web Application Server used is then
interfaced into the identity management solution or the related security store to provide the authentication
mechanism.
» Customers link the security store for authentication directly to the security configuration of the J2EE Web
Application Server. In this case, the J2EE Web Application Server provides the interface to the password change
facility.
In the latter case, if you are a customer using Oracle WebLogic, there is an example JSP available under Password Change Sample to allow an application to change the passwords, irrespective of the security used. This example can be altered to suit your site's standards and linked to the product as a custom JSP via a navigation key attached to the appropriate menu.

Error configuring Oracle WebLogic credentials


When the product is installed with Oracle WebLogic, the security repository used by the environment is populated with an initial Administration System userid (usually system or weblogic) to be used to create other credentials post installation. To use this user within Oracle WebLogic it must be encrypted (along with the password) before it can be used. The installer calls a java class within Oracle WebLogic to encrypt this userid and password, but if the path to Oracle WebLogic specified in the WEB_SERVER_HOME (or WL_HOME17) parameter is incorrect, the installer will return this error when attempting to encrypt the user:

…<crit> Error occured while running java -Dweblogic.RootDirectory=…/splapp weblogic.security.Encrypt :
Output is Exception in thread "main" java.lang.NoClassDefFoundError: weblogic/security/Encrypt
Caused by: java.lang.ClassNotFoundException: weblogic.security.Encrypt
Could not find the main class: weblogic.security.Encrypt. Program will exit.

17 WEB_SERVER_HOME is used by Oracle Utilities Application Framework V4.x and above.
To fix this issue, set WEB_SERVER_HOME using the configureEnv[.sh] -i utility (or set WL_HOME) so the appropriate security encryption classes can be accessed.
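A quick way to verify the setting before re-running the installation is to confirm that the configured location actually contains the Oracle WebLogic server classes; weblogic.jar normally resides under the server/lib directory of a WebLogic installation (the commands below assume WL_HOME is set in the current shell):

# Verify the configured WebLogic home points at a real installation.
echo $WL_HOME
ls $WL_HOME/server/lib/weblogic.jar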

Corrupted SPLApp.war
By default, the product installer uses archive mode for the product deployment (this is true for Oracle WebLogic and
IBM WebSphere – though in Oracle WebLogic expanded mode is also supported). When using archive mode the
product utilities build the product into a set of J2EE WAR and EAR files prior to deployment.

The WAR and EAR build is performed by the initialSetup[.sh] utility. Refer to the Server Administration
Guides for the product for a detailed description of the options and operations supported by this utility.

If, for any reason, the WAR or EAR files are not built completely, and are therefore corrupted, then the product start may abort. This can manifest in a number of error messages depending on the nature of the corruption:

… <info> ERROR: …/splapp/applications/SPLApp.war war file does not exist. Problem with the environment. Exiting.

or

weblogic.management.DeploymentException: Unexpected end of ZLIB input stream
at weblogic.application.internal.EarDeploymentFactory.findOrCreateComponentMBeans(EarDeploymentFactory.java:189)

To resolve this issue, rerun the initialSetup[.sh] utility to rebuild the WAR and EAR files.

Web Application Server Logs


In the Server Administration Guide for your product the product specific logs are outlined, including their formats and locations. Given the product runs within a J2EE Web Application Server, that server also has a set of log files that can be used for diagnostic information.

The following outlines the default set of J2EE Web Application Server log files:

Oracle WebLogic logs:
» Server Log – Server messages. A log exists per server defined in the domain; detailed error messages and product information are contained in this log.
» Domain Log – Domain level messages. This log contains domain messages and rolled up messages, at a particular level, from the servers in the domain.
» HTTP Access Log (optional) – HTTP resource log (aka Apache Log).
» Audit Log (optional) – Audit Provider log.

Product logs:
» Web Application Log (spl_web) – Captures product web application messages.
» Business Application Log (spl_service) – Captures product business application server messages.
» Inbound Web Services Log (spl_iws or spl_xai) – Captures product web services messages.
» Initial Setup Log (initialSetup) – Captures configuration generation utility messages.
» Configuration Log (configureEnv) – Captures configuration utility messages.

Refer to the Oracle WebLogic documentation and Server Administration Guide for details of the logs, location and
format.

Enabling additional Java options


It is possible to set specialist additional options on the java command line that is used to run each component, for development or debug purposes. For Oracle Utilities Application Framework V4.0 and above, you can achieve this by specifying the relevant options as the values of the following settings:

Component Setting

Web/Business Application Server WEB_ADDITIONAL_OPT

Batch BATCH_MEMORY_ADDITIONAL_OPT

You specify the values as you would on the java command line, as outlined by your Java vendor. For example, for Oracle WebLogic/Oracle Java customers:
» It is possible to enable java debugging (using jdb) of your java code using the -Xrunjdwp option (see the sketch below).
» It is possible to enable verbose class loading into the log files using the -verbose java option.
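A minimal sketch combining both, assuming the options are supplied through WEB_ADDITIONAL_OPT, that -verbose:class is used as the class loading form of the -verbose option, and that the debug port (8787) is a hypothetical value chosen for the example:

WEB_ADDITIONAL_OPT=-Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8787 -verbose:class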

Note:
The combination of java options that can be used must be valid for the JVM version and vendor used.

Note:
For customers on previous versions of the Oracle Utilities Application Framework, these setting must be
manually set in the scripts used to initiate the JVM. Please note, changes to any base scripts may be
overridden when initialSetup[.sh] is executed.

Using setUserOverrides.sh for Oracle WebLogic 12c


Note:
This feature only applies to UNIX/Linux installations using Oracle WebLogic that use the native mode installation.

In Oracle WebLogic 12c there is a feature where, if the script setUserOverrides.sh exists in $DOMAIN_HOME/bin, the script is called by the Node Manager or the Administration Server at startup time. This script is user defined and is useful for the following situations:

» The SPLEBASE setting can be set in this script to implement native mode installation. This setting is used by the
Oracle Utilities Application Framework to allow configuration files to get used from the file system rather than
within the EAR/WAR file at runtime. For example:
SPLEBASE="/u01/utilities/product"
export SPLEBASE
» The java custom memory and additional parameters can be added to the server startup using the
USER_MEM_ARGS environment setting18. For example:

USER_MEM_ARGS="-Xms1024m -Xmx4096m -XX:CompileThreshold=8000 -


XX:PermSize=500m -
Djava.security.auth.login.config=/u01/utilities/product/splapp/config/
java.login.config -XX:+UnlockCommercialFeatures -XX:+FlightRecorder"
export USER_MEM_ARGS
» The library path LD_LIBRARY_PATH can be set (if needed). For example:
LD_LIBRARY_PATH="/u01/utilities/product/runtime:$LD_LIBRARY_PATH"
export LD_LIBRARY_PATH
» The classpath sent to the server can be manipulated using EXT_PRE_CLASSPATH. For example:
EXT_PRE_CLASSPATH="…"
export EXT_PRE_CLASSPATH

Note:
If there are multiple servers on this domain ensure the script takes this into account using the SERVERNAME
variable.
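A minimal sketch of such a script for a domain with two hypothetical servers (server names and paths are illustrative; the variable name follows the note above):

# $DOMAIN_HOME/bin/setUserOverrides.sh - branch on the server being started.
case "$SERVERNAME" in
  ouaf_dev_server)
    SPLEBASE="/u01/utilities/dev"
    ;;
  ouaf_test_server)
    SPLEBASE="/u01/utilities/test"
    ;;
esac
export SPLEBASE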

Native vs Embedded Oracle WebLogic Mode


By default, when using Oracle WebLogic, the Oracle Utilities Application Framework based product is installed in embedded mode. This means that the configuration files and files within the product are used directly by Oracle WebLogic, including configuration files necessary for Oracle WebLogic itself (e.g. config.xml). The installation process generates the necessary configuration files and then, using generated versions of Oracle WebLogic utilities, points the Oracle WebLogic runtime to the files in the product environment. The term embedded is used to describe the fact that Oracle WebLogic uses files embedded in the product rather than its own files; it simply provides the runtime for the environment.

This approach has advantages and disadvantages:

Advantages:
» Simple and easy to implement configuration, ideal for development and other non-production environments
» One Oracle WebLogic installation can be shared across many environments on the same host
» Default security setup
» Common configuration change scenarios are handled by configuration settings

Disadvantages:
» Changes to domain configuration within the Oracle WebLogic console must be reflected in configuration files using user exits or custom templates to retain changes across patches/upgrades
» Does not support clustering without complex manual changes to configuration files
» Administration Server deployed with product
» OEM does not recognize Oracle WebLogic targets without manual configuration of discovery
» Utilities provided must be used to manage the product
» Limited changes to some features (such as domain)

18 If the server start parameters on Oracle WebLogic console are going to be used avoid setting this value.

This setup is ideal for development and other non-production environments where you need multiple copies of the
product on a single host but may not be appropriate for production environments where advanced security setup
and clustering are typically required.

The alternative is to install the product in what is termed native mode. Typically Oracle WebLogic J2EE Web Applications are deployed directly to Oracle WebLogic and managed that way. This has the advantage of gaining full access to the Oracle WebLogic facilities, like advanced configuration and, more importantly, the ability to cluster the product across multiple nodes. Oracle Utilities Application Framework V4.x and above can be installed using this mode with minor changes to the installation process. It is also possible to convert an embedded installation into a native installation with minor changes, if migration to this mode is appropriate.

The native mode allows the product to have access to the features of Oracle WebLogic with fewer configuration steps than embedded mode. The advantages and disadvantages of this mode are outlined below:

Advantages:
» Native support for Clustering/Managed Servers
» Changes to the domain do not require manual changes to templates
» Administration Server can be separated from product servers
» OEM natively recognizes Oracle WebLogic targets
» Native facilities in Oracle WebLogic and/or Oracle Enterprise Manager can be used to operate and monitor the product
» Configuration and operational facilities of Oracle WebLogic can be used (including documented variations)

Disadvantages:
» Support for multiple environments per domain is limited at the present stage. Multiple Oracle WebLogic installations may be required if multiple environments are on the same host
» Requires some manual effort in setting up the domain, servers and security for the environment

The figure below illustrates the differences between the two modes:



[Figure 21 contrasts the two modes: in embedded mode Oracle WebLogic only provides the runtime, reading the product J2EE files, default WebLogic configuration, utilities and security realm, with the administration console deployed with the product, a non-clustered setup, automatic deployment, manual OEM registration and complex user exits; in native mode the product is deployed into Oracle WebLogic, giving console based administration, standard utilities, manual or automatic deployment, cluster or managed server support, advanced security and configuration, a separate Administration server and automatic OEM registration.]
Figure 21 – Native vs Embedded architecture

The two modes have different attributes and different approaches applicable to different situations. The following recommendations should be considered when deciding which mode to use:
» It is not impractical to use different modes for different environments. One mode will not usually satisfy all the needs of all environments.
» It is recommended to use native mode for production implementations as it offers flexibility, cluster support,
separation of the Administration function and the ability to use the advanced configuration elements of Oracle
WebLogic as well as Oracle Enterprise Manager (if applicable).
» It is recommended to use native mode if each environment is housed in a separate virtual machine, which is
common in virtualized implementations. This will allow configuration at the virtual machine level to be used and
reduces maintenance efforts.
» It is recommended to use embedded mode if more than one copy of the product exists on the same virtual or non-virtual host. The ability to share a common copy of Oracle WebLogic reduces the maintenance effort for multiple environments.
» It is recommended to use embedded mode for development environments where java based development is
taking place. This setup supports the use of the expanded mode features of Oracle WebLogic used by the Oracle
Utilities SDK, which requires access to expanded directories for multi-user development.
For more information about native mode installation refer to the Native Installation for Oracle Utilities Application
Framework (Doc Id: 1544969.1) and Implementing Oracle ExaLogic and/or Oracle WebLogic Clustering (Doc Id:
1334558.1) available from My Oracle Support.

CLIENT-CERT Support
Note:
In Oracle Utilities Application Framework V4.2.0.2.0 and above, CLIENT-CERT is supported from the configureEnv[.sh] utility directly.

One of the additional configuration options for the authentication of the product is to implement a Single Sign On solution or implement client certificates. Whilst most of the configuration for these features is performed in the Single Sign On product and/or J2EE Application Server, the Oracle Utilities Application Framework has to be configured to use that facility.

In most cases, to use these facilities the login configuration for the product has to be changed from FORM or BASIC to CLIENT-CERT. This informs the product that the credentials will be passed directly from the J2EE Application Server (via the Single Sign On solution, security providers or client certificates).

To make this change the following process must be performed:

» Logon to the machine that houses the environment to change as the product administrator.
» Take a copy of the web.xml.template to cm.web.xml.template in the same directory the original is
located in the templates subdirectory. This will inform the Oracle Utilities Application Framework to use this new
template instead of the base template.
» Edit the cm.web.xml.template file and replace the login-config section with a section configuring CLIENT-CERT. For example:
Replace:

<login-config>

<auth-method>@WEB_WLAUTHMETHOD@</auth-method>

<form-login-config>

<form-login-page>@WEB_FORM_LOGIN_PAGE@</form-login-page>

<form-error-page>@WEB_FORM_LOGIN_ERROR_PAGE@</form-error-page>

</form-login-config>

</login-config>

With:

<login-config>

<auth-method>CLIENT-CERT</auth-method>

</login-config>

Note:
For Oracle Utilities Application Framework V4.x customers this may need to be repeated for the templates for AppViewer (web.xml.appViewer.template) and online help (web.xml.help.template) if you wish to include those components in the same solution.

» Ensure the environment is shutdown prior to implementing any changes.


» Execute the initialSetup utility to implement the changes and rebuild the EAR files.

Note:
As the web.xml file has been changed and EAR file rebuilt, customers using native mode will have to
redeploy the SPLWeb application to reflect the change.

» Optionally, changes can be verified by viewing the web.xml files generated under the etc\conf subdirectory of the product installation.
» Restart the product.



The product is now configured to use CLIENT-CERT.

Implementing Work Managers


Oracle WebLogic supports Work Managers, which allow resource constraints to be allocated to servers. The main use for Work Managers is to provide service assurance by preventing traffic overload of servers. By default, Oracle WebLogic provides a global work manager with no limits.

It is possible to use Work Managers to constrain online transaction and Web Services traffic for Oracle Utilities Application Framework based products. Once a Work Manager is deployed, Oracle WebLogic will track traffic until the configured resource limit is reached. If the resource limit is reached, the server where the limit is attached will refuse further traffic, to assure existing traffic has enough resources to complete, until the load drops back below the limit. To use Work Managers effectively, it is recommended that clustering or multiple managed servers be used to maximize availability.

To implement Work Managers the following process should be performed:
» If you are using Oracle WebLogic in native mode, perform the following:
» Define a Capacity Constraint19 for use with the server. You can optionally deploy the constraint directly to the product server definition or via a custom Work Manager for tracking.
» Define a Work Manager and associate the fore-mentioned Capacity Constraint with the Work Manager.
Deploy the Work Manager to the product server.
» If you are using Oracle WebLogic in embedded mode, the following steps should be implemented:
» If the product is using Oracle Utilities Application Framework V4.x and above, create a user exit file in the
templates directory with the name cm.config.xml.exit_2.include (or
cm.config.xml.win.exit_2.include for Windows) with the following contents and execute
initialSetup to include the changes in the configuration:

<self-tuning>
  <capacity>
    <name>RequestLimit</name>
    <target>myserver</target>
    <count>150</count>
  </capacity>
  <work-manager>
    <name>MyWorkManager</name>
    <target>myserver</target>
    <capacity>RequestLimit</capacity>
    <ignore-stuck-threads>false</ignore-stuck-threads>
  </work-manager>
</self-tuning>

19 At the present time, the Capacity Constraint Work Manager Definition Type is the only supported constraint or request class.



Note:
The 150 connection limit is for illustration purposes only. Set the limit appropriate for your conditions.

Once implemented, Work Managers can be monitored from the Oracle WebLogic console.

Implementing Multiple Environments In A Single Domain


Note:
This technique can be applied to versions of the same product, multiple copies of the same product or a
mixture of different products.

By default, the Oracle Utilities applications are installed in embedded mode for Oracle WebLogic. Basically the product reuses an existing Oracle WebLogic installation and points the WebLogic runtime to the Oracle Utilities application runtime to run the product. It is called embedded as we are not using the Oracle WebLogic installation to house the product; the product uses files embedded within the product to run Oracle WebLogic. For instance, the security setup, boot.properties, config.xml etc and the command utilities to start/stop Oracle WebLogic are generated and embedded within the product.

Whilst the embedded installation is ideal for most environments, as it is simple, it has a number of disadvantages:
» Advanced facilities such as clustering and high availability cannot be easily implemented in embedded mode.
» Most of the configuration is defaulted such as the domain name and server names.
» The administration server is automatically included in each environment.
» You need to use text file based user exits to augment the embedded configuration for advanced configurations.
This requires manual efforts to maintain XML files in some cases.
To offer an alternative to the embedded installation, the ability to use a native installation method which houses the product inside Oracle WebLogic was introduced. This allows the site to take full advantage of Oracle WebLogic features and also manage the configuration from the Oracle WebLogic console or Oracle Enterprise Manager. For details of the features of the native installation refer to the Native Installation for Oracle Utilities Application Framework (Doc Id: 1544969.1) whitepaper available from My Oracle Support.
One of the interesting abilities available when using native mode is that it is possible to run multiple products or environments within the same domain. Basically this means you can reduce the number of administration consoles needed to manage your environments.

To use this facility the following process should be used:


» Install Oracle WebLogic as per the Oracle WebLogic Installation documentation and Native Installation for Oracle
Utilities Application Framework (Doc Id: 1544969.1) whitepapers available from My Oracle Support.
» Create a domain with an administration server using the Configuration Wizard shipped with Oracle WebLogic.
Logon to the administration console with the user you specified when you created the domain.
» Within the console create individual servers20 for each product or environment you want to house the products.
You should use machines with Node manager as well to allow for expansion and remote management if
necessary. With native mode, the administration console does not have to be on the same machine as the target
environments. Ensure each server is broadcasting on a different port.
» Install the products as outlined in the Native Installation for Oracle Utilities Application Framework (Doc Id: 1544969.1) whitepaper available from My Oracle Support and the product installation documentation, with the following additional advice:

20 Name the servers appropriately according to your site standards.



» Individual deployments in Oracle WebLogic need to be unique across a domain. By default, the product
creates a common set of names for each component. It is necessary to change these names during the
installation to avoid confusion in deployment. It is suggested to add an environment or product identifier as a
prefix or suffix to make the deployment unique. There are two settings that need to change:

Setting Current default Example

Business Server Application Name SPLService SPLServiceTEST1

Web Server Application Name SPLWeb SPLWebTEST1

» Ensure the deployment name is unique for every single deployment (even across products/environments).
For example, if you ran a TEST environment and a UAT environment on the same domain, you could use
SPLServiceTEST and SPLWebTEST for the TEST deployments and SPLServiceUAT and SPLWebUAT for the
UAT deployments. These are just examples.
» Ensure the paths in the Server Setup for the individual servers point to the classes in the relevant
environment installations. Ensure the SPLEBASE is set correctly in the server setup.
» Ensure the port numbers allocated to the Servers match the port numbers you specified in the product
installation for each server.
» The most important part of this is that you must alter the setDomain utility within the domain to set the
SPLEBASE variable appropriately for each SERVERNAME. If you forget this, the product may not start up
correctly. For example:

if [ "${SERVERNAME}" = "ouaf22server" ]
then

...

   SPLEBASE=/oracle/FW22
   export SPLEBASE

fi
» Deploy the deployments to the relevant server using the Oracle WebLogic console or WLST (a WLST sketch is
shown after this list). To save time, deploy the SPLServiceXXX deployment first and then the SPLWebXXX
deployment as per the Native Installation for Oracle Utilities Application Framework (Doc Id: 1544969.1)
whitepaper available from My Oracle Support.
» Start/stop the servers using the Administration console. Do not use spl[.sh] as you are operating in native
mode. All base Oracle WebLogic utilities can also be used, such as WLST.
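
For reference, the deployment step could be scripted with WLST along the lines of the minimal sketch below. The
administration URL, credentials, server name, deployment names and EAR paths are illustrative only and must be
replaced with the values chosen during your installation:

# Illustrative only - connect to the administration server for the domain
connect('weblogic', '<password>', 't3://adminhost:7001')
# Deploy the Business Application Server component first ...
deploy('SPLServiceTEST1', '/oracle/TEST1/splapp/applications/SPLService.ear', targets='ouaf-test1-server')
# ... then the Web Application Server component
deploy('SPLWebTEST1', '/oracle/TEST1/splapp/applications/SPLWeb.ear', targets='ouaf-test1-server')
disconnect()
exit()

The same approach can be repeated for each product or environment housed in the domain, substituting the unique
deployment names discussed above.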
To ensure optimal use of the domain a few considerations should be taken into account:
» All servers on this domain share the same authentication security setup.
» By default, all the J2EE resources are controlled by a common role/credential (typically cisusers). If you want to
separate the servers using different roles/credentials then you need to change the cisusers setting using the
configureEnv[.sh] -a settings for the Web Security Role/Web Principal Name/Application Viewer Security
Role/Application Viewer Principal Name to an appropriate setting for each product/environment.
» When using native mode, any changes to the EAR files need redeployment (it is an update deployment which is
far quicker). You can use the autodeploy features of Oracle WebLogic to minimize this effort21. If you ever run
initialSetup[.sh] an update redeployment is required.

21 Additional CPU usage is encountered when autodeployment is used as Oracle WebLogic regularly checks for updates.



» Any changes to properties files may not necessarily require redeployment at runtime as setting the SPLEBASE
uses the versions stored in the etc/conf directory. If you want to keep the EAR versions in synchronization then a
redeployment is necessary after running initialSetup[.sh].
» Embedded installations can be converted to this facility and retain the embedded installation as a fallback. The
embedded installation and native installation cannot be running at the same time as they share port numbers.
This is outlined in the Native Installation for Oracle Utilities Application Framework (Doc Id: 1544969.1)
whitepapers available from My Oracle Support.
Once this is done you can manage the deployments from the console including security and monitoring.

Note:
Customers using Oracle Enterprise Manager to manage the products or Oracle WebLogic will not
necessarily need to use this facility as Oracle Enterprise Manager already provides this capability.

Business Application Server Best Practices


The Business Application Server is used by the product to process the business logic. Whilst most of the advice for
the Web Application Server can be reused for the Business Application Server, there are a number of practices and
items of general advice that are specific to this tier in the architecture.

Cache Management
One of the features of the Oracle Utilities Application Framework is the implementation of a level 2 cache within the
architecture to provide performance benefits for commonly used configuration information. Generally the cache is
managed by the Oracle Utilities Application Framework automatically with little or no interaction from operators. By
default, the cache is reloaded as needed or every eight (8) hours, whichever occurs first. Some elements of the
cache, such as security information, are refreshed on a more frequent basis (every 30 minutes).

There are a number of cache management utilities to manually cause all or parts of the cache to refresh manually.
These utilities are documented in the Server Administration Guide for your product.

While these utilities are rarely used in production, they can be used by appropriately authorized personnel to make
sure the cache contains the correct information. Typically a manual refresh is required if the configuration data is
changed and needs to be reflected as soon as possible.

Using JMX with the Business Application Server


In Oracle Utilities Application Framework Version 4.0 it is possible to enable JMX performance statistics to allow
collection, management and monitoring of JVM information for the Business Application Server. For backward
compatibility, the JMX enabled facilities are disabled by default. To use this facility you must execute the
configureEnv utility with the –a option (Advanced Menu) and specify the following settings:

Setting Contents

JMX Enablement System Userid Userid used for logging onto JMX Mbeans

JMX Enablement System Password Password to be used for JMX Enablement System Userid

RMI Port for JMX Business Port number to allocate to the JMX for the Business Application Server

This information is added to the spl.properties file in the etc/conf/service subdirectory for the
environment, for the Business Application Server. An example of the applicable settings is shown below:



spl.runtime.management.rmi.port=…
spl.runtime.management.connector.url.default=service:jmx:rmi:///jndi/rmi://host:…/oracle/ouaf/ejbAppConnector
ouaf.jmx.com.splwg.ejb.service.management.PerformanceStatistics=enabled
jmx.remote.x.password.file=scripts/ouaf.jmx.password.file
jmx.remote.x.access.file=scripts/ouaf.jmx.access.file
The following settings are important to the JMX monitor:

» The spl.runtime.management.connector.url.default is the JMX URL to be used in the JMX console or
JMX browser.
» The jmx.remote.x.password.file and jmx.remote.x.access.file are the default security setup for
the JMX. These are for basic security setup. For more information about the files and alternative security setups
refer to Monitoring and Management Using JMX Technology.
» The ouaf.jmx.* settings enable individual beans at startup time. These may be enabled at runtime.
» Once the Business Application Server component is started, the JMX Mbeans defined in this configuration are
started and a JSR160 compliant JMX console or JMX browser can be used to connect to the JMX Mbeans. The
remote URL and credentials are provided as configured above.
» The only Mbean available with the Business Application Server is the PerformanceStatistics Mbean. This
Mbean collects object performance data for analysis. For customers familiar with the Oracle Tuxedo product, this
facility is similar to the txrpt facility available for performance analysis.
» The statistics are collected by the Mbean from the time the Mbean is enabled until the environment statistics are
reset. By default, the Mbean is enabled at startup time but may be disabled (or re-enabled) at any time using the
disableMbean or enableMbean operations from the PerformanceMbeanController Mbean.
» When using this Mbean there are a few recommendations:
» The completeExecutionDump operation returns a CSV of the performance statistics of individual application
services to the JMX console or JMX browser. This represents the current state of the statistics at that time.
» The reset operation resets the statistics within the Mbean to start collection afresh. This operation is handy for
measuring performance over a selected period.
There are other operations and attributes that return individual value information that may be of interest. Refer to the
Server Administration Guide provided with your product for a detailed description of what statistics are available.

Note:
No confirmation (i.e. Are You Sure?) dialog is provided with most JMX consoles or JMX browser so care
should be taken when issuing commands.

Replicating the txrpt statistics


One of the features customers of past releases of V1.x of Oracle Utilities Customer Care And Billing used to use to
gather performance data was the txrpt facility within Oracle Tuxedo. The utility would take performance data
gathered from every service call and produce summary statistics per hour. The statistics were the number of calls
and the average response time for each defined service. The txrpt utility collected the statistics from log files that
were enabled in the Oracle Tuxedo configuration. This information was useful in tracking the performance of
individual services within the product against a site's SLA.

With the advent of Oracle Utilities Application Framework V2.x and the removal of Oracle Tuxedo from the
architecture, this information was not available for collection as easily as it was originally. In Oracle Utilities
Application Framework, the implementation of the PerformanceStatistics Mbean allows for collection of
performance information in a similar fashion to txrpt. To achieve the same results as txrpt the following should be
performed:

» On the hour boundary the completeExecutionDump operation must be executed by your JMX console or JMX
browser to extract and save the CSV information to a file. The file should have the date and time of the collection
for reference reasons.
» After collection of the statistics has been completed, the reset operation should be executed from your JMX
console or JMX browser.
The information in the files can be collated according to the analysis required by your site. The CSV can be loaded
into a database for analysis or into your site's preferred spreadsheet or analysis tool. Remember that the date and
time of the collection is not recorded in the data; only the data itself is recorded.

Note:
While this process can be done manually using a JMX console such as jconsole, it is recommended
that the JMX console or JMX browser automate the collection process in the background. Refer to
the documentation of the JMX console or JMX browser to configure your console or browser to achieve
this.

This facility is flexible for a number of reasons:


» The time period for collection is not limited to hourly as txrpt was. The time collection period can be increased
or decreased according to your site standards. For example, you might want to collect the data every 10 minutes.
» The statistics are live and can be queried regardless of the collection process.
» The level of information is higher than the original txrpt. The following additional information is collected and
summarized:
» The data is now also summarized by the type of transaction that is performed. This will allow the site to assess
the performance of reads, updates, deletes, inserts etc separately.
» The last transaction recorded is detailed including the user. This information is useful for checking against other
statistics to assess where the performance is at the present moment.
» Statistics are already calculated by the utility prior to analysis. The txrpt utility only collected the average. This
facility collects the average, minimum (best case) and maximum (worst case) performance statistics in the
collection period.



Database Connection Management
Hibernate and UCP are used to provide a pool of connections to the database for the various components of the
product. A separate pool exists for online, IWS and background processes. The size of the pool can be set in the
hibernate.properties file.

The size of the pool can vary from component to component with the following guidelines:

» The minimum pool size of the product should be set to the average number of connections needed for the mode
of access. By default it is set to one (1), which is sufficient for non-production, but each new connection required
for the traffic needs to be established prior to use. The establishment of an individual database connection can
cause delays to the transaction using the connection as it waits for the connection to be established, which
negates the benefit of pooling connections. Track the number of connections used at normal traffic load and
specify that as the minimum. This will establish the connections at startup time and avoid the overhead of creating
connections on the fly unnecessarily.
» The maximum pool value should be set to cover any peak load you may experience. Initially the values can be
artificially inflated; after monitoring the number of connections open at peak times you can optimize the value.
» The total number of database connections from all pools connecting to an individual database should not exceed
the number of configured users/connections for that database. Exceeding the number of configured users can
cause database connection failures and delays in transactions.
Typically customers have indicated that a good rule of thumb to use is that at any time one third of the defined users
are active for normal traffic and two thirds are active at peak.

Note:
This is a rule of thumb and may NOT apply to the traffic patterns at your site. It is recommended to start
with an agreed value and then monitor to optimize the values as necessary.

Refer to the Server Administration Guide for your product for additional advice on this facility.
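
As an indicative sketch only, the pool dimensions appear as hibernate.ucp.* entries in the hibernate.properties
file; the property names and values below are examples and should be verified against the Server Administration
Guide for your product and version before use:

# Example only - establish 10 connections at startup and allow growth to 50 at peak
hibernate.ucp.min_size=10
hibernate.ucp.max_size=50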

XPath Memory Management


Note:
This facility is available for Oracle Utilities Application Framework V4.1 and above. For Oracle Utilities
Application Framework 4.1 install patch 12357553.

With the popularity of the Configuration Tools facility within the product for customer extension, the increased load of
XPath processing may cause memory issues under particular user transaction conditions (in particular high volume
patterns). As with most technology in the Oracle Utilities Application Framework, the XPath statements used in the
Configuration Tools are cached for improved performance. Increased load on the cache may cause memory issues
at higher volumes.

To minimize this, the Oracle Utilities Application Framework has introduced two new settings in the
spl.properties file for the Business Application Server, where the dimensions of the XPath statement cache are
defined. These settings allow the site to control the XPath cache to support caching of commonly used XPath
statements while allowing for an optimal cache size (to help prevent memory issues).

The settings are shown in the table below:



Setting Usage

com.oracle.XPath.LRUSize Maximum number of XPath queries to hold in cache across all threads. A zero (0)
value indicates no caching, a minus one (-1) value indicates unlimited and other positive values indicate the
number of queries stored in cache. The cache is managed on a Least Reused22 basis. For memory
requirements, assume approximately 7k per query. The default in the template is 2000 queries.

com.oracle.XPath.flushTimeout The time, in seconds, when the cache is automatically cleared. A zero (0) value
indicates never auto-flush the cache and a positive value indicates the number of seconds. The default in the
template is 86400 seconds (24 hours).

Note:
The templates provided with the product have these settings commented out. To use the settings
uncomment the entries in the generated configuration files.

In most cases the defaults are sufficient but they can be altered using the following guidelines:
» If there are memory issues (e.g. out of memory) then decreasing the LRUSize or decreasing the flushTimeout
may result in a reduction in memory issues. LRUSize has a greater impact on memory than flushTimeout.
» If decreasing the value of the LRUSize causes performance issues, consider changing the flushTimeout only
initially and ascertain if that works for your site.
There are no strict guidelines on the value for both parameters as cache performance is subject to the user traffic
profile and the amount and types of XPath queries executed. Experimentation will assist in determining the right mix
of both settings for your site.
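
For example, once the entries are uncommented in the generated spl.properties file for the Business Application
Server, they simply take the form of name/value pairs; the values below mirror the template defaults described
above:

com.oracle.XPath.LRUSize=2000
com.oracle.XPath.flushTimeout=86400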

Enabling Service Timings


Note:
This facility is designed for development use only and is not recommended for production use. It is
recommended to use the JMX facility for production if tracking facilities are required for production.

If the Performance Statistics JMX call is not valid for your site it is also possible to configure log4j to display service
performance information in the spl_service.log. This is possible by adding the following line to the
$SPLEBASE/splapp/businessapp/properties/log4j.properties or
$SPLEBASE/etc/conf/service/log4j.properties file:

log4j.logger.com.splwg.base.api.service.ServiceDispatcher=debug
Once this is set, the debug messages will be written to $SPLEBASE/logs/system/spl_service.log with the
message type api.service.ServiceDispatcher and a Start and End message. Both the Start and End
messages outline the service name called and the execution mode, and the End message includes the timing for
the service in ms. For example:

22 In layman's terms, older cached entries that are not reused are removed from the cache automatically to make room for more frequently used entries or new
entries.



USER1 ... 'weblogic.kernel.Default (self-tuning)'] DEBUG (api.service.ServiceDispatcher) Start 'FWLEFKRP read

USER1 - 682001-206-1 2011-12-09 16:00:37,932 [[ACTIVE] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'] DEBUG (api.service.ServiceDispatcher) End 'FWLEFKRP read, time 18.430 ms

..

Oracle WebLogic Datasource Support


Note:
The instructions below apply to multiple versions of Oracle Utilities Application Framework. Use the
appropriate instructions for the appropriate version of Oracle Utilities Application Framework used.

Note:
For background on the JDBC Datasource Support within Oracle WebLogic, refer to WebLogic Server
Data Sources.

Note:
For tuning advice for JDBC Datasources refer to Tuning Data Source Connection Pools.

By default, the online (and XAI) components of the Oracle Utilities Application Framework use Universal Connection
Pool for connection pooling. If you wish to use Oracle WebLogic JDBC based connection pooling for integration to
other products or to use the GridLink features of Oracle WebLogic, then Oracle Utilities Application Framework
needs to be configured to use Oracle WebLogic Data Sources.

The advantages of using data sources are:


» Maintenance of the pool characteristics – It is possible to maintain the pool tolerances and connection
information from the Oracle WebLogic console or Oracle Enterprise Manager. In some cases these tolerances
can be changed without an outage to the product. The figure below illustrates the typical pool characteristics
managed from Oracle WebLogic:



Figure 22 – Example Pool tolerance configuration

» Advanced Monitoring – The pool management capabilities of Oracle WebLogic includes a set of statistics that
are calculated for the pooling that can be tracked to ascertain the health of the pool. These statistics can be
obtained using JMX, via the Oracle WebLogic console or via Oracle Enterprise Manager. The figure below
illustrates the capability to display the statistics to the Oracle WebLogic console:



Figure 23 – Example Pool statistics

» GridLink Support – Using the Oracle WebLogic Data Sources allows the ability to use GridLink based data
sources that provides easier configuration for failover and RAC configuration. Refer to the Oracle WebLogic
GridLink documentation for a discussion of this feature set.
» Diagnostic analysis – The Oracle WebLogic Data Sources have advanced diagnostic capabilities to monitor and
detect database connectivity and for resource profiling. This information is available from the Oracle WebLogic
console, JMX and Oracle Enterprise Manager. Refer to the Monitoring documentation for Oracle WebLogic for a
discussion of the facilities.
The Oracle Utilities Application Framework can be configured to use Data Sources using the following process:

» Define the JDBC datasource to the product database, using the Oracle WebLogic console, with the following
attributes:
» The JNDI Name for the datasource should contain a directory name such as jdbc. For example,
jdbc/demodb. The name of the JNDI should reflect your specific requirements.
» Ensure that Global Transaction Support is disabled on the JDBC connection. This is not appropriate for this
integration.
» The XA JDBC driver may be used but the product does not take advantage of the XA feature set.
» The Instance or Service driver may be used but if there is no site preference use the Service driver.
» Unless otherwise stated, the Statement Cache Algorithm should be set to the default setting of LRU.
» The Database User and Password to use should be any database user with read/write access to the product
such as CISUSER or SPLUSER. The value can correspond to the value of the DBUSER configuration variable
in the ENVIRON.INI, if no site preference exists.

Note:
For Oracle Utilities Application Framework 4.3 and above, the manual process described below is not
required. The installation asks for the JDBC_NAME as part of the installation, which is the fully qualified
JNDI name for the data source.



» If your site is using Oracle WebLogic in embedded mode the above change will change the Oracle WebLogic
config.xml23 to add the JDBC resource. To retain changes over upgrades and patches do the following:
» For products using Oracle Utilities Application Framework V2.x, you should copy the
config.xml.template (or config.xml.win.template for Windows) in the etc directory and create a
cm version (add cm. as the name prefix) and add the following lines:

<jdbc-system-resource>
  <name>demodb</name>
  <target>myserver</target>
  <descriptor-file-name>jdbc/demodb-jdbc.xml</descriptor-file-name>
</jdbc-system-resource>
<internal-apps-deploy-on-demand-enabled>false</internal-apps-deploy-on-demand-enabled>

Figure 24 – JDBC fragment for config.xml

Note:
The values for the name, target and descriptor-file-name tags should be altered to suit your JDBC
connection. The internal-apps-deploy-on-demand-enabled tag is not required but is included for
reference purposes.

» For products using Oracle Utilities Application Framework V4.x, the above XML code should be placed
in the CM_config.xml.exit_3.include (or CM_config.xml.win.exit_3.include for Windows) file in the
templates directory.
» To use the new JDBC datasource for online the following must be performed:
» Copy the hibernate.properties.web.template to cm.hibernate.properties.web.template
within the relevant directory. For Oracle Utilities Application Framework V2.x the directory is the etc
subdirectory, and for Oracle Utilities Application Framework V4.x the directory is the template subdirectory. This
overrides the base template with the custom template.
Add the following lines to the top of the cm template file:

hibernate.connection.datasource = <jdbc_jndi>
hibernate.connection.username = <jndi_user>

hibernate.connection.password = <jndi_user_password>
where :

<jdbc_jndi>           Full JNDI name for the JDBC connection

<jndi_user>24         A user to access the JNDI (Substitution variable @WEB_WLSYSUSER@ can be used in the template).

<jndi_user_password>  The password for the user to access the JNDI (Substitution variable @WEB_WLSYSPASS@ can be used in the template).

23 A technique is to take a copy of the config.xml that is generated from the console changes to see where the changes need to be in the template.
24 This is NOT the database user and password it is the userid and password used to obtain the connection information from the JNDI. This user does
not have to be a user of the product, just have access to the server definitions of the product.



Remove the following lines from the cm template file depending on the version of the Oracle Utilities Application
Framework:

Version  What to remove

All      hibernate.connection.url entries including any ifdef statements
All      hibernate.ucp.* entries

» Change the hibernate.transaction.factory_class to
org.hibernate.transaction.JDBCTransactionFactory
» Change the hibernate.connection.provider_class to
org.hibernate.connection.DatasourceConnectionProvider

Note:
For later versions of Hibernate use the
org.hibernate.service.jdbc.connections.internal.DatasourceConnectionProviderImpl
class for hibernate.connection.provider_class.

» Save the file. For Oracle Utilities Application Framework V4.2 and above the sample cm template is shown below:
hibernate.connection.driver_class = @DBDRIVER@

hibernate.connection.datasource = jdbc/ouafdb

hibernate.connection.username = @WEB_WLSYSUSER@

hibernate.connection.password = @WEB_WLSYSPASS@

hibernate.dialect = @DIALECT@

hibernate.show_sql = false

hibernate.max_fetch_depth = 2

hibernate.transaction.factory_class = org.hibernate.transaction.JDBCTransactionFactory

hibernate.jdbc.fetch_size = 100

hibernate.jdbc.batch_size = 30

hibernate.query.factory_class=org.hibernate.hql.internal.classic.ClassicQueryTranslatorFactory

hibernate.cache.use_second_level_cache = false

hibernate.query.substitutions = true 'Y', false 'N'

hibernate.connection.provider_class=org.hibernate.connection.DatasourceConnectionProvider

hibernate.connection.release_mode=on_close

#ouaf_user_exit hibernate.properties.exit.include



#ouaf_user_exit hibernate.properties.web.exit.include
» Execute the initialSetup[.sh] command to apply the changes and new templates.

Note:
It is possible to execute the initialSetup[.sh] –t command to apply the changes. This avoids an
EAR rebuild.

» Restart the server to verify the connection is active.

Database Best Practices


The Database Server is responsible for the storage and management of data. There are a number of practices that
sites find useful for maintaining the health of the Database Server.

Regularly Calculate Database Statistics


Database statistics are important for the performance of all SQL in the product. Keeping them up to date ensures
the database has the most up to date information to make the appropriate access path decisions.

When any table in the system grows (or shrinks) by a larger than normal rate, the access paths to that table may
change causing inefficiencies. For the database to make the correct decision, it uses a set of statistics to assess all
available paths. This is an important factor in performance. It is therefore recommended that database statistics be
recalculated, using dbms_stats, on a regular basis to maintain up to date statistics.

The frequency will depend on the volume and size of your database. It is recommended that statistics on most tables
be calculated once a week at minimum unless their growth factors do not affect the path chosen by the DBMS.

Note:
CISADM is used as an example in the guidelines below. If your site uses another schema owner, then
substitute that owner in the examples below.

The following guidelines can be used to assist:

» It is possible to check whether the Last Analyzed Date on product tables is current (or not) by running the
following SQL:
SELECT table_name, last_analyzed FROM dba_tables WHERE owner = 'CISADM';
» It is possible to check whether the Last Analyzed Date on indexes is current (or not) by running the
following SQL:
SELECT index_name, last_analyzed FROM dba_indexes WHERE owner = 'CISADM';
» If the indexes are older by a week or more then consider gathering statistics on them. You can also use the SQL
below which gives the approximate number of INSERTs, UPDATEs and DELETEs for that table, as well as
whether the table has been truncated, since the last time statistics were gathered.
SELECT * FROM USER_TAB_MODIFICATIONS;

Note:
The MONITORING attribute must be set on individual objects to use this facility.



» It is recommended to gather statistics while no active purging activities are occurring on the database.
» It is recommended to use the dbms_stats package for collecting statistics (see the example following this list).
An estimate percentage of 10 percent is generally sufficient. Set the degree parameter to a higher level to enable
parallel collection of statistics. It is suggested to set the block_sample parameter to false.
» The Method option while gathering statistics on tables should be set to FOR ALL COLUMNS SIZE AUTO. This
will make sure that Oracle automatically determines which columns require histograms and the number of buckets
(size) of each histogram.
» Gathering statistics separately for indexes is generally faster than the cascade=true option while gathering
table statistics.
» It is recommended to not collect statistics on all the tables at a single batch run at a single point of time. Dividing
the tables into multiple groups and then executing statistics calculation for each group at different time frames will
minimize any disruption due to statistics calculation.
» Depending on the stability of the query performance, it is suggested that the statistics collection frequency can be
altered to maintain query performance.
A discussion on statistics calculation is also included in the Performance Troubleshooting Guideline Series (Doc
Id: 560382.1) whitepapers available from My Oracle Support.
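
As an example of the guidelines above, a scheduled statistics collection for the product schema could be scripted
along the following lines; the owner, estimate percentage and degree of parallelism are illustrative and should be
adjusted to your site standards:

BEGIN
   -- Gather statistics for the product schema owner (CISADM used as an example)
   dbms_stats.gather_schema_stats(
      ownname          => 'CISADM',
      estimate_percent => 10,                          -- 10 percent sample is generally sufficient
      method_opt       => 'FOR ALL COLUMNS SIZE AUTO', -- let Oracle decide the histograms
      degree           => 4,                           -- parallel collection
      block_sample     => FALSE,
      cascade          => FALSE);                      -- gather index statistics separately
END;
/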

Ensure I/O is spread evenly across available devices


Ensuring I/O across devices is becoming less of a problem with progressive versions of the database handling this
automatically and the introduction of intelligent SAN technology.

One of the practices that is key to the performance of a database is the elimination of hot spots in the disk
architecture by ensuring that I/O is spread across all available devices. This is known as the Database Topology.
For example, placing the database physical files on a single disk is not optimal as multiple concurrent requests
queue to use the disk and would result in higher than expected disk wait times. By spreading the load across disks,
the opportunity for wait times is minimized and increases throughput. It is therefore recommended that the disk
architecture be designed for the physical database files so that as much I/O as possible is spread across all disks.

A discussion on the database topology and its implications is outlined in Performance Troubleshooting Guideline
Series (Doc Id: 560382.1) whitepapers available from My Oracle Support.

Use the Correct NLS settings (Oracle)


One of the configuration settings that can affect the sorting and processing of date data is the language set for the
connection to the database. For Oracle customers it is recommended that the NLS settings for your region are set
correctly at installation time. Refer to the NLS documentation (Globalization Support Guide) for Oracle for details of
the valid settings for your region.

Note:
Additionally for UTF8 customers ensure that the spl.runtime.cobol.encoding in the
spl.properties file, is set correctly to display the correct character set.

Note:
To ensure sorting and processing are correct for the desired character set ensure the appropriate NLS
initialization settings are set to the correct values. For more information refer to the Globalization Support
Guide component of the Oracle Database documentation.



Monitoring database connections
In Oracle Utilities Application Framework V4.0 and above, it is possible to use a different user per access
method (online, batch, etc) but it is limited to a single user per access method. This can be limiting when trying to
track individual sessions at the database level as the connections can be difficult to distinguish.

It is possible to track individual connections using the following attributes of the v$session system view:
» CLIENT_IDENTIFIER – In Oracle Utilities Application Framework V4.0 and above, the application user
used for the duration of the transaction is now placed in the CLIENT_IDENTIFIER for the duration of the
transaction using the connection. For compatibility purposes, the short userid is placed in this column (not the
Login Id25). If the connection is idle, the column is blank.
» MODULE – In Oracle Utilities Application Framework V4.0 and above, the module that is executing and
using the database connection is now populated in the MODULE field of v$session. If the connection is idle,
the MODULE will contain the text TUGBU Idle to denote it as an idle connection used by the product. The value of
this will vary according to the object type:

Object Type          Value of MODULE

Application Service  Application Service Name is displayed. This can be translated using the CI_MD_SVC_L table where
SVC_NAME is the service name in MODULE and DESCR is the description of the service.

Business Object      Business Object Code is displayed. This can be translated using the F1_BUS_OBJ_L table where
BUS_OBJ_CD is the Business Object name in MODULE and DESCR is the description of the Business Object.

Business Service     Business Service Code is displayed. This can be translated using the F1_BUS_SVC_L table where
BUS_SVC_CD is the Business Service code in MODULE and DESCR is the description of the Business Service.

Service Script       Script Code is displayed. This can be translated using the CI_SCR_L table where SCR_CD is the script
code in MODULE and DESCR is the description of the script.

Note:
If more than one language pack is installed on the product then LANGUAGE_CD must be populated to
return the description in the desired language.

Note:
To use the MODULE feature the hibernate.connection.release_mode must be set to on_close in
the hibernate.properties file. This is the default for Oracle Utilities Application Framework V4.2
and above, earlier releases require manual changes to configuration files.

» ACTION – In Oracle Utilities Application Framework V4.2 and above, the transaction type that is
requested for the MODULE is now populated in the ACTION field of v$session. If the connection is idle, the
column is blank. The table below lists the valid action values populated:

25 Due to size limitations of relevant field (64 characters) within v$session.



Setting Contents

ADD Service is attempting to add a new instance of an object to the system

CHANGE Service is attempting changes to an existing object in the system.

DEFAULT_ITEM Service is resetting its values to defaults. For example, by pressing the Clear button on the product UI
toolbar

DELETE Service is attempting to delete an existing object

EXECUTE_BO Service is a business object and is executing (Business Objects only)

EXECUTE_BS Service is a business service and is executing (Business Services only)

EXECUTE_LIST Service is a list based service and is executing (List Services only)

EXECUTE_SEARCH Service is a search based service and is executing (Search Services only)

EXECUTE_SS Service is a service script (including BPA scripts) and is executing (BPA and Service Scripts only)

READ Service is attempting to retrieve an object from the system

READ_SYSTEM Service is a common Oracle Utilities Application Framework based service that is executing

VALIDATE Service is issuing a validation action

» CLIENT_INFO - In Oracle Utilities Application Framework V4.2 and above, the contents of the Database
Tag characteristic type (up to 6426 characters) on the individual user record is now populated in the CLIENT_INFO
field of v$session. If the connection is idle, the column is blank. A value of up to 64 characters can be used. If
the database tag is not used this value is blank. For example:

Figure 25 – Example Database Tag characteristic

An example27 of this monitoring information from v$session, which shows the connection from SYSUSER
executing a read transaction on the User object (CILTUSEP), is shown below:

Figure 26 – Example v$session monitoring information

26 Refer to the v$session view parameters for the length for the version of Oracle Database used.
27 Example does not include Database Tag, which is not set by default.



Information in these columns is only populated when an active transaction is actually executing against the
database. When a connection is idle, this status is indicated in the MODULE field.
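
For example, a simple query similar to the following can be used by appropriately authorized personnel to see which
services and transaction types are currently active; the database user name in the predicate is illustrative and should
match the users configured for your site:

SELECT sid, client_identifier, module, action, client_info
  FROM v$session
 WHERE username = 'CISUSER'   -- product database user, adjust for your site
 ORDER BY sid;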

Consider changing Bit Map Tree parameter


Some sites have reported that in Oracle Database 10g and above, the default for the hidden oracle parameter
_b_tree_bitmap_plans changed from false to true.

Setting the parameter to true enables bitmap plans to be generated for tables with only B-Tree indexes. The Cost
Based Optimizer can choose to use bitmap access paths without the existence of bitmap indexes and, in order to do
so, it uses BITMAP CONVERSION FROM ROWIDS and BITMAP CONVERSION TO ROWIDS operations. Those
operations are CPU intensive. If a query in the product for which those operations are performed selects a small
number of rows, then there should not be much of an impact. However, if those queries select a large number of
rows, there may be a negative impact on performance. If you are facing any such issues, this parameter should be
explicitly set to false at the database level.
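
If your site decides to disable the parameter, it can be set at the database level with a statement similar to the one
below. As it is a hidden parameter it must be enclosed in double quotes, the example assumes an spfile is in use,
and any change should be verified with Oracle Support and tested in non-production first:

ALTER SYSTEM SET "_b_tree_bitmap_plans" = FALSE SCOPE = BOTH;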

OraGenSec command line Parameters


Most sites use the OraGenSec utility in interactive mode but there are command line options that can be used for
silent installation. The command line is as follows:

OraGenSec -d <Owner,OwnerPswd,DbName> -u <Database Users> -r <ReadRole,UserRole> -l <logfile> -h

Where:

-d <Owner,OwnerPswd,DbName>  Database connect information for the target database. e.g. spladm,spladm,DB200ODB

-u <Database Users>          A comma-separated list of database users where synonyms need to be created. e.g. spluser,splread

-r <ReadRole,UserRole>       Optional. Names of database roles with read and read-write privileges. Default roles are SPL_READ, SPL_USER. e.g. spl_read,spl_user

-l <logfile>                 Optional. Name of the log file.

-h                           Help

This command line can be used in site specific DBA scripts or as a standalone command line. Executing the utility
without any options starts interactive mode.
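
For example, a silent execution using the illustrative values from the table above might look like the following;
substitute the schema credentials, users, roles and log file name for your site:

OraGenSec -d spladm,spladm,DB200ODB -u spluser,splread -r SPL_READ,SPL_USER -l oragensec.log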

Building the Data Model


One of the common questions regarding the product is the availability of the total data model in a particular tool
(such as Oracle Data Modeler or similar). The product contains a large number of tables and it is generally
impractical to display a full model legibly due to its size.

There are a number of sources of information that can replace a full data model and present the data model
information in bite sized chunks:
» The data model information is contained in the Data Dictionary component of the Application Viewer.



» The Conversion documentation, available in Microsoft Word as well as online help, contains a summary set of
data models that outline the major entities in the product.

Note:
Not all Oracle Utilities Application Framework products include a conversion capability.

» Each of the Business Process manuals for the product outlines the functionality and contains data models
specifically for that component.

Why is there no referential integrity built into the database?


Typically referential integrity of a database is managed by the database itself. In the product this is not so, as the
Maintenance Objects contain ALL the business logic including referential integrity. The reasons for this are varied:

» From a maintenance cost point of view, all the code is in one place. This reduces maintenance effort.
» Databases implement all or nothing referential integrity. This means that referential integrity is checked whether
the data has changed or not. From a performance point of view this is potentially wasting time. The Maintenance
Objects in product decide when to enforce referential integrity rules.
» Most of the referential rules in product are optional. If there is a value in the foreign key field it is checked, if there
is no value (blanks, zero or nulls) then the referential integrity is not checked unless it is a mandatory column. This
is not possible in database imposed referential integrity.
» If the database controlled referential integrity then the application has no control on when it is imposed in the
course of a transaction. Maintenance Object controlled referential integrity allows finer levels of control on when
referential integrity is enforced in the transaction flow.
» Each database implements referential integrity in a slightly different way. To reduce maintenance costs, code
differences are kept to a minimum.
» Maintenance Object enforced referential integrity is more efficient as far as product is concerned and translates to
superior performance across many database types.

Building the Data Model


All is not lost though. The Oracle Utilities Application Framework maintains its own data dictionary in the form of
meta-data that is used by the Oracle Utilities Software Development Kit.

If you insist that you want the data model in a tool or adorning a large wall then the following is the recommended
process to generate the data model using the meta-data:

» Export the CISADM schema (with no data) as a backup using the database export utility or use SQL Developer
to clone the schema to a ghost schema.
» Create constraints from the meta-data structure. The two Oracle PL/SQL scripts below can be used to achieve this.
The names of the constraints are already documented in the meta-data as well. Run the scripts and create the
constraints in the database.

Function to join
create or replace function join

(
p_cursor sys_refcursor,

p_del varchar2 := ','

) return varchar2
is



l_value varchar2(32767);

l_result varchar2(32767);
begin

loop

fetch p_cursor into l_value;


exit when p_cursor%notfound;

if l_result is not null then

l_result := l_result || p_del;


end if;

l_result := l_result || l_value;

end loop;
return l_result;

end join;

/
show errors;
Script to Create Constraints
SET serverout ON size 1000000

SET echo OFF


SET feedback OFF

SET linesize 300

spool constraints.sql
DECLARE

CURSOR c1

IS
SELECT tbl_name,

CONST_ID,

REF_CONST_ID,
table_name

FROM ci_MD_CONST,

user_indexes



WHERE CONST_TYPE_FLG='FK'

AND TRIM(index_name)=SUBSTR(REF_CONST_ID,5,7) AND


LENGTH(TRIM(REF_CONST_ID))>8

AND TRIM(tbl_name) IN (SELECT TRIM(table_name) FROM user_tables)

UNION ALL
SELECT tbl_name,

CONST_ID,

REF_CONST_ID,
table_name

FROM ci_MD_CONST,

user_indexes
WHERE CONST_TYPE_FLG='FK'

AND TRIM(index_name)=TRIM(REF_CONST_ID) AND


LENGTH(TRIM(REF_CONST_ID))<=8
AND TRIM(tbl_name) IN (SELECT TRIM(table_name) FROM user_tables)

UNION ALL

SELECT UNIQUE tbl_name,


CONST_ID,

REF_CONST_ID,

table_name
FROM ci_MD_CONST,

user_indexes

WHERE CONST_TYPE_FLG='FK'
AND TRIM(index_name)=TRIM(REF_CONST_ID)

AND TRIM(tbl_name) IN (SELECT TRIM(table_name) FROM user_tables)

ORDER BY 1;
---

stmt VARCHAR2(400);

field_list VARCHAR2(300);
BEGIN

FOR r1 IN c1



LOOP

stmt := 'alter table ' || trim(r1.tbl_name) || ' add constraint ' ||


trim(r1.const_id);

dbms_output.put_line(stmt);

SELECT
JOIN(CURSOR

(SELECT trim(fld_name)

FROM ci_md_const_fld
WHERE const_id = r1.const_id

ORDER BY seq_num

))
INTO field_list

FROM dual;

stmt := 'foreign key (' || field_list || ')';


dbms_output.put_line(stmt);

SELECT

JOIN(CURSOR
(SELECT trim(fld_name)

FROM ci_md_const_fld

WHERE const_id = r1.ref_const_id


ORDER BY seq_num

))

INTO field_list
FROM dual;

stmt := 'references ' || trim(r1.table_name) || ' (' || field_list ||


');';
dbms_output.put_line(stmt);

END LOOP;

END;
/

spool OFF;



EXIT;
» Run the constraints.sql file created in the previous step to create the RI using the ghost schema owner.
» Load the data model, with the constraints, into the tool of your choice. This should build the data model.

Note:
This may take a while for the WHOLE data model.

» Remove the newly created constraints. This is to return the database back to the original condition.
set serverout on size 1000000

set echo off


set feedback off

set linesize 300

spool drop_constraints.sql
select 'ALTER TABLE ' || tbl_name || ' drop constraint ' || CONST_ID || ';'
from ci_MD_CONST where CONST_TYPE_FLG='FK' order by tbl_name, CONST_ID;

spool off;
@drop_constraints.sql

exit;
Reload the database. You then have the data model in your tool and the database returned to its original state.

Configuring Real Application Cluster Support


In Oracle Utilities Application Framework V4.0 and above, native support for Real Application Clusters (RAC)
was added. This means that the database component of the product can be installed in a RAC setup and
the product configured to utilize the facilities of RAC (including Fast Connection Failover)28.

To configure RAC Support for the Oracle Utilities Application Framework configuration:

» Ensure that the database component of the product has been setup using Real Application Clustering to your site
standards with at least one node (RAC One Node can also be used if required).
» Configure the location of the ons.jar file to be used for the product installation in the ONS JAR Directory menu
option. If you have installed Oracle on a server other than the one the product is installed upon, then it is
recommended to copy the ons.jar file to an accessible location on the server containing the product. By default
the ons.jar file is located in the ons directory under ORACLE_HOME.

Note:
The administration user used for the product MUST have read permission at least to this file.

In the Database Configuration section of the configureEnv[.sh] utility specify the following:
» For the Database Server and Database Port specify any value (as they will be ignored).

28 In Oracle Utilities Application Framework V4.1 and above, the Universal Connection Pool (UCP) was used for database connectivity. Configuration of
FCF should take this into account (as per the indicated documentation).



» For the ONS Server Configuration menu option (ONSCONFIG) specify the list of servers and ONS port numbers
for the RAC configuration, with each node delimited by a "," and the individual node specification in the format
<host>:<port> where <host> is the ONS host name for the node and <port> is the ONS port number
assigned to the host. For example:
host1.utility.com:6250,host2.utility.com:6250
» For the Database Override Connection (DB_OVERRIDE_CONNECTION) string specify the RAC format JDBC
connection string (full URL or TNSAlias29 can be used). This will provide the override. For example:
jdbc:oracle:thin:@(DESCRIPTION=(LOAD_BALANCE=ON)(ADDRESS=(PROTOCOL=TCP)(HOST=host1.utility.com)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=host2.utility.com)(PORT=1521))(CONNECT_DATA=(service_name=MYRAC)))
» The configuration of the product is complete after the configuration is saved and applied using the
initialSetup[.sh] utility.

Using Database Compression


One of the features of the database is the ability to compress data. The compression facilities in Oracle have a
number of attractive features:

» Compression applies to disk as well as memory (in the database buffer cache). This saves both storage costs and
means that more data can be loaded into memory. Whilst there is a CPU overhead with compression, due to the
compression and decompression activities, the memory processing savings may cancel out any overhead to yield
performance improvements.
» Compressed data can co-exist with uncompressed data. For example, compression can be enabled so that any
new data is compressed automatically30.
Oracle offers various levels of compression.
» Basic – Each edition includes a basic compression algorithm.
» Advanced Compression – Advanced Compression is an option on the Enterprise Edition of Oracle and offers
higher levels of compression as well as compression optimized for various activities such as OLTP.
» Hybrid Columnar Compression – This is the newest and most advanced compression algorithm that offers the
highest level of compression and compression optimizations. At the present time this is offered within Oracle
ExaData with hardware assisted compression, to reduce compression CPU overheads.
Compression can be used with Oracle Utilities Application Framework products, with all the options described
above, as it is transparent to the underlying product code. Guidelines for compression are available from the
Database Administration Guide. Implementation advice is available in the Advanced Compression with Oracle
11g whitepaper.
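
As an illustrative sketch only (the table name is hypothetical and Advanced Compression licensing applies),
enabling OLTP compression on an existing product table so that newly inserted rows are compressed could look
like the following:

-- New rows inserted after this statement are compressed; existing rows are only
-- compressed if the table is rebuilt (for example, ALTER TABLE ... MOVE)
ALTER TABLE cisadm.my_product_table COMPRESS FOR OLTP;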

Monitoring Best Practices


The Oracle Utilities Application Framework includes a wide range of product and infrastructure monitoring
capabilities to allow sites to manage and monitor the implementations.

Product Monitoring Capabilities

29 To use TNSAlias the Oracle Client (or Oracle Database) must be installed on the application server machine to provide the TNS infrastructure and
this installation must be specified in the ORACLE_HOME parameter for the installation.
30 To compress the whole table in this example, it would have to be exported, truncated and reloaded to compress all records.



The Oracle Utilities Application Framework is housed in the J2EE architecture and supports both product specific
monitoring and monitoring the infrastructure itself. The following table lists the monitoring capabilities as well as a
summary of the metrics that can be monitored:

Capability Usage Tiers Example Metrics

Log Monitoring Error message monitoring All Error messages

Access Log HTTP session monitoring Web Transaction usage, session analysis, error rates,
bandwidth usage and click stream analysis.

Security Logs Security Auditing information Web Authentication attempts, lockouts, etc

Oracle WebLogic JMX Monitoring metrics from WebLogic metrics Web, IWS Active Sessions, Request Processing Time,
palette Requests per minute, Pending Requests, Stuck
Threads, Application Status, Certificate Expiry,
JDBC Open Connections, JDBC Free Connections,
JDBC Connection Waivers, JDBC Connections
Closed, JDBC Cache Statements Used, JDBC
Connection Leaks, JDBC Connection Pool Size,
JDBC Connection Request Failures, JDBC
Requests That Waited, JDBC Connections Wait
Success, JDBC Successful Connections (%), Data
Source State, MDB Messages per minute, JMS
Connections, JMS Messages Pending (MDB), JMS
Current Messages, Request Processing Time by
Servlet/JSP, Response Time by Servlet/JSP

JVM JMX Metrics available from any JVM via All CPU Usage, Active Threads, Free Heap, heap
java.lang.management API Size, Nursery Size, Garbage Collector Invocations
per minute, Garbage Collector Invocation Time,
Garbage Collector Execution Time, Garbage
Collector Old Heap Percent Free, Garbage
Collector Percent Time Spent.

Oracle WebLogic Web Web Services and Web Services Manager IWS Execution Time, Invocation Time, Response Count,
Services Metrics specific Metrics (by operation) Response Error Count, Response Time.

Online JMX Business Application Server JMX capability Web, IWS Read Count, Delete Count, Change Count, Add
Count, Default Item Count, Execute BO Count,
Execute BS Count, Execute List Count, Execute
Search Count, Read System Count, Validate
Count, Execute SS Count, Minimum Elapsed Time
per service per transaction type, Maximum Elapsed
Time per service per transaction type, Average
Elapsed Time per service per transaction type.

Batch JMX Batch Cluster JMX capability Batch Thread Count, Member List, Batch Elapsed Time
per thread, Batch Throughput per thread, Number
Processed per thread, Error Number per thread

Operating System Basic operating system metrics All CPU, Memory, Run Queue Length, Disk metrics

Database Basic database monitoring Database Refer to the Database Metrics Manual for full list.

Note:
These metrics are discussed in the Server Administration Guide as well as the Performance Troubleshooting
Guideline Series (Doc Id: 560382.1) available from My Oracle Support.

Using Oracle Enterprise Manager for Monitoring


Whilst any JMX capable console or monitoring tool can be used for monitoring, it is recommended to use the Oracle
Enterprise Manager Cloud Control product with the Application Management Pack for Oracle Utilities to monitor
Oracle Utilities Application Framework products. These have the following advantages:
» Metrics Palettes – The technology outlined above is expressed in a series of metrics palettes that automatically
connect to the underlying technology to expose metrics. These palettes can be used across many of the different
monitoring capabilities in the Oracle Enterprise Cloud Control console.
» Dashboard Support – It is possible to specify which metrics are to be tracked based upon each individual's job
responsibilities. These can then be expressed either in the format of a graph (with or without thresholds overlaid)
or as a list of values.
» Advanced Threshold Management – Support for different thresholds for different time periods against any
metric.
» Metrics Extensions – Combining or customizing an individual metric collection for advanced analysis.
» Alerting – Using thresholds and SLA definitions against a particular metric for generating alerts or incidents.

Note:
Refer to Oracle Application Management Pack for Oracle Utilities Overview (Doc Id: 1474435.1) available
from My Oracle Support for a summary of the functionality available.

Appendix
This section primarily outlines advice for customers who are using versions of Oracle Utilities Application Framework
supporting COBOL runtimes. In Oracle Utilities Application Framework V4.3.x and above, COBOL is no longer
supported as a development or runtime language.

Note:
Not all products support COBOL based extensions; therefore this appendix may not apply. Check with your
installation guide for more details.

Checking COBOL Installation


By default, when the COBOL runtime is installed a license file is required to complete the installation as outlined in
the Quick Installation Guide supplied with the product. The license can be tracked using the process outlined in the
Installation Guide or the following command:

cobsje -J $JAVA_HOME

Note:
This command should be executed AFTER executing the splenviron[.sh] utility to initialize the environment variables used by the utilities and place the COBOL runtime in the PATH.
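
For example, a minimal sketch of this sequence on Linux/UNIX is shown below; the environment name (DEV01) is illustrative only and splenviron.sh is assumed to be accessible from the current session:

# Initialize the environment variables for the target environment (name is illustrative)
splenviron.sh -e DEV01

# Verify the COBOL runtime and license against the configured Java installation
cobsje -J $JAVA_HOME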

If the license is NOT installed the response should be similar to the text below:



Error - No license key detected. Application Server requires a license
key in order to execute.
Please refer to your application supplier.

This message indicates that there is an issue with the license key on the server. If this message appears, it is recommended to remedy the situation by re-installing the COBOL runtime and re-initializing the license key using apptrack as per the Installation Guide for the product.

If the license key is installed correctly the cobsje utility will return a message similar to the following:

#> cobsje -J $JAVA_HOME


Java version = 1.7.0_75
Java vendor = Oracle Corporation
Java OS name = Linux
Java OS arch = amd64
Java OS version = 3.8.13-16.2.1.el6uek.x86_64
Additionally, the 64 bit version of COBOL is required for 64 bit platforms as indicated in the Installation Guide for the product. To verify that the COBOL runtime is 64 bit use the following command:

cob -v

This should return output similar to the following:

cob64 -C nolist -v
I see no work

The cob64 prefix indicates the use of 64 bit COBOL.

COBOL License Errors in Batch


Note:
Not all products support COBOL based extensions; therefore this section may not apply. Check with your
installation guide for more details.

If the product has COBOL based background processes and the COBOL license is not installed correctly (see
Checking COBOL Installation for more details) then an error message similar to the example below will be
displayed:

…cobjrun64: com.splwg.base.api.batch.ThreadPoolWorker.main ended due to an exception

Exception in thread "main" com.splwg.shared.common.LoggedException:
The following stacked messages were reported as the LoggedException was rethrown:
com.splwg.base.support.context.ContextFactory.createDefaultContext(ContextFactory.java:569): error initializing test context



To resolve this issue refer to the instructions in the Quick Installation Guide about installing the COBOL license.

Writing Files Greater than 4GB


Note:
This advice applies to products that use the COBOL support contained within the Oracle Utilities
Application Framework. 64 Bit java based code automatically supports files greater than 4GB.

Note:
This change should not be attempted if the interface using the file is 32 bit as this only applies to 64 bit
COBOL on a 64 Bit operating system.

By default, any 64 bit COBOL based product extract process will create a file up to a 4GB limit. In the unlikely event that the extract process needs to create a file bigger than 4GB there is a way of instructing the COBOL runtime to support larger files.

You must create a text based extension configuration file (say cmextfh.cfg) with the following contents:

[XFH-DEFAULT]

FILEMAXSIZE=8

IDXFORMAT=8
You then place this configuration file in a location that can be referred to by the runtime. You can either deposit the file in $SPLEBASE/scripts (or %SPLEBASE%\scripts) or in a site specific central location. To enable support for larger formats you initialize the EXTFH environment variable with the location of the configuration file. For example:

set EXTFH=D:\oracle\TUGBU\scripts\cmextfh.cfg (for Windows)

export EXTFH=/oracle/TUGBU/scripts/cmextfh.cfg (for Linux/UNIX)

This can be done in your .profile (for Linux/UNIX) or using the facilities outlined in Custom Environment Variables or
JAR files.
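
As a minimal sketch of the Linux/UNIX approach, using the example location shown above (substitute the directory actually used at your site):

# Append the EXTFH setting to the product owner's .profile so it applies to future sessions
echo 'export EXTFH=/oracle/TUGBU/scripts/cmextfh.cfg' >> $HOME/.profile

# Apply the setting to the current session as well
export EXTFH=/oracle/TUGBU/scripts/cmextfh.cfg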

For additional details and parameters refer to My Oracle Support Doc Id: 817617.1.

Number of Child JVMS


By default, there are two (2) COBOL based Child JVM's spawned by the product for each of the online, XAIApp and
the background processing components of the product. This is the minimum recommended for availability and
performance of the product in normal conditions.

It is worth considering more instances of the Child JVM's if any of the following situations occur:
» The site has a large number of users (>800) who use a large proportion of the product over the business day. In this case there are many potential calls to COBOL modules by different users and, to avoid out of memory conditions, it is important to have more Child JVM's available. This situation can also be mitigated by the presence of more than one Web Application Server, as each Web Application Server has its own Child JVM's.
» The product functionality used at the site covers a majority of the product. In this case the number of unique COBOL modules that may be called may be more than expected and extra Child JVM's may be required to avoid out of memory situations.

The default number of Child JVM's is sufficient for most non-production situations. Refer to the Production Environment Configuration Guidelines for production level settings.
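
The number of Child JVM's is configured in the relevant spl.properties file. The fragment below is a sketch only; the parameter name (spl.runtime.cobol.remote.jvmcount) and the value shown are assumptions that should be confirmed against the Server Administration Guide for your release:

# Illustrative setting: spawn three Child JVM's for this component instead of the default two
spl.runtime.cobol.remote.jvmcount=3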

COBOL Memory management


The Child Java Virtual Machines (JVMs) used to provide the Java to COBOL interface require a number of key memory management features unique to the Oracle Utilities Application Framework. Typically COBOL (and other languages) runs natively on an operating system. In the case of the COBOL used by an Oracle Utilities Application Framework based product, it runs with a runtime set of libraries provided by Microfocus COBOL. The Oracle Utilities Application Framework wraps this COBOL in a JVM to facilitate the Java to COBOL interface. Unfortunately, as COBOL typically assumes it is running natively, when running within a JVM the control of the COBOL process falls to the Oracle Utilities Application Framework, which has limited control of the underlying processes.

The COBOL processes (expressed as shared libraries and executables on the operating system) typically are
attached to the JVM when they are first executed and remain attached as long as the JVM is executing for reuse.
This has an unfortunate consequence in that the thread bound memory used by those COBOL objects cannot be
released until the parent process (in this case the JVM) has stopped executing (i.e. dies). This thread-bound
memory is primarily memory allocated by the Microfocus runtime on the C heap. As threads return to the thread
pool and are used again to process calls to different COBOL objects, the memory footprint may continue to grow as
different COBOL objects are called. Over time it may be the case that each thread allocates memory for the
complete set of objects. If not managed correctly this situation can lead to out of memory conditions.

As the Child JVM has limited control over individual objects, a number of key elements have been added to the Oracle Utilities Application Framework (that require configuration) to optimize memory management of the Child JVMs:
» Load is balanced across the available Child JVM's allocated to the product using a round robin technique to reduce the impact of memory increases.
» Child JVM's reuse existing loaded modules as much as possible. An individual module that has been called is only attached once per Child JVM at any given time.
» An installation parameter in the Environment Configuration called Release Cobol Thread Memory controls this behavior. This value should be set to true. This can be overridden for each mode of access (online, batch and XAI) by specifying the spl.runtime.cobol.remote.releaseThreadMemoryAfterEachCall parameter in the spl.properties file.

Note:
Refer to the Batch Best Practices (Doc Id: 836362.1) whitepaper available from My Oracle Support for advice pertaining to the optimal setting of this parameter for background processes.

To reclaim memory of the COBOL objects, the Child JVM must be shunned (stopped and restarted) on a regular
basis. This is known as brute force memory management. The Oracle Utilities Application Framework allows control
of this in the relevant spl.properties file by setting the following parameters:

Parameter                                      Comments
spl.runtime.cobol.remote.jvmMaxLifetimeSecs    Number of seconds between automated shunning of the Child JVM
spl.runtime.cobol.remote.jvmMaxRequests        Number of COBOL calls between automated shunning of the Child JVM
As soon as either tolerance is met the Child JVM is shunned automatically. This does not necessarily occur straightaway as the framework waits for any outstanding work in the individual Child JVM to complete. As the product uses more than one Child JVM, availability is not compromised because at least one Child JVM is active at any time.

The default values for these parameters are sufficient for most sites. Refer to the Server Administration Guide
supplied with the product for the default values and additional advice on this facility.
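
For reference, the following is a sketch of how these settings might appear together in the relevant spl.properties file; the values shown are illustrative only, not recommendations:

# Release thread bound COBOL memory after each call (can be set per mode of access)
spl.runtime.cobol.remote.releaseThreadMemoryAfterEachCall=true

# Shun each Child JVM after one hour or 20000 COBOL calls, whichever is reached first (example values)
spl.runtime.cobol.remote.jvmMaxLifetimeSecs=3600
spl.runtime.cobol.remote.jvmMaxRequests=20000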

With the above facilities the COBOL memory within the Child JVM can be managed by the Oracle Utilities
Application Framework to help avoid memory issues.

Killing Stuck Child JVM's


Note:
To use this facility in Oracle Utilities Application Framework V4.1, the Group Fix 4 (as Patch 13640668)
must be installed.

In some situations, the Child JVM's may spin. This causes multiple startup/shutdown Child JVM messages to be displayed and recursive Child JVM's to be initiated and shunned. In this case a message similar to the following is displayed:

Unable to establish connection on port …. after waiting .. seconds.


The issue can be caused intermittently by CPU spins in connection with the creation of new processes, specifically Child JVMs. Recursive (or double) invocation of the System.exit call in the remote JVM may be caused by a Process.destroy call that the parent JVM always issues when shunning a JVM. The issue may happen when the thread in the parent JVM that is responsible for the recycling gets stuck, which affects all child JVMs.

If this issue occurs at your site then there are a number of options to address the issue:

» Configure an Operating System level kill command to force the Child JVM to be shunned when it becomes stuck.
» Configure a Process.destroy command to be used if the kill command is not configured or desired.
» Specify a time tolerance to detect stuck threads before issuing the Process.destroy or kill commands.

Note:
This facility is also used when the Parent JVM is also shutdown to ensure no zombie Child JVM's exit.

The following additional settings must be added to the spl.properties for the Business Application Server to use
this facility:
» spl.runtime.cobol.remote.kill.command – Specify the command to kill the Child JVM process. This can be a command or a script to execute to provide additional information. The kill command property can accept two arguments, {pid} and {jvmNumber}, in the specified string. The arguments must be enclosed in curly braces as shown here.

Note:
The PID will be appended to the kill command string, unless the {pid} and {jvmNumber} arguments are specified. The jvmNumber can be useful if passed to a script for logging purposes.



Note:
If a script is used it must be in the path and be executable by the OS user running the system.

» spl.runtime.cobol.remote.destroy.enabled – Specify whether to use the Process.destroy command instead of the kill command. Specify true or false. Default value is false.

Note:
It is recommended to use the kill command option if shunning JVM's is an issue; therefore this value can remain at its default value, false, unless otherwise required.

» spl.runtime.cobol.remote.kill.delaysecs – Specify the number of seconds to wait for the Child JVM
to terminate naturally before issuing the Process.destroy or kill commands. Default is 10 seconds.
For example:

spl.runtime.cobol.remote.kill.command=kill -9 {pid} {jvmNumber}

spl.runtime.cobol.remote.destroy.enabled=false

spl.runtime.cobol.remote.kill.delaysecs=10
» When a Child JVM is to be recycled, these properties are inspected and the spl.runtime.cobol.remote.kill.command executed if provided. This is done after waiting for spl.runtime.cobol.remote.kill.delaysecs seconds to give the JVM time to shut itself down. The spl.runtime.cobol.remote.destroy.enabled property must be set to true AND the spl.runtime.cobol.remote.kill.command omitted for the old Process.destroy command to be used on the process.

Note:
By default spl.runtime.cobol.remote.destroy.enabled is set to false and is therefore disabled.

» If neither spl.runtime.cobol.remote.kill.command nor spl.runtime.cobol.remote.destroy.enabled is specified, child JVMs will not be forcibly killed. They will be left to shut themselves down (which may lead to orphan JVMs). If both are specified, the spl.runtime.cobol.remote.kill.command is preferred and spl.runtime.cobol.remote.destroy.enabled defaults to false.
» It is recommended to invoke a script to issue the kill command instead of using kill -9 directly.
For example, the following sample script (forcequit.sh) ensures that the process id is an active cobjrun process before issuing the kill command:
#!/bin/sh
# Sample script to kill a stuck Child JVM only if the supplied process id is an active cobjrun process.
THETIME=`date +"%Y-%m-%d %H:%M:%S"`

# A process id must be supplied as the first argument.
if [ "$1" = "" ]
then
  echo "$THETIME: Process Id is required" >> $SPLSYSTEMLOGS/forcequit.log
  exit 1
fi

javaexec=cobjrun

# Check that the process is an active cobjrun process before killing it.
ps e $1 | grep $javaexec > /dev/null 2>&1
if [ $? -eq 0 ]
then
  echo "$THETIME: Process $1 is an active $javaexec process -- issuing kill -9 $1" >> $SPLSYSTEMLOGS/forcequit.log
  kill -9 $1
  exit 0
else
  echo "$THETIME: Process id $1 is not a $javaexec process or not active -- kill will not be issued" >> $SPLSYSTEMLOGS/forcequit.log
  exit 1
fi

Note:
The above script is a sample only.

» This script's name would then be specified as the value for the spl.runtime.cobol.remote.kill.command
property, for example:
spl.runtime.cobol.remote.kill.command=forcequit.sh
» The forcequit script does not have any explicit parameters but {pid} is passed automatically.
» To use the jvmNumber parameter it must be explicitly specified in the command. For example, to call script forcequit.sh and pass it the {pid} and the child JVM number ({jvmNumber}), specify it as follows:
spl.runtime.cobol.remote.kill.command=forcequit.sh {pid} {jvmNumber}
» The script can then use the JVM number for logging purposes or to further ensure that the correct pid is
being killed.
» If the arguments are omitted, the {pid} is automatically appended to the
spl.runtime.cobol.remote.kill.command string.
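
Before configuring the script it can be exercised manually to confirm its behavior. The process id (12345) and child JVM number (2) below are illustrative only:

# Make the sample script executable and run it against an illustrative process id and JVM number
chmod +x forcequit.sh
./forcequit.sh 12345 2

# Review the outcome recorded by the script
tail -1 $SPLSYSTEMLOGS/forcequit.log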

Oracle Corporation, World Headquarters
500 Oracle Parkway
Redwood Shores, CA 94065, USA

Worldwide Inquiries
Phone: +1.650.506.7000
Fax: +1.650.506.7200

CONNECT WITH US
blogs.oracle.com/theshortenspot
facebook.com/oracle
twitter.com/theshortenspot
oracle.com

Copyright © 2007-2017, Oracle and/or its affiliates. All rights reserved. This document is provided for information purposes only, and the contents hereof are subject to change without notice. This document is not warranted to be error-free, nor subject to any other warranties or conditions, whether expressed orally or implied in law, including implied warranties and conditions of merchantability or fitness for a particular purpose. We specifically disclaim any liability with respect to this document, and no contractual obligations are formed either directly or indirectly by this document. This document may not be reproduced or transmitted in any form or by any means, electronic or mechanical, for any purpose, without our prior written permission.

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. AMD, Opteron, the AMD logo, and the AMD Opteron logo are trademarks or registered trademarks of Advanced Micro Devices. UNIX is a registered trademark of The Open Group. 1216
