Introduction
Caveat
Introduction
Architectural Changes
Environment Practices
Disabling protocols
System Wide
Transaction Timeouts
Configuration of timeouts
Implementing guidelines
Security Configuration
Overload Protection
Resource Management
Compression
Database Clustering
Popup Blockers
Network bandwidth
Load balancers
Preload or Not?
Clustering or Managed?
Corrupted SPLApp.war
CLIENT-CERT Support
Function to join
Appendix
Note:
For publishing purposes, the word product will be used to indicate all Oracle Utilities Application
Framework based products.
Note:
This whitepaper has been updated to include advice for all versions of Oracle Utilities Application
Framework V4.x. This document does not include any advice for Oracle Utilities Application Framework
V2.x.
Caveat
While all care has been taken in providing this information, implementation of the practices outlined in this document
may NOT guarantee the same level of (or any) improvement. Not all practices outlined in this document will be
appropriate for your site. It is recommended that each practice be examined in light of your particular organizational
policies and use of the product. If the practice is deemed beneficial to your site, then consider implementing it. If the
practice is not appropriate (e.g. for cost and other reasons), then it should not be considered.
Advice or instructions marked with this icon apply to Oracle Utilities Application Framework
V4.0 based products and above.
Advice or instructions marked with this icon apply to Oracle Utilities Application Framework
V4.1 based products and above.
Advice or instructions marked with this icon apply to Oracle Utilities Application Framework
V4.2.0.0.0 based products and above.
Advice or instructions marked with this icon apply to Oracle Utilities Application Framework
V4.3.0.0.0 based products and above.
Note:
In some sections of this document the environment variable $SPLEBASE (or %SPLEBASE% on Windows) is used. This
denotes the root location of the product install. Substitute the appropriate value for the environment used at
your site.
Introduction
Implementation of the product at any site introduces new practices into the IT group to maintain the health of the
system and provide the expected service levels demanded by the business. While configuration of the product is
important to the success of the implementation (and subsequent maintenance), adopting new practices can help
ensure that the system will operate within acceptable tolerances and support the business goals.
This white paper outlines common practices, implemented at sites around the globe, that have proven beneficial to
those sites. They are documented here so that other sites may consider adopting similar practices and potentially
deriving benefit from them as well.
The recommendations in this document are based upon experiences from various sites and internal studies, which
have benefited from implementing the practices outlined in the document.
When Oracle Utilities Customer Care & Billing was migrated from V1 to V2, it was decided that the technical aspects
of that product be separated to allow for reuse and independence from technical issues. The idea was that all the
technical aspects would be concentrated in this separate product (i.e. a framework) and allow all products using the
framework to concentrate on delivering superior functionality. The product was named the Oracle Utilities
Application Framework (oufw is the product code). The framework was then used across existing and new products
to support a common technology platform across Oracle Utilities.
The technical components are contained in the Oracle Utilities Application Framework which can be summarized as
follows:
» Metadata – The Oracle Utilities Application Framework is responsible for defining and using the metadata to
define the runtime behavior of the product. All the metadata definition and management is contained within the
Oracle Utilities Application Framework.
» UI Management – The Oracle Utilities Application Framework is responsible for defining and rendering the pages
and responsible for ensuring the pages are in the appropriate format for the locale.
» Integration – The Oracle Utilities Application Framework is responsible for providing the integration points to the
architecture. Refer to the Oracle Utilities Application Framework Integration Overview (Doc Id: 789060.1)
whitepaper available from My Oracle Support for more details.
There are a number of products from the Utilities Global Business Unit as well as from the Public Services Unit and
Financial Services Global Business Unit that are built upon the Oracle Utilities Application Framework. These
products require the Oracle Utilities Application Framework to be installed first and then the product itself installed
onto the framework to complete the installation process.
There are a number of key benefits that the Oracle Utilities Application Framework provides to these products:
» Common facilities – The Oracle Utilities Application Framework provides a standard set of technical facilities
so that products can concentrate on the unique aspects of their markets rather than making technical
decisions.
» Common methods of configuration – The Oracle Utilities Application Framework standardizes the technical
configuration process for a product. Customers can effectively reuse the configuration process across products.
» Common methods of implementation - The Oracle Utilities Application Framework standardizes the technical
aspects of a product implementation. Customers can effectively reuse the technical implementation process
across products.
» Quicker adoption of new technologies – As new technologies and standards are identified as being important
for the product line, they can be integrated centrally benefiting multiple products.
» Multi-lingual and Multi-platform - The Oracle Utilities Application Framework allows the products to be offered
in more markets and across multiple platforms for maximized flexibility.
» Cross product reuse – As enhancements to the Oracle Utilities Application Framework are identified by a
particular product, all products can potentially benefit from the enhancement.
Note:
Use of the Oracle Utilities Application Framework does not preclude the introduction of product specific
technologies or facilities to satisfy markets. The framework minimizes the need and assists in the quick
integration of a new product specific piece of technology (if necessary).
Architectural Changes
Over the last few releases of the Oracle Utilities Application Framework the architecture has been optimized to take
advantage of the latest technological advances, provide flexibility and support varying deployment models. The
architectural changes over the last few releases include:
Note:
The advice in this whitepaper will cover the architectural principles outlined above.
If you are upgrading to a new version, also read the new installation guide, as it contains instructions on how to
upgrade to the new version as well as details of what has changed in the new version.
Note:
For customers who are upgrading, the installation of the product and its related third party software is designed
so that more than one version of the product can co-exist.
Environment Practices
When installing the product at a site, each copy of the product is regarded as an environment to perform a particular
task or group of tasks. Without planning, this can lead to a larger than anticipated number of environments, which
can have a negative flow-on effect by increasing overall maintenance effort and resource usage (hardware and
people), and may in turn cause delays in implementations. To minimize the impact of environments on their
implementations, customers have used the following advice:
» At the start of the implementation decide the number of environments to use. Keep this to a minimum and
consider sharing environments between tasks. Another technique associated with this is to specify an end date
for each environment. This is the date the environment can be removed from the implementation. This can force
rethinks on the number of environments that are to be used at an implementation and may force sharing.
» For each environment, consider the impact on the hardware and maintenance effort including the following:
» The time and resources it takes to install the environment.
» The time and resources it takes to keep the environment up to date including application of single fixes,
rollups/service packs and upgrades. Do not forget application and management of customization builds.
» The time and resources to maintain the configuration migration and information lifecycle management
facilities for multiple environments, if used at an implementation. This includes the setup and regular
migrations that will be performed.
» The time and resources it takes to backup and restore environments on a regular basis. In some
implementations, having different backup schemes for environments based upon tasks and update
frequency for that environment, i.e. more updated = more frequent backup, may provide some savings.
» The time and resources to manage the disk space for each environment including regular cleanups.
» Environments may be set up so that the database can be reduced to a single database instance with each
environment having a different schema/owner. This will reduce the memory footprint of the DBMS on the machine
but may reduce availability if the database instance is shut down (all environments are affected). For non-
production, most customers create a database instance for each environment or use pluggable databases in
Oracle 12c and above.
For example, if the conversion team wishes to have the ability to start, stop and monitor their own environments, you
can create another administrator account and install their copies of the product using that userid. This allows the
conversion team to control their own environments. If you did not have the ability to use multiple administrators, then
they may have access to all environments (as you would have to give them access to the splsys account).
One of the advantages of this approach is that you can delegate management of a copy of the product to other teams
without compromising other environments. Another advantage is that you can quickly identify UNIX resource
ownership by user rather than trying other methods.
The only disadvantage is that to manage all copies of the product you will need to log on to the additional
administration accounts that own the various copies.
Note:
For Oracle Utilities Application Framework V4.3 and above, only one JDK is supported across the
architecture. Refer to the Installation Guide supplied with your product for versions supported.
When the product is installed, one of the first prerequisites to be verified is the version of Java installed and
referenced using the environment variable $JAVA_HOME (or %JAVA_HOME% on Windows). Whilst the product checks this
version, it can be checked manually prior to installation (and at any time) using the following commands:
$JAVA_HOME/bin/java -version
Or (on Windows):
C:\> %JAVA_HOME%\bin\java -version
Note:
Verify the java version number and operating mode (32/64 bit) against the Quick Installation Guide provided
with the product.
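The version and the 32/64-bit operating mode can also be checked in a small script. The sketch below is illustrative only and is not part of the product tooling; the parsing logic assumes the usual JVM banner format, so verify it against the output of your own JVM:

```shell
#!/bin/sh
# Sketch: parse the output of "$JAVA_HOME/bin/java -version" to extract
# the version number and the 32/64-bit operating mode.
# The parsing below is illustrative only; verify against your JVM banner.

parse_java_version() {
  # $1 is the captured output of "java -version" (the JVM writes it to stderr)
  echo "$1" | sed -n 's/.*version "\([^"]*\)".*/\1/p'
}

parse_java_bits() {
  # Report 64 if the VM banner mentions 64-Bit, otherwise assume 32-bit
  case "$1" in
    *64-Bit*) echo "64" ;;
    *)        echo "32" ;;
  esac
}

# Typical usage (uncomment when $JAVA_HOME is set):
# out=$("$JAVA_HOME/bin/java" -version 2>&1)
# echo "Version: $(parse_java_version "$out"), mode: $(parse_java_bits "$out")-bit"

# Demonstration with a canned banner:
sample='java version "1.8.0_281"
Java(TM) SE Runtime Environment (build 1.8.0_281-b09)
Java HotSpot(TM) 64-Bit Server VM (build 25.281-b09, mixed mode)'
echo "Version: $(parse_java_version "$sample"), mode: $(parse_java_bits "$sample")-bit"
```

Compare the values reported against the Quick Installation Guide as advised above.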
The log contains all the messages pertaining to the installation process including any error messages for installation
errors encountered. The log is located in the directory the installation was initiated from and the name is in the
format:
install_<product>_<environment>.log
Where:
<product> Product code of the product component you are installing. For example, FW = Oracle
Utilities Application Framework
Check this log for any error messages during the installation process.
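As a quick check, the log can be scanned for error markers from the command line. The sketch below assumes errors are flagged with the word ERROR; adjust the pattern to match the actual message format produced by your installation:

```shell
#!/bin/sh
# Sketch: scan a product installation log for error messages.
# Assumes errors are flagged with the word "ERROR"; adjust the grep
# pattern to match the message format used by your installation.

check_install_log() {
  logfile="$1"
  if grep -i "error" "$logfile" > /dev/null 2>&1; then
    echo "Errors found in $logfile:"
    grep -in "error" "$logfile"
    return 1
  fi
  echo "No errors found in $logfile"
  return 0
}

# Demonstration with a sample log (illustrative file name and content):
cat > /tmp/install_FW_DEMO.log <<'EOF'
INFO  Copying files...
ERROR Unable to connect to database
INFO  Installation complete
EOF
check_install_log /tmp/install_FW_DEMO.log || echo "Installation had errors"
```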
The Oracle Client can be installed (if the product is not installed on a machine containing the Oracle Database
software) or an existing ORACLE_HOME can be specified if the Oracle Database software is already installed on the
machine (as it contains the Oracle Client in the installation). The value is stored in the ENVIRON.INI as the value
for the parameter ORACLE_CLIENT_HOME.
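The value can be inspected directly from the command line. The sketch below assumes ENVIRON.INI uses the usual PARAMETER=value layout (the file is typically under $SPLEBASE/etc; substitute the real location at your site):

```shell
#!/bin/sh
# Sketch: read a parameter value from ENVIRON.INI, which uses a simple
# PARAMETER=value layout. Substitute the real file location (typically
# $SPLEBASE/etc/ENVIRON.INI) at your site.

get_environ_param() {
  param="$1"
  inifile="$2"
  sed -n "s/^${param}=//p" "$inifile"
}

# Demonstration with a sample file (illustrative values):
cat > /tmp/ENVIRON.INI <<'EOF'
SPLENVIRON=DEMO
ORACLE_CLIENT_HOME=/u01/app/oracle/client
EOF
get_environ_param ORACLE_CLIENT_HOME /tmp/ENVIRON.INI
```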
Note:
In some versions of Oracle Utilities Application Framework, the 32-bit client MUST also be installed for use
with the database installation utilities. In Oracle Utilities Application Framework 4.3.0.4.0 and
above, the 64-bit database client is used.
If the Oracle Client or ORACLE_HOME is invalid then the following error will be returned by the installation utilities
(and other installs):
If the site decides to move from expanded mode to archive mode (or vice versa) on Oracle WebLogic
installations, then when executing initialSetup[.sh] the product may report the following error:
Note:
Some of the instructions below recommend changes to individual configuration files. These manual
changes may be overridden back to the product defaults by executions of the initialSetup[.sh] utility.
To retain the changes across invocations of the initialSetup[.sh] utility it is recommended to use
custom templates and/or configuration file user exits. Refer to the Server Administration Guide for more
details of implementing custom templates and/or configuration file user exits.
Note:
Enabling https or t3s may result in higher resource usage due to the resource requirements to encrypt and
decrypt data. The extent of the resource usage will vary from platform to platform. It is advised that
customers compare performance between secure and non-secure protocols before committing to secure
protocols.
To implement the more secure protocol requires a number of changes and additional facilities to be enabled. The
process below outlines the generic process for implementing the secure protocol:
» Obtain a digital certificate for your organization from a trusted certificate authority, or generate a certificate
using keytool. This is used for the encryption/decryption of data using the protocol.
Note:
The certificate provided with the J2EE Web Application Server installation is to be used for demonstration
purposes only. It is highly recommended that an alternative certificate be used for production environments.
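For non-production testing, a self-signed certificate can be generated with keytool as mentioned above. The commands below are a sketch only: the alias, distinguished name, keystore file names and passwords are all placeholders, and production environments should use a certificate from a trusted certificate authority instead.

```shell
# Sketch: generate a self-signed certificate with keytool for testing the
# HTTPS configuration. All names, aliases and passwords are placeholders.

# Generate a key pair and self-signed certificate in a new keystore
keytool -genkeypair \
  -alias mydomain \
  -keyalg RSA -keysize 2048 \
  -validity 365 \
  -dname "CN=myhost.example.com, OU=IT, O=Example, C=US" \
  -keystore identity.jks \
  -storepass changeit -keypass changeit

# Export the certificate so it can be imported into a trust store
keytool -exportcert -alias mydomain -keystore identity.jks \
  -storepass changeit -file mydomain.cer

# Import the certificate into the trust store used by clients
keytool -importcert -alias mydomain -file mydomain.cer \
  -keystore trust.jks -storepass changeit -noprompt
```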
» Configure J2EE Web Application Server SSL support to use the certificate as outlined in the documentation sites
outlined below:
» Enable the HTTPS port on your environment using the console provided with your J2EE Web Application Server.
Remember to reference the certificate you processed in the previous step.
Note:
For customers using Oracle WebLogic on Oracle Utilities Application Framework V4.1 and
above the setting for WebLogic SSL Port Number will enable this facility without the need of the console.
Note:
If changes are made to the console then to retain the change across upgrades and service packs it is
recommended to use custom templates or user exits to retain the setting. Refer to the Server
Administration Guide for more details of implementing custom templates.
web.xml – Change references to the http protocol to https with the SSL port replacing the HTTP ports.
web.xml.<channel> – Change references to the http protocol to https with the SSL port replacing the HTTP ports.
ejb-jar.xml – Change references to the http protocol to https with the SSL port replacing the HTTP ports. This file is
located under $SPLEBASE/splapp/businessapp/config/META-INF (or
%SPLEBASE%\splapp\businessapp\config\META-INF on Windows).
Note:
If these files are changed they may revert to the product template versions across service packs and
upgrades. To retain changes across service packs and upgrades it is advised to use custom templates
and/or user exits. Refer to the Server Administration Guide supplied with your product for more details.
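The protocol substitution described for these files can be scripted. The sketch below only illustrates the kind of substitution involved; the file name, port numbers and sed expression are placeholders, and custom templates or user exits remain the recommended way to make the change survive initialSetup[.sh]:

```shell
#!/bin/sh
# Sketch: the kind of substitution involved when converting configuration
# references from http to https. Port numbers and paths are illustrative;
# prefer custom templates or user exits so changes survive initialSetup[.sh].

HTTP_PORT=6500        # placeholder HTTP port
HTTPS_PORT=6501       # placeholder SSL port

convert_to_https() {
  # Replace http URLs on the HTTP port with https URLs on the SSL port
  sed -e "s|http://\([^:]*\):${HTTP_PORT}|https://\1:${HTTPS_PORT}|g" "$1"
}

# Demonstration with a sample fragment (illustrative content):
cat > /tmp/web-fragment.xml <<'EOF'
<param-value>http://myhost:6500/spl/loginPage.jsp</param-value>
EOF
convert_to_https /tmp/web-fragment.xml
```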
» Shutdown the J2EE Web Application Server to prepare to reflect the changes.
» Run the initialSetup[.sh] -w command to reflect the changes into the server files.
» Restart the J2EE Web Application Server.
» Ensure that any Feature Configuration options using the product browser that use the HTTP protocol as part of
their options are also converted to HTTPS and the appropriate port number. Use the Feature Configuration menu
option to check each of them. The Features will vary from product to product and version to version.
» Ensure that any Message JNDI Server provider URLs using the product browser that use the http/t3 protocol as
part of their options are also converted to https/t3s and the appropriate port number.
» Any customization that refers to the HTTP protocol such as custom algorithms or service scripts must also be
converted from HTTP to HTTPS. Refer to the Java Secure Socket Extension Reference Guide for more
information.
Disabling protocols
Note:
For general advice on securing your production system, refer to Fusion Middleware Securing a Production
Environment for Oracle WebLogic Server.
If you are considering using secure protocols then you may want to disable non-secure protocols (as by default both
can be used). This requires configuration on the J2EE Web Application Server to disable the protocols that should
not be used.
Note:
The following instructions outline changes to configuration files used by the J2EE Web Application Server
that can be made manually to the configuration files supplied with the product or via the relevant
administration console supplied with the J2EE Web Application Server.
» Refer to the Configuring SSL section of the Oracle WebLogic documentation to be familiar with SSL. SSL needs
to be configured and verified for all access modes (including the Administration console) before disabling HTTP.
Note:
The HTTP methods described above are disabled automatically in Oracle Utilities Application Framework
V4.1 Group Fix 4 and above.
Note:
Please check that the administration console used at your site does NOT require the POST method for
HTTP before also disabling POST.
In Oracle Utilities Application Framework 4.3.x and above, a number of Oracle WebLogic domain templates have
been provided to simplify the creation of the Oracle WebLogic domain for the installation. These templates can be
used with the Domain Creation wizard supplied with Oracle WebLogic. The templates are located in
$SPLEBASE/tools/domaintempates. The following templates are supplied:
Template Usage
Simple – A simple template with a single server housing administration and the product. This template is suitable for
non-production environments.
Complex – A more complex template with a simple cluster for the product and a separate administration server. This
template is suitable for an initial setup of a production system. Once established, it is recommended to use the
Oracle WebLogic console to extend the domain according to your individual requirements.
The names of the domain templates adhere to the following naming convention:
Oracle-Utilities-<template>-Linux-<version>.jar
Where:
<template> – Domain template type (Simple or Complex)
<version> – The version of Oracle WebLogic the template is optimized for. Ensure the correct version is used to
ensure optimization.
The use of domain templates simplifies the native installation process as outlined in Native Installation Oracle
Utilities (Doc Id: 1544969.1) available from My Oracle Support and the Installation Guide supplied with the product.
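The naming convention above can be resolved mechanically when scripting domain creation. The sketch below simply assembles the expected jar name from a template type and WebLogic version; the version value shown is an example only:

```shell
#!/bin/sh
# Sketch: assemble the expected domain template jar name following the
# Oracle-Utilities-<template>-Linux-<version>.jar convention.
# The version value below is an example only.

template_jar_name() {
  template="$1"   # Simple or Complex
  version="$2"    # Oracle WebLogic version the template is optimized for
  echo "Oracle-Utilities-${template}-Linux-${version}.jar"
}

template_jar_name Simple 12.1.3.0
```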
For example, it is not appropriate to allow people access to the production database through ad-hoc query tools
(such as SQL Developer, SQL*Plus etc). The freestyle nature of these tools can allow a single user to wreak havoc
on performance with a single inefficient SQL statement.
The database is not optimized for such unexpected traffic. Removal of this potentially inefficient access can typically
improve performance.
The product contains a number of collection points in the architecture that are useful for real time and offline
collection of performance related data. Information on the collection points are documented in the Performance
Troubleshooting Guideline Series (Doc Id: 560382.1) whitepapers available from My Oracle Support. Using the
guide, decide which statistics are important to the various stakeholders at your site, decide the frequency of
collection and format of any output to be provided. Use your site's Service Level Agreement (SLA), if it exists, for
guidance on what to report.
The ownership of the record determines what you can do with that record:
» Framework - If the record is owned by Framework then implementation teams cannot alter or delete the record
from the database as it is deemed critical to the running of the Framework. This is usually meta-data deemed
important by the Framework team. For example the user SYSUSER is owned by the Framework.
» Product - If the record is owned by the product (denoted by the product name or Base) then some changes are
permitted but deletion is not permitted, as the record is necessary for the operation of the product. The
amount of change will vary according to the object definition.
» Customer Modification - If the record is owned by Customer Modification then the implementation has added
the record. The implementation can change and delete the record (if it is allowed by the business rules).
Basically you can only delete records that are owned by Customer Modification. All other records are maintained by
various utilities supplied with the product as part of upgrade and patch deployments.
It is possible to alter or delete the records at the database level, if permitted by database permissions, but doing this
will produce unexpected results so respect the ownership of the records.
» A directory needs to be created to house the log files. Most sites create a common directory for all environments
on a machine. The size allocation of that directory will depend on how long you wish to retain the log files. It is
generally recommended that logs be retained for post analysis and then archived (according to site standards)
after processing to keep this directory relevant. Typically customers create a subdirectory under $SPLAPP (or
%SPLAPP% for Windows platforms) to hold the files.
» Set the SPLBCKLOGDIR environment variable in the .profile (for all environments) or
$SPLEBASE/scripts/cmenv.sh (for individual environments) to the location you specified in the first step. For
Windows platforms the environment variable can be set in your Windows profile or using
%SPLEBASE%\scripts\cmenv.cmd.
» Logs will be backed up at the location specified in the format <datetime>.<environment>.<filename>
where <datetime> is the date and time of the restart, <environment> is the id of the environment (taken from
the SPLENVIRON environment variable) and <filename> is the original filename of the log.
Once the logs have been saved, use log retention principles to manage the logs under SPLBCKLOGDIR to
meet your site's standards. Most sites archive the logs to tape or simply compress them after post-processing the log
files (see Post Process Logs for more details on post processing).
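Retention can often be automated with standard tooling. The sketch below compresses backed-up logs older than a cutoff and removes compressed logs past a retention period; the directory, file pattern and day values are placeholders to be aligned with your site standards:

```shell
#!/bin/sh
# Sketch: a simple retention scheme for backed-up logs under SPLBCKLOGDIR.
# Compress logs older than COMPRESS_DAYS and remove compressed logs older
# than RETAIN_DAYS. Directory, pattern and day values are placeholders.

BCKDIR="${SPLBCKLOGDIR:-/tmp/bcklogs}"
COMPRESS_DAYS=7
RETAIN_DAYS=90
mkdir -p "$BCKDIR"

rotate_logs() {
  dir="$1"
  # Compress uncompressed log backups older than the compression cutoff
  find "$dir" -type f -name '*.log' -mtime +"$COMPRESS_DAYS" -exec gzip {} \;
  # Remove compressed backups past the retention period
  find "$dir" -type f -name '*.log.gz' -mtime +"$RETAIN_DAYS" -exec rm {} \;
}

rotate_logs "$BCKDIR"
```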
If the logs are retained by your site (see Backup of Logs for details on this process), then consider post-processing
the logs on a regular basis before they are archived or deleted permanently. One approach is to extract
information from the logs and load the extracted data into an analysis repository for regular and trend
reporting. The diagram below illustrates the process.
Details of the logs written by the product are documented in the Performance Troubleshooting Guideline Series (Doc
Id: 560382.1) whitepapers available from My Oracle Support. Use these guides to determine what data to extract
from the logs for post processing.
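One simple form of post-processing is extracting lines of interest into a delimited file for loading into an analysis repository. The sketch below assumes a hypothetical log line format (date, time, level, service, elapsed milliseconds); adapt the awk field positions to the actual layouts documented in the guides:

```shell
#!/bin/sh
# Sketch: extract fields from log lines into CSV for loading into an
# analysis repository. The log line format and service names below are
# hypothetical; adapt the awk field positions to the actual layouts in the
# Performance Troubleshooting Guideline Series.

extract_csv() {
  # Expected (hypothetical) format: DATE TIME LEVEL SERVICE ELAPSED_MS
  awk '$3 == "INFO" { print $1 "," $2 "," $4 "," $5 }' "$1"
}

# Demonstration with a sample log:
cat > /tmp/sample.perf.log <<'EOF'
2021-06-01 10:00:01 INFO CILTUSEP 120
2021-06-01 10:00:02 ERROR CILTUSEP timeout
2021-06-01 10:00:03 INFO CILCACC 85
EOF
extract_csv /tmp/sample.perf.log
```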
Viewing and checking the logs for errors on a regular basis can detect trends and common problems, and helps
quickly reduce the number of errors that may occur. The Performance Troubleshooting Guideline Series (Doc Id:
560382.1) whitepapers available from My Oracle Support outline the logs and error conditions contained within
those logs.
Typically, the optimization of the operating system is performed during the implementation and uses the following
principles:
» The value of an individual operating system setting is the maximum value required by any product on that
machine. For example, typically if Oracle is installed on the same machine, the values for those products are used.
The settings derived in this way are usually sufficient for the other products on that machine.
» If the machine is dedicated for a particular product or tier, then refer to the documentation in the installation guide
and the particular vendor's site for further advice on setting up the operating system in an optimal state.
During the implementation the size of the connection pools is determined and configured (with relevant growth
tolerances) depending on the usage patterns and expected peak/normal traffic levels. The goal, typically, is to have
enough connections available at normal traffic levels to minimize queuing and also have the right tolerances to cater
for any expected peak periods. Therefore, it is recommended:
Note:
Remember it is possible to set different client connection pools per channel. For example, using Work
Managers you can limit online and/or web service calls.
» Database connections – These are the number of pooled connections to the database. The Framework holds
these connections open so that the overhead of opening and closing connections is minimized. For Version 2.x of
the product, the number of connections allocated is dictated in each individual web application's
hibernate.properties file using UCP or JDBC managed connection pools.
The figure below illustrates the connection pools available for each version of the Oracle Utilities Application
Framework:
[Figure: Client connections are made to the Web Application Server, which connects to the Business Application
Server, which in turn holds database connections (via Hibernate/JDBC/UCP) to the Database Server.]
Refer to the Server Administration Guide provided with your product for advice on the configuration and monitoring
of the connection pools.
560382.1 Performance Troubleshooting Guideline Series: A series of whitepapers outlining the tracking points
available in the architecture for performance and a troubleshooting guide based upon common problems.
560401.1 Software Configuration Management Series: This series of documents outlines a set of generic processes
(that can be used as part of the site processes) for managing code and data changes. This series includes
documents that cover concepts, change management, defect management, release management, version
management, distribution of code and data, management of environments and auditing configuration. The individual
whitepapers are as follows:
» Concepts - General concepts and introduction.
» Environment Management - Principles and techniques for creating and managing environments.
» Version Management - Integration of version control and version management of configuration items.
» Release Management - Packaging configuration items into a release.
» Distribution - Distribution and installation of releases across environments.
» Change Management - Generic change management processes for product implementations.
» Status Accounting - Status reporting techniques using product facilities.
» Defect Management - Generic defect management processes for product implementations.
» Implementing Single Fixes - Discussion on the single fix architecture and how to use it in an implementation.
» Implementing Service Packs - Discussion on the service packs and how to use them in an implementation.
» Implementing Upgrades - Discussion on the upgrade process.
5 In Oracle Utilities Application Framework V4.3.x and above, this guide has been merged with the Server
Administration Guide.
773473.1 Oracle Utilities Application Framework Security Overview: A whitepaper outlining the security facilities in
the Oracle Utilities Application Framework.
774783.1 LDAP Integration for Oracle Utilities Application Framework based products: A whitepaper outlining the
common process for integrating an external LDAP based security repository with the framework.
789060.1 Oracle Utilities Application Framework Integration Overview: A whitepaper outlining all the various
common integration techniques used with the product (with case studies).
799912.1 Single Sign On Integration for Oracle Utilities Application Framework based products: A whitepaper
outlining a generic process for integrating an SSO product with the Oracle Utilities Application Framework.
807068.1 Oracle Utilities Application Framework Architecture Guidelines: A whitepaper outlining the different
variations of architecture that can be considered. Each variation includes advice on configuration and other
considerations.
836362.1 Batch Best Practices for Oracle Utilities Application Framework based products: A whitepaper outlining the
common and best practices implemented by sites all over the world relating to batch.
970785.1 Oracle Identity Manager Integration Overview: This whitepaper outlines the principles of the prebuilt
integration between Oracle Utilities Application Framework based products and Oracle Identity Manager used to
provision user and user group security information.
1068958.1 Production Environment Configuration Guidelines: This whitepaper outlines common production level
settings for Oracle Utilities Application Framework products.
1177265.1 What's New in Oracle Utilities Application Framework V4?: This whitepaper outlines the changes since
the V2.2 release of Oracle Utilities Application Framework.
1290700.1 Database Vault Integration: This whitepaper outlines the Database Vault integration available with Oracle
Utilities Application Framework V4.1 and above.
1299732.1 BI Publisher Integration Guidelines: This whitepaper outlines some guidelines for integration available
with Oracle BI Publisher for reporting.
1308161.1 Oracle SOA Suite Integration: This whitepaper outlines the integration between Oracle SOA Suite and
the Oracle Utilities Application Framework.
1308181.1 Oracle WebLogic JMS Integration: This whitepaper outlines the inbuilt integration between Oracle
WebLogic JMS and the Oracle Utilities Application Framework.
1334558.1 Implementing Oracle ExaLogic and/or Oracle WebLogic Clustering: This whitepaper outlines how to
cluster an Oracle Utilities Application Framework based product using Oracle WebLogic based clustering including
specific instructions for Oracle ExaLogic.
1375600.1 Oracle Identity Management Suite Integration: This whitepaper outlines integration between the product
and components of Oracle Identity Management Suite.
1375615.1 Oracle Utilities Application Framework Advanced Security: This whitepaper outlines common security
requirements and outlines how the security within the product and components of Oracle Identity Management Suite
can be used to implement those requirements.
1474435.1 Oracle Application Management Pack for Oracle Utilities Overview: This whitepaper outlines the features
and functions of the Oracle Application Management packs available for Oracle Enterprise Manager.
1506855.1 Integration Reference Solutions: This whitepaper outlines all the integrations with Oracle technology
possible with Oracle Utilities Application Framework with solution strategies.
1544969.1 Installing OUAF natively on Oracle WebLogic: A step by step guide to installing products within Oracle
WebLogic natively.
1558279.1 Oracle Service Bus Integration: This whitepaper describes direct integration with Oracle Service Bus
including the new Oracle Service Bus protocol adapters available. Customers using the MPL should read this
whitepaper as the Oracle Service Bus replaces MPL in the future and this whitepaper outlines how to manually
migrate your MPL configuration into Oracle Service Bus.
1561930.1 Using Oracle Text for Fuzzy Searching: This whitepaper describes how to use the Name Matching and
fuzzy operator facilities in Oracle Text to implement fuzzy searching using the @fuzzy helper function available in
Oracle Utilities Application Framework V4.2.0.0.0 and above.
1606764.1 Audit Vault Integration: This whitepaper describes the integration with Oracle Audit Vault to centralize
and separate audit information from Oracle Utilities Application Framework products. Audit Vault integration is
available in Oracle Utilities Application Framework 4.2.0.1.0 and above only.
1643845.1 Private Cloud Planning Guide: This whitepaper outlines the recommended architecture for implementing
Oracle Utilities applications on Oracle's Private Cloud.
1644914.1 Migrating XAI to IWS: This whitepaper outlines the features of Inbound Web Services (IWS) which
replaces the XML Application Integration (XAI) functionality. It covers how to configure and migrate from XAI to IWS.
1682436.1 ILM Planning Guide: This whitepaper outlines the Information Lifecycle Management based product data
management solution to help minimize storage costs for Oracle Utilities Application Framework based products.
1929040.1 ConfigTools Best Practices: This whitepaper outlines techniques to implement customizations
using the ConfigTools functionality of Oracle Utilities Application
Framework.
2014161.1 Keystore Configuration This whitepaper outlines how to use the keystore functionality of the
Oracle Utilities Application Framework including processes for
changing key values and maintaining the keystore.
2014163.1 Oracle Functional/Load Testing Advanced Pack This whitepaper outlines the Oracle Application Testing Suite based
for Oracle Utilities Overview testing solution for Functional and Load Testing available for Oracle
Utilities Application Framework based products.
2132081.1 Migrating From On Premise To Oracle Platform This whitepaper outlines the process of moving an Oracle Utilities
As A Service product from on-premise to Oracle Cloud Platform As A Service
(PaaS).
2196486.1 Batch Scheduler Integration This whitepaper outlines the Oracle Utilities Application Framework
based integration with Oracle’s DBMS_SCHDEULER to build, manage
and execute complex batch schedules.
2211363.1 Enterprise Manager for Oracle Utilities This whitepaper outlines the process of converting service packs to
6 In Oracle Utilities Application Framework V4.2.0.1.0, Oracle Service Bus Adapters for Outbound Messages and Notification/Workflow are available
Whitepaper: Service Pack Compliance allow the Application Management Pack for Oracle Utilities to install
service packs using the patch management capabilities.
2214375.1 Web Services Best Practices This whitepaper outlines the best practices of the web services
capabilities available for integration.
This documentation is updated regularly with each release of the product with new and improved information and advice. Announcements of updates to whitepapers may be tracked via The Shorten Spot.
Luckily, there is a feature that allows custom environment variables settings and other commands to be run after the
splenviron.sh script (or splenviron.cmd on Windows) has been executed.
To do this, create a cmenv.sh script (or cmenv.cmd on Windows) in the $SPLEBASE/scripts directory (%SPLEBASE%\scripts on Windows) with the commands you want to execute. For example, if an implementation uses AXIS2 jar files to call web services, place the AXIS2 jar files in a central location (e.g. /axis/lib in this example) and create the cmenv.sh/cmenv.cmd script with the line:

export CLASSPATH=/axis/lib/axis.jar:$CLASSPATH

or, on Windows:

set CLASSPATH=\axis\lib\axis.jar;%CLASSPATH%
In addition, it is possible to do this WITHOUT adding the cmenv.sh script (or cmenv.cmd on Windows). Set the CMENV environment variable to the location of a script containing the above commands BEFORE running the splenviron.sh script (or splenviron.cmd on Windows).

The CMENV facility is for global changes, as it applies across all environments, while the cmenv.sh/cmenv.cmd solution is per environment. You can use both: CMENV is run first, then cmenv.sh/cmenv.cmd.
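As an illustrative sketch (the /axis/lib location and jar name are example values from the text above, not product defaults), a minimal cmenv.sh might look like:

```shell
#!/bin/sh
# cmenv.sh - per-environment customizations, run after splenviron.sh
# Sketch only; adjust paths for your site.

AXIS_LIB=/axis/lib   # assumed central location for the AXIS2 jars

# Prepend the AXIS2 jar to the classpath established by splenviron.sh.
# Note the UNIX classpath separator is ':' (on Windows it is ';').
export CLASSPATH="$AXIS_LIB/axis.jar:$CLASSPATH"

echo "CLASSPATH now starts with: ${CLASSPATH%%:*}"
```

The same commands, placed in a script pointed to by CMENV, would apply across all environments rather than just one.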
Note:
It is possible, using this technique, to manipulate any environment variable used by the product but this is
not recommended.
In Oracle Utilities Application Framework version 4.0.1 and above the authorization userid is
available as the CLIENT_IDENTIFIER on Oracle database sessions.
» Create a different userid for integration transactions. This allows tracking of integration within the architecture. It is also possible to assign each integration transaction a different userid, as it is passed as part of the transaction, though most customers consider this overkill.
» Create a different userid for each background interface. This allows security and traceability to be tracked at a
lower level.
» Create a generic userid for mainstream background processes. This allows tracking of online versus batch
initiation of processes (especially To Do, Case and Customer Contact processing).
Note:
Remember that any product user must be defined to the product as well as the authentication repository.
In Oracle Utilities Application Framework 4.2.x and above, there is a separation between authentication
and authorization identifiers.
The financial component of the product already has a separate auditing facility, as customers generally require it. Any changes to financial information such as payments, adjustments, bills etc. are registered in the Financial Transaction tables. Therefore, enabling auditing on those entities is not required and constitutes double auditing (i.e. auditing information is stored in two places).
While the impact of double auditing may be storage related, enabling auditing on bills, for example, can degrade online billing performance. Customers with large numbers of bill segments per bill (i.e. several hundred) have experienced negative performance impact during online billing when double auditing is enabled on financial entities.
This does not affect batch performance as auditing is not used in batch.
Having identical hardware allows for ease of stocking spare parts, better reproducibility of problems (both software
and hardware), and reduces the per platform testing cost. This cost, in many cases, will surpass the savings from
reusing existing disparate hardware.
Most hardware vendors have recommendations on optimal intervals for restarting machines. Some vendors strongly encourage regular restarts for maintenance reasons. Check with your vendor for specifics for your platform.
Unless the outcome can be verified as correct, you should not use ANY direct SQL statement against the product database, as you may corrupt the data and prevent the product from operating correctly.
All the data maintenance and data access in the product is located in the Maintenance Objects. The Maintenance Objects validate ALL changes against your site's business rules and the rules built into the product. If you use the objects to manipulate the data then integrity is guaranteed because:
» All the validations, including business rules, calculations and referential integrity, are contained within the Maintenance Objects.
» The Maintenance Object performs a commit only when all validations are successful. If any validation fails, the whole object is rolled back to a consistent state. In background processing, a commit is performed after a number of records are processed.
The system does not perform the processing necessary to build collapsed zones until a user expands the zone, so
configuring them as initially collapsed improves response times. This is especially relevant for the To Do zones that
may take a while if the number of To Do records is excessive.
» Perform Backups: Perform the backup of the database and file system using the site procedures and the tools designated for your site.
» Post Process Logs: Check the log files for any error conditions that may need to be addressed. Refer to Post Process Logs and Check Logs For Errors for more details.
» Process Performance Data: Collate and process the day's performance data to assess against any Service Level targets. Identify any badly performing transactions.
» Perform Batch Schedule: Execute the batch schedule agreed for your site. This will include overnight, daily, hourly and ad-hoc background processes.
» Rebuild Statistics: Oracle recommends that the database statistics for the product schemas be rebuilt on a regular basis so that SQL access is optimized.
» File Cleanup: On a regular basis, the output files from the background processes and logs will need to be archived and removed to minimize disk space usage.
» Manage Data not required: The Oracle Utilities Application Framework can use Information Lifecycle Management features to minimize storage. Refer to Information Lifecycle Management for more details.
» Run Cleanup Batch Jobs: There are a number of background processes that remove staging records that have already been successfully processed. Refer to Removal of Staging Records for more details.
The figure below illustrates a simplified model of a typical customer business day:
Figure 4 – Example Typical Business Day (a 24-hour timeline showing overnight batch, daily/ad-hoc/hourly batch, and continuous monitoring)
Note:
The above diagram is for illustrative purposes only and could vary for your site.
» There is a peak online period where the majority of call center business is performed. Typically this is performed
in business hours varying according to local custom.
» There is a call center off peak period where the volume of call center traffic is greatly reduced compared to the
peak period. Typically in call centers, which operate 24x7, this represents overnight and weekends. At this time
the call center is reduced in size (usually a skeleton shift). Some sites do not operate in non-peak periods and rely
on automated technology (e.g. IVR) to process transactions such as payments etc.
» Backups are performed either at the start of the peak period or at the end of the peak period. The decision is based on the risk of background processing failure and its impact on online processing. The product specific background processes can be run at any time, but avoiding them during peak time maximizes the computing resources available for the successful processing of call center transactions. A backup at the end of the peak period is the most common pattern amongst product customers.
» Background processes are run at both peak and off-peak times. The majority of the background processing is performed at off-peak times to maximize the computing resources available for its successful completion. The background processing run during peak times usually checks ongoing call center transactions for adherence to business rules and processes interface transactions ready for overnight processing.
» Monitoring is performed throughout both peak and off peak times. The monitoring regime used may use manual
as well as automated tools and utilities to monitor compliance against agreed service levels. Any non-compliance
is tracked and resolved.
The definition of the business day for your site is crucial to scheduling background processing and setting monitoring regimes appropriate for the traffic levels expected.
In past releases of the Oracle Utilities Application Framework, the userid that could be used to log in was restricted to 8 characters in length. In Oracle Utilities Application Framework V4 and above, it is possible to use a user identifier of up to 256 characters in length.

In Oracle Utilities Application Framework V4 and above, the concept of a Login Id is supported. This attribute is used by the framework to authenticate the user. For backward compatibility, the 8 character userid field is still used for auditing purposes internally. Therefore both Userid and Login Id should be populated; they can be the same or different values.
Note:
The Login Id can be changed post creating the user identity to support name change, acquisitions etc.
The short User Id is not changeable as records in the product already use this value.
The Login Id can be set manually, via Oracle Identity Manager or set in a class extension to auto generate a value.
Figure 5 – Login Id
The product can be run on various combinations of hardware based architectures. When choosing an architecture
that is best suited to a site there are a number of key factors that must be considered:
» Cost – When deciding on a preferred architecture, the total cost of the machine(s) and infrastructure needs to be taken into consideration. This should include the ongoing costs of maintenance as well as power costs.
» IT Maintenance Effort – When deciding a preferred architecture, the manual or automated effort in maintaining
the hardware in that architecture needs to be factored into the solution.
» Availability – One of the chief motivations for settling on a multi-machine architecture is requiring the architecture
to support high availability. When deciding a preferred architecture, the tolerance and cost of availability needs to
be factored into the solution.
This is chosen by customers who want to optimize the hardware for the particular tier (settings and size of machine)
and therefore separate the maintenance efforts for each server. For example, Database Administrators need only
access the Database Server to perform their duties and set the operating system parameters optimized for the
database.
Unfortunately the solution can have a higher cost than the single server solution and still does not address the
unavailability of any machine in the architecture. Customers that have used this model adopt a similar solution to the
single server architecture (duplicate secondary machines at a secondary site) but also have the option of having
both machines in the architecture being the same size and shifting the roles when availability is compromised. For
example, if the database server fails, the Web Application Server can be configured to act as a combination of the
Database Server and Web Application Server.
Machines in this architecture can be the same size or different sizes depending on the cost/benefits of the various
variations. Typically customers use a smaller machine for the Web Application Server as compared with the
database server.
The Web Applications Servers are either clustered or managed. Refer to the discussion in the Clustering or
Managed? section of this document for advice.
This architecture is quite common as it represents flexibility as one of the Web Application Servers can be dedicated
to batch processing in non-business hours making the architecture more cost effective. Typically the Web
Application Server software is shutdown to allow batch processing to use the full resources of the machine while
allowing users (usually a small subset) to process online transactions.
The only drawbacks with this solution are a potentially higher cost than a multi-tier solution and the potential impact of database unavailability. Customers that use this architecture overcome the potential unavailability of the database either by using a secondary site to act as the failover or by using one of the Web Application Servers in a failover database server role. The latter is less common, as most customers find it more complex to configure, but it is possible with this architecture.
Machines in this architecture can be the same size or different sizes depending on the cost/benefits of the various
variations. Typically customers use a smaller machine for the Web Application Server as compared with the
database server.
The Oracle Utilities product architecture supports failover at all tiers of the architecture, using either hardware or
software based solutions. Failover solutions can be varied but a few principles have been adopted successfully by
existing customers:
» Failover solutions that are automated are preferable to manual intervention. Depending on the hardware
architecture used the failover capability can be automated.
» Availability goals play a big part in the extent of a failover solution. Sites with high availability targets tend to favor
more expensive, comprehensive hardware and software solutions. Sites with lower availability (or no goals) tend
to use manual processes to handle failures.
» Failover is built into the software used by the products (though it may entail an additional license from the relevant
vendor). For example, Web Application Server vendors have inbuilt failover capabilities including load balancing,
which is popular with customers.
» Hardware vendors will have failover capabilities at the hardware or operating system level. In some cases, it is an
option offered as part of the hardware. Sites use the hardware solution in combination with a software based
solution to offer protection at the hardware level. In this case, the hardware solution will detect the failure of the
hardware and work in conjunction with the software solution to route the traffic around the unavailable component.
» Failover is made easier to implement for the product as the Web Application is stateless. The users only need
connection to the server while they are actively sending or receiving data from the server. While they are inputting
data and talking on the phone they are not consuming resources on the machine. For each transaction the
infrastructure routes the calls across the active components of the architecture.
» At the database level, the common failover facility used is the one provided by the database vendor. For example, Oracle database customers typically implement RAC. Failover configuration at the database is the least used by existing sites, as the cost of having additional hardware is usually prohibitive (or at least not cost effective).
When designing a failover solution then the following considerations are important:
» Determine what the availability goals are for your site.
» Determine the inbuilt failover capabilities of the hardware and software that your site is using. This may reduce
the cost of implementing a failover solution if it is already in place.
» List all the components that need to be covered by a failover solution. Review the list to ensure all aspects of
"what can fail?" are covered.
» Design your failover solution with all the above information in mind that you can automate (within reason) for your
site. Ensure the solution is simple and reuses already available infrastructure to save costs.
Commonly sites use the following failover techniques in the architecture:
» Network: Load balancer (hardware for large numbers of users; software based for others). Consider redundant load balancers for "no single point of failure" requirements.
» Web Application Server/Business Application Server: Use inbuilt clustering/failover facilities unless the load balancer is doing this. Consider hardware solutions for batch or interface servers.
» Database Server: Use the inbuilt failover facilities in the database unless a hardware solution is more cost effective.
» spl_service.log: Business Application Server log. In some versions of the Oracle Utilities Application Framework this log does not exist, as it is included in spl_web.log. Errors in here can be service or database related. (The theory is that the first place the error occurs is the most likely candidate tier.)
» spl_xai.log/spl_iws.log: Web Services Integration, also known as XML Application Integration (XAI) or Inbound Web Services, log. This log file is exclusively used by the XAI servlet. More detail can exist in the xai.trc file if tracing is enabled.
» spl_web.log: Web Application Server log. This is typically where errors from the browser interface are logged. If errors are repeated from the spl_service.log then the issue is not in the Web Application Server software but in the Business Application Server or below.
Note:
There are other logs, related to the J2EE Web Application Server used, that exist in this directory or under the location specified in the J2EE Web Application Server.
» First error message is usually the right one – When an error occurs in the product, it can cause other errors. The first occurrence of any error is usually the root cause. This is most apparent when a low level error occurs and ripples across other processes. For example, if the database credentials are incorrect then the first error will be that the product cannot connect to the database, but other errors will appear as metadata cannot be loaded into various components. In this case, fixing the database error will correct the other errors as well.
» Not all errors are in fact errors – The product will issue errors if components are missing but it is able to overcome the issue. For example, if meta-data is missing, the system may resort to using default values. In most cases this means the product can operate without incident, but the cause should be resolved to ensure correct behavior.
Note:
In some versions, such errors are reported as a WARNING rather than an ERROR.
» Tracing can help find the issue – The product includes trace facilities that can be enabled to help resolve the error. This information is logged to the logs above (and other server logs) and can be used for diagnosis as well as for support calls. Refer to Online and Batch tracing and Support Utilities for more information about these tools.
» There are usually a common set of candidates – When an error occurs there are a number of typical
candidates for causing issues:
» Running out of resources – The product uses the resources allocated to it on the machine. If a capacity limit is reached, whether physical (memory or disk space are typical resource constraints) or logical, via configuration such as JVM memory allocations, then the product will report a resource issue. In some cases, the product will directly report the problem in the logs, but in other cases the report will be indirect. For example, if disk space is limited then a log may not be written, which can cause issues.
» Incorrect configuration – If the product configuration files or internal configuration are incorrect for any reason,
they can cause errors. A common example of this is passwords which either are wrong or have expired. File
paths are also typical settings to check.
» Missing metadata – The product is meta-data driven. If the metadata is incorrect or missing then the behavior of
the product may not be as expected. This can be hard to detect using the usual methods and typically requires
functionality testing rather than technical detective work.
» Out of date software – All the software used in the solution, whether part of the product or infrastructure, has
updates, patches and upgrades to contend with. Upgrading to the latest patch level typically can address most
issues.
Refer to the Performance Troubleshooting Guideline Series (Doc Id: 560382.1) whitepapers available from My
Oracle Support for more techniques and additional advice.
It is possible to set a date for testing purposes at the system level as well as at user level.
Note:
This facility is not recommended for use in production environments. It is only recommended in non-
production environments such as testing and development.
Note:
To enable this feature and avoid issues with data values, after configuring the override at the system or user level, the setting spl.runtime.options.allowSystemDateOverride must be set to true in the spl.properties file for the online, business application server and/or batch components.
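As a minimal sketch, the line to add to the relevant spl.properties file (online, business application server and/or batch, as appropriate) is:

```
spl.runtime.options.allowSystemDateOverride=true
```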
System Wide
To set a specific date for an environment, for testing purposes, a Feature may be added using the Feature Configuration menu option. The feature to use is a General System Configuration feature. You may create a General System Configuration feature if one does not exist in your environment. The System Override Date option may be added to this feature and the date specified in international ISO format (e.g. YYYY-MM-DD). An example of this feature is shown in the figure below.
Note:
Only one General System Configuration feature should exist per environment.
Once saved, this date will be used, across ALL users on that environment, instead of the current date for online and Web Service operations (in batch, there is a standard parameter for all jobs, the Batch Business Date, that performs the same function for that mode). You may need to flush the online cache to reflect the change across the system. Refer to the
Note:
This feature was added in Oracle Utilities Application Framework V4.1 and consequently is only
available for that version and above.
Where the system-wide override is not appropriate because some users require specific dates, especially for testing, it is also possible to override the system date per user. This forces the online system to use the user's override value, if it exists, instead of any system override or system date.
To achieve this, an Override System Date characteristic can be added to the individual user record using the User menu option, as shown in the figure below. As with the system-wide override, the Characteristic value should be in ISO format (e.g. YYYY-MM-DD).
As with the System wide override, the user should refresh the cache to reflect the change. To reverse the
configuration, remove the characteristic from the user record.
Transaction Timeouts
By default, transactions are subject to time limits imposed at the infrastructure level (at a network or database level).
In most cases, sites do not impose any explicit time limits.
The idea of time limits on transactions is to prevent any long running online or web service transaction from causing inefficiencies in traffic volumes across your configuration. To use time limits effectively, a site would set limits on a number of key common transactions to keep a cap on resource usage across the enterprise.
In Oracle Utilities Application Framework V4.1 and above, a set of optional configuration settings has been
added to allow sites to specify global time limits and transaction level time limits on individual calls within a
transaction.
Note:
For Oracle Utilities Application Framework V4.1 it is enabled using patch 10356853.
» Timeouts may be set globally or overridden on individual services, business objects, business services or service
scripts.
» Timeouts are tracked throughout the transaction execution but the timeout is explicitly checked prior to any
database access to ensure the timeout has not been reached. If the transaction has multiple database access
statements, the current cumulative transaction time is checked at each statement. If no database access is made
by the transaction then timeouts are not checked and therefore not enforced.
Configuration of timeouts
To configure the use of timeouts for online or Web Service traffic a number of configuration settings need to be
specified in configuration files within the Web Application Server and Business Application Server. The parameters
that control timeouts are as follows:
» ouaf.timeout.business_object.<bocode>: Maximum amount of time (in seconds) business object <bocode> can execute before timeout. This timeout overrides ouaf.timeout.business_object.default when executing this specific business object. The value for <bocode> may be any valid business object.
» ouaf.timeout.business_object.default: Maximum amount of time (in seconds) an invokeBO call can last. All queries issued by the business object are limited to the remaining execution time of the business object call. This is a general timeout and can be overridden for an individual business object, if desired.
» ouaf.timeout.business_service.<bscode>: Maximum amount of time (in seconds) business service <bscode> can execute before timeout. This timeout overrides ouaf.timeout.business_service.default when executing this specific business service. The value for <bscode> may be any valid business service.
» ouaf.timeout.business_service.default: Maximum amount of time (in seconds) an invokeBS call can execute before timeout. All queries issued by the business service are limited to the remaining execution time of the business service call. This is a general timeout and can be overridden for an individual business service, if desired.
» ouaf.timeout.query.default: Maximum amount of time (in seconds) an individual query can run if it is not restricted by a service or some other timeout. For instance, if the online application issues a query that is not part of a service call, a script or a Business Object read, the query will be affected by this timeout. Otherwise, the timeout will be set to the remaining time of the logical transaction it belongs to (service call, script, Business Object execution).
» ouaf.timeout.script.default: Maximum amount of time (in seconds) a service script call can execute before timeout. All queries issued by the script are limited to the remaining execution time of the script call. This is a general timeout and can be overridden for an individual service script, if desired.
» ouaf.timeout.service.<service>: Maximum amount of time (in seconds) service <service> can execute before timeout. This timeout overrides ouaf.timeout.service.default when executing this specific service. The value for <service> may be any valid application service.
» ouaf.timeout.service.default: Maximum amount of time (in seconds) a service call can execute before timeout. All queries issued by the service are limited to the remaining execution time of the service call. This is a general timeout and can be overridden for an individual service, if desired.
For example, to specify values for logical transactions that can be overridden with values specific for a service, script
or business object operation:
ouaf.timeout.service.default=300
ouaf.timeout.service.CILTPD=600
In the example above, a timeout specific to the CILTPD service is specified, overriding the default of 300 seconds.
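Extending the document's example, a Business Application Server spl.properties fragment combining several of the settings above might look as follows (the values shown are illustrative only, not recommendations):

```
# Global defaults (seconds) - illustrative values
ouaf.timeout.service.default=300
ouaf.timeout.script.default=300
ouaf.timeout.business_object.default=300

# Per-service override for CILTPD (from the example above)
ouaf.timeout.service.CILTPD=600
```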
To implement the transaction timeouts for your site then the following files need to be updated:
Specify the value of ouaf.timeout.query.default in the Web Application Server spl.properties file:

$SPLEBASE/etc/conf/root/WEB-INF/classes/spl.properties (Linux/UNIX)

or

%SPLEBASE%\etc\conf\root\WEB-INF\classes\spl.properties (Windows)
Specify the other timeout parameters in the Business Application Server spl.properties file:

$SPLEBASE/etc/conf/service/spl.properties (Linux/UNIX)

or

%SPLEBASE%\etc\conf\service\spl.properties (Windows)
Note:
Changing this file manually may lose changes across upgrades. If the changes need to be preserved
across upgrades then it is recommended to implement a custom template for this file. Refer to the Server
Administration Guide supplied with your product for details of this process.
Implementing guidelines
Java performs garbage collection automatically when it detects that a memory threshold has been reached. This has the advantage that it can help prevent out-of-memory conditions within the Java virtual machine. Note that this is not a guarantee: if the Java virtual machine is extremely active, garbage collection may not be able to reclaim enough memory to prevent an issue. This is usually a rare occurrence, but it can happen.
While garbage collection is an advantage in terms of memory management, it has a drawback. When garbage collection is triggered, all activity within the Java virtual machine freezes so that the collector can do its work efficiently. In most cases the pause is short, but if garbage collection is frequent, the accumulated pause time can impact the performance of a Java application.
While the default garbage collection regime shipped with each version of the Java virtual machine is adequate for most sites, it is possible to tweak the garbage collection tolerances and algorithms used, either to speed up the collection process or to ensure that garbage collection is not triggered as often as it would be with the defaults.
The key here is that while you cannot avoid garbage collection, you want to minimize its impact by reducing how frequently it occurs and, when it does occur, how long it takes.
There are a few guidelines to consider when tuning java garbage collection for applications:
» Consider using Parallel Garbage Collection – In later versions of Java, the ability to garbage collect using multiple CPUs in parallel was introduced to minimize the time spent in garbage collection.
» Tweaking Garbage Collection tolerances – By default, the Java virtual machine has a specific set of tolerances for initiating garbage collection. These can be tweaked to decrease the frequency and duration of garbage collection. The documentation supplied by your Java virtual machine vendor outlines the options to set this.
» Tweaking memory parameters – By default, Java allocates regions of memory within a Java virtual machine to manage the lifecycle of classes and objects. If a product uses one region more heavily than another, this can force more frequent garbage collection.
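As an illustrative sketch only (the exact flags, heap sizes and collector choice are assumptions that must be validated against your Java version and vendor documentation), such options might be appended to the JVM startup:

```shell
# Hypothetical example only - validate every flag against your JVM vendor documentation.
# Fix the heap size (avoids resize-triggered collections) and enable a parallel collector.
export JAVA_OPTIONS="$JAVA_OPTIONS -Xms4096m -Xmx4096m -XX:+UseParallelGC -XX:ParallelGCThreads=4"

# Optionally log GC activity so pause frequency and duration can be measured before tuning further.
export JAVA_OPTIONS="$JAVA_OPTIONS -verbose:gc -Xloggc:/tmp/gc.log"
```

Measuring pause times from the GC log before and after any change is the only reliable way to confirm a tuning benefit.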
Security Configuration
One of the features of the product is the ability to configure the authentication component of security. As with other
J2EE based applications the product supports the standard set of settings and configurations inherent in the J2EE
standard. The web.xml file controls the behavior of the authentication method used in the login-config section of the configuration file.
There are a number of settings, each with additional configuration requirements that must be adhered to:
Setting Comment
BASIC This setting uses the operating system login dialog as the product authentication dialog. The setting must also indicate the realm-name used. This setting is useful for basic environments and can also be used by some Single Sign On solutions that detect this setting.
FORM This is the default setting, where the product (or implementation) supplies a JSP/HTML based login dialog (and error dialog) in the form-login-config section. This is the most common option for the product. Implementers can implement their own forms according to site standards if desired.
CLIENT-CERT This is more advanced two-way SSL based authentication. This is typically used for Single Sign On implementations, and additional settings are typically required, including setting up SSL, to implement secure authentication using certificates. Refer to CLIENT-CERT support for more details.
For a more in-depth discussion of this topic refer to the Oracle WebLogic security documentation.
Typically most customers use the default FORM based login option.
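As an illustrative sketch of the FORM approach (the page names here are hypothetical examples, not the files shipped with the product), a login-config section in web.xml typically follows this shape:

```xml
<!-- Illustrative FORM login-config sketch; realm, form and error page names are
     examples only and will differ in the product-supplied web.xml. -->
<login-config>
  <auth-method>FORM</auth-method>
  <realm-name>myrealm</realm-name>
  <form-login-config>
    <form-login-page>/loginPage.jsp</form-login-page>
    <form-error-page>/formLoginError.jsp</form-error-page>
  </form-login-config>
</login-config>
```

Switching to BASIC or CLIENT-CERT replaces the auth-method value (and, for FORM, removes the form-login-config element), subject to the additional configuration noted in the table above.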
10 A full list of templates is listed in the Server Administration and Batch Administration Guides for the product.
Note:
Oracle JRockit is only available on a subset of Oracle Utilities Application Framework platforms. Refer to
the JRockit OTN web site for more information.
Note:
These instructions are for Oracle Utilities Application Framework V4.0.x and Oracle Utilities Application
Framework V4.1.x only.
Note:
Oracle JRockit has been replaced by Oracle HotSpot JDK for Java 7 and above. Oracle JRockit is not
supported in Oracle Utilities Application Framework V4.2.x and above.
It is possible to use the Oracle JRockit for the product using the following configuration process:
» Install the latest version of JRockit JDK Real Time and optionally, JRockit Mission Control as per the JRockit
installation guide on the machine running the environment.
» Logon to the machine running the environment and execute the splenviron command to set the environment
variables.
» Shut down the environment to make the changes.
» Execute the configureEnv[.sh] –i option to set the installation options.
» In option 1, change the Web Java Home Directory (JAVA_HOME) to the location of the JRockit installation.
» Execute the initialSetup[.sh] to reflect the changes in the various files.
» If native installation is being used, the EAR files for SPLService and SPLWeb need to be redeployed.
» Change the startWebLogic[.sh] script (in embedded installations it is located in splapp; in native installations it is located under the domain location) to add the following line. For example:
set JAVA_OPTIONS=%JAVA_OPTIONS% -Dweblogic.system.BootIdentityFile=…\splapp\security\boot.properties
Note:
The line above is an example for testing purposes only. Alter the options depending on your security requirements.
Oracle WebLogic supports an extensive JMX interface to expose runtime statistics. To enable this facility the
following configuration process should be performed:
» Enable the JMX Management Server in the Oracle WebLogic console under the splapp → Configuration → General → Advanced Settings option. Enable both Compatibility Mbean Server Enabled and Management EJB Enabled (this enables the legacy and the new JMX interfaces). Save the changes and restart the server to reflect the change.
Note:
For Oracle Utilities Application Framework V4.2.0.0.0 and above, this facility is enabled by default.
service:jmx:iiop://<host>:<port>/jndi/<mbeanserver>
where <mbeanserver> is one of:
» weblogic.management.mbeanservers.runtime
» weblogic.management.mbeanservers.edit
» weblogic.management.mbeanservers.domainruntime
Ensure that you execute the splenviron[.sh] utility to set the appropriate environment variables for the desired
environment.
Execute the following jconsole command to initiate the connection to the JMX Mbean server.
Windows:
jconsole -J-Djava.class.path=%JAVA_HOME%\lib\jconsole.jar;%WL_HOME%\server\lib\wljmxclient.jar -J-Djmx.remote.protocol.provider.pkgs=weblogic.management.remote
To connect to the JMX classes, specify the Remote Process URL from the previous steps (i.e. service:jmx:iiop...) using the credentials specified for the Oracle WebLogic console.
For example:
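A connection URL might look like the following (the host, port and Mbean server choice are placeholders for your environment, not values from the product):

```text
service:jmx:iiop://myhost.example.com:6500/jndi/weblogic.management.mbeanservers.runtime
```

The runtime Mbean server shown here exposes the live statistics; substitute the edit or domainruntime server names listed above for configuration or domain-wide access.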
Refer to the Oracle WebLogic JMX MBean Reference for more information.
Note:
For backward compatibility purposes, this setting is disabled.
Note:
If your site is using Data Sources, then this section is not applicable.
The JMX interface for the product can be extended even further by exposing the UCP connection pooling metrics to
track statistics for database connections. To implement this facility, the following process should be implemented:
Refer to the UCP Admin java documents for more information on the statistics tracked.
Note:
This facility is not appropriate for Web Services tracking.
Note:
JMX for online and batch MUST be enabled for this facility to work. Refer to the Server Administration Guide supplied with your product for details.
Java Mission Control provides developers with low-level diagnostics for Java programs. This facility is useful for
diagnosing performance and coding issues in the implementation process. It is possible to use this facility, with the
right version of Java, for online and batch tracking at the Java level. It also can be used with Oracle Enterprise
Manager to perform Live JVM Thread Analysis using the Oracle WebLogic Enterprise Edition Management Pack for
Oracle Enterprise Manager.
To use this facility there are two basic options11 that must be added to the command line for Oracle WebLogic and
the Oracle Coherence startup. These options are:
-XX:+UnlockCommercialFeatures -XX:+FlightRecorder
These flags can be added to the product configuration using any of the following techniques:
Online (Embedded) - Add the above options to the Web Application Additional Options command line as documented above using configureEnv[.sh] -a in option 51.
Batch - Add the above options to the threadpool.*.be templates used by bedit using the
com.ouaf.batch.jvmoptions variable as outlined in the Server Administration Guide.
Once connected, the Flight Recorder and Java Mission Control features are available via the JMX URLs outlined in the Server Administration Guide and Eclipse or Oracle Enterprise Manager.
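For the batch case, the flags might be supplied via the bedit variable mentioned above. The fragment below is a sketch only; confirm the exact template names and variable syntax against your Server Administration Guide:

```properties
# Hypothetical threadpool.*.be template fragment - verify against the
# product-supplied templates before use.
com.ouaf.batch.jvmoptions=-XX:+UnlockCommercialFeatures -XX:+FlightRecorder
```

The same two flags are appended, unchanged, to the online Web Application Additional Options as described in the first technique.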
Overload Protection
Note:
It is recommended to use the Oracle WebLogic console or WLST to maintain work managers or use the
overload protection features of Oracle WebLogic. Customers using embedded installations can set the
overload protection for the product server in the Installation of the default domain. Refer to the Installation
Guide supplied with the product for details.
By default, an Oracle WebLogic domain uses a global work manager to manage connections. The issue with the default global work manager is that it effectively allows unlimited connections. In non-production this value is ordinarily never reached, but it can cause issues on production platforms. If the global default is used, the server may experience an out-of-memory condition before hitting the global connection limit. In Oracle Utilities Application Framework implementations there are a number of ways of addressing this:
» Overload Protection – Oracle WebLogic contains an overload protection setting which tells the server what to do in an overload situation. Typically there is a setting to handle out-of-memory conditions with two values: no-action (default) or system-exit. In production, where high availability is typically configured, it is common to use system-exit so that a failed server exits and can be restarted.
11 There are other options that are available that can be used to further filter the result sets. Refer to Running Flight Recorder for options.
Note:
It is recommended not to use Execute Queue functionality with Oracle Utilities Application Framework as
that is designed for legacy support. Use of work managers is recommended as an alternative to the
Execute Queue functionality.
» Stuck Thread Handling – As part of the work manager definition it is also possible to specify server specific
stuck thread handling, which directs how WebLogic should handle stuck threads and the tolerances affecting the
condition. It is possible to reuse the server definitions of stuck thread tolerances (default), specify whether stuck
threads are ignored or specify work manager specific tolerances. For more information about stuck thread
handling using work managers refer to Using Work Managers to Optimize Scheduled Work.
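As a sketch, a work manager with a maximum threads constraint and work-manager-specific stuck thread handling might be declared in the WebLogic domain configuration. The names, targets and values below are illustrative assumptions; production values must come from your own capacity planning:

```xml
<!-- Illustrative config.xml fragment; names and numeric values are examples only. -->
<self-tuning>
  <max-threads-constraint>
    <name>ProductMaxThreads</name>
    <target>myserver</target>
    <count>50</count>  <!-- cap concurrent request threads to protect memory -->
  </max-threads-constraint>
  <work-manager>
    <name>ProductWorkManager</name>
    <target>myserver</target>
    <max-threads-constraint>ProductMaxThreads</max-threads-constraint>
    <!-- Work-manager-specific stuck thread tolerances (seconds / thread count) -->
    <work-manager-shutdown-trigger>
      <max-stuck-thread-time>600</max-stuck-thread-time>
      <stuck-thread-count>5</stuck-thread-count>
    </work-manager-shutdown-trigger>
  </work-manager>
</self-tuning>
```

In practice such definitions are maintained through the Oracle WebLogic console or WLST, as recommended in the note above, rather than by editing config.xml directly.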
Resource Management
By default, resources in the product are set to the default tolerances supplied with Oracle WebLogic and the Oracle Database. While these defaults (usually unlimited access) may be appropriate for non-production environments, they may not be appropriate for production environments.
There are a few resource management capabilities that can be used by the product to set appropriate resource
limits:
» Work Manager Support – It is possible to setup Application Scoped Work Managers to specify constraints to
prevent out of memory or overload issues on product servers. These control client connections to the servers to
ensure optimal resource usage on the product servers. Refer to Overload Protection for more information about
this capability.
» Database Resource Plan Support – It is possible to set and manage database resources at various levels using the Oracle Database Resource Manager. This allows fine-grained control over resources at the database level. For more information about resource management, refer to Using the Database Resource Manager to Manage Database Server Resources (Doc Id: 2067783.1) available from My Oracle Support and Managing Resources with Oracle Database Resource Manager.
» Transaction Timeout Support – The Oracle Utilities Application Framework has a facility that allows global and
service transaction timeouts to be set to help limit resource usage. These provide a method of policing
transactions to operate within acceptable tolerance. Refer to Enabling Service Timings for more information about
this facility.
Setting these tolerances to match your business performance expectations and/or service level agreements will depend on the traffic experienced at your site. Using the available monitoring facilities can help determine appropriate tolerances.
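To illustrate the Database Resource Plan technique, a minimal resource plan might be created with the DBMS_RESOURCE_MANAGER package. The plan, group names and percentages below are hypothetical examples, not product-mandated values:

```sql
-- Illustrative sketch: a simple resource plan giving priority to online work.
-- All names and percentages are examples; design your own plan per DBA guidelines.
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.CREATE_PLAN(
    plan    => 'PROD_PLAN',
    comment => 'Example plan for product workloads');
  DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(
    consumer_group => 'ONLINE_GRP',
    comment        => 'Online product sessions');
  -- Give the online group the larger CPU share at the first management level.
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan => 'PROD_PLAN', group_or_subplan => 'ONLINE_GRP',
    comment => 'Online priority', mgmt_p1 => 70);
  -- A directive for OTHER_GROUPS is mandatory in every plan.
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan => 'PROD_PLAN', group_or_subplan => 'OTHER_GROUPS',
    comment => 'Everything else', mgmt_p1 => 30);
  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/
```

Sessions then need to be mapped to the consumer groups (for example by service or user), as described in the Oracle Database Resource Manager documentation referenced above.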
12 These are the only two constraint types supported in the current release. Theoretically Minimum Threads Constraint is also supported but tends not
to be used by the majority of implementations.
Data management techniques used with products vary according to the types of data stored within the product. Product data is typically divided into a number of data types, and each of these needs to be managed in the database for a varying length of time, as the product has different uses for each. In most products the data types can be categorized as follows:
Configuration Data (a.k.a. Administration data) – Data driving the configuration of the product (e.g. menus, rates, security, reference data etc.). Maintained by a subset of individuals. Kept indefinitely and represents only a small part of any database.
Master Data – Data pertaining to customers/taxpayers (such as personal records, addresses, account information, contracts, etc.). Maintained by end users. Kept indefinitely, but retention can be driven by government legislation such as privacy laws or industry rules.
Transactional Data – Day-to-day data relating to any interaction or activity against the Master Data (e.g. bills, cases, payments, contacts etc.). Data that is still active is retained for operational reasons. Historical data is deleted or archived according to business rules or government legislation.
The table above illustrates the various differences between the types of data and their usual data retention rules.
During an implementation and post implementation, you must be aware of the data types and then plan the data
retention rules accordingly.
Note:
This section is an introduction only. Refer to the ILM Planning Guide (Doc Id: 1682436.1) available from
My Oracle Support.
One of the most used techniques of managing data is Information Lifecycle Management. The goal of Information
Lifecycle Management is to minimize the storage costs of holding data but still making sure that it is appropriately
accessible to the business for business processes.
The fundamental concept behind information lifecycle management is that transactional data has an implied lifecycle
where it goes through a number of stages:
Typically it is the customer's business practices that dictate the amount of historical data stored in the database at any time. There are a number of key factors that govern data retention:
» Government legislation – Most countries have a legal requirement to have information available in a computer
system. Typically this requirement separates how much should be active and how much should be retained in a
passive medium (e.g. archive or available in a backup format).
» Business requirements - There is usually a business requirement to work on historical data. For example the
business may need to be able to process financial data over a number of years. This requirement typically
dictates the amount of historical data kept.
» Physical capacity of the hardware – At the end of the day any machine used for any software has a physical
limit. This limit is usually based upon business requirement and cost to the business.
» Table Identifiers – All tables in the Oracle Utilities Application Framework based products have identifiers (some
have multiple). The physical key size can be an indicator of the limit of the records that can be kept. It should be
noted that most of the Oracle Utilities Application Framework based products have designed their key sizes to
cover the majority of expected data cases in the field.
» Audit requirements – Typically, each site will have some sort of auditing function, within the company or via an independent auditing firm. This auditing capability will expect a certain amount of historical data, directly or indirectly in the product, to adequately operate an audit. This requirement is usually forgotten by most sites until they need it. During an implementation, or soon after, the audit requirements should be clarified and factored into any data retention policy.
It should be noted that the products themselves do not impose any particular data retention policy.
Typically, once the status of a record in the staging tables used for interfaces becomes Complete, it becomes redundant data. The data will have been reflected in the main product tables and is no longer required in the staging tables. Removing completed records on a regular basis can have storage benefits as well as performance benefits.
It is assumed that completed staging records are no longer required after a period of time, as the data they contain has been reflected in the main tables. There is no business reason to keep staging records for long periods of time after they have been completed.
Regular cleanups of the staging tables to remove completed records will have great performance benefits on
interfaces. Successful sites run the provided purge jobs to improve performance and reduce disk space usage.
To decide when to run these purge jobs and what parameters to pass to them, the following is recommended:
» Work out with the business at the site how long they wish to retain completed records. You can stress to them that NO important data is lost in purging completed records, as their data is reflected in the main tables.
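Conceptually, the purge criterion is as simple as the sketch below. The table name, status column and flag value are hypothetical, and sites should use the purge jobs supplied with the product rather than raw SQL; this is shown only to illustrate the retention logic being agreed with the business:

```sql
-- Hypothetical staging table and columns - in practice, run the
-- product-supplied purge batch jobs with the agreed retention period.
DELETE FROM ci_xml_staging_up
 WHERE xml_staging_status_flg = 'C'            -- completed records only
   AND create_dttm < ADD_MONTHS(SYSDATE, -3);  -- retention period agreed with the business
COMMIT;
```

The retention period (three months here) is exactly the parameter that the discussion with the business above is meant to produce.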
Partitioning
One of the most popular data management techniques is the use of partitioning on tables. Partitioning enables
tables and indexes to be split into smaller, more manageable components.
Partitioning allows a table, index or index-organized table to be subdivided into smaller pieces. Each piece of
database object is called a partition. Each partition has its own name, and may optionally have its own storage
characteristics, such as having table compression enabled or being stored in different tablespaces. From the
perspective of a database administrator, a partitioned object has multiple pieces which can be managed either
collectively or individually. This gives the administrator considerable flexibility in managing partitioned objects.
However, from the perspective of the product, a partitioned table is identical to a non-partitioned table; no
modifications are necessary when accessing a partitioned table using SQL.
The key to success to partitioning is recognizing which tables are candidates for partitioning and what partitioning
scheme to use. Partitioning must be planned and designed into a database to ensure that the partitioning regime is
optimal for your products.
The ideal candidates for partitioning are large tables with a small number of indexes. The benefits of partitioning are
optimal for large tables rather than applying the principles across all tables. The minimal number of indexes is a
criterion to minimize the likelihood of crossing partition boundaries in SQL.
Once the candidate tables are chosen, the next step is to decide which partitioning scheme to use. Database
vendors have implemented numerous ways of dividing a table into partitions. Each of these schemes (and
sometimes combination) tells the database how to split the data into the various partitions as well as how to access
the partitions. The most common partitioning scheme used is known as range partitioning where a range of values
(index based) is used to designate the partition a record is placed within. Refer to the partitioning documentation
provided by your database vendor for details of all the different schemes that can be used to partition your table
data.
Table partitioning represents the easiest method of data management and is usually the first data management
technique used before other techniques are considered.
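As a sketch of the range partitioning scheme described above, a date-ranged table might be declared as follows. The table and column names are illustrative examples only, not product tables:

```sql
-- Illustrative range partitioning by date; names and boundaries are examples.
-- Each yearly partition can be managed (moved, compressed, dropped) individually.
CREATE TABLE bill_history (
  bill_id     NUMBER        NOT NULL,
  bill_dt     DATE          NOT NULL,
  bill_amount NUMBER(15,2)
)
PARTITION BY RANGE (bill_dt) (
  PARTITION p_2022 VALUES LESS THAN (TO_DATE('2023-01-01','YYYY-MM-DD')),
  PARTITION p_2023 VALUES LESS THAN (TO_DATE('2024-01-01','YYYY-MM-DD')),
  PARTITION p_max  VALUES LESS THAN (MAXVALUE)  -- catch-all for future dates
);
```

Application SQL is unchanged by this declaration; dropping an aged partition then retires a whole year of historical data in a single operation.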
Compression
Note:
Database level compression varies from one database version to another. In some cases, it is included as
an optional component of the database and in other cases, it is a separate option that must be obtained
from Oracle.
A technique that is starting to emerge from the database vendors is compression of data. This can be done at a
database level (global) or a table level and typically requires no changes to a product to implement.
As data is stored and retrieved, it is compressed and decompressed transparently before being passed back to the product. As far as the product is concerned, it is unaware whether the data is compressed or not. This appeals to database administrators, as they can experiment with compression without the need to involve the product developers.
Database systems have not heavily utilized compression techniques on data stored in tables. One reason is that the
trade-off between time and space for compression is not always attractive for databases. A typical compression
technique may offer space savings, but only at a cost of much increased query time against the data. Furthermore,
many of the standard techniques do not even guarantee that data size does not increase after compression.
Over time, database vendors have addressed the trade-off by implementing unique compression techniques. It has come to the stage where there is virtually no negative impact on the performance of queries against compressed data; in fact, compression may have a significant positive impact on queries accessing large amounts of data, as well as on data management operations like backup and recovery. Each database vendor will supply guidelines on the effective use of compression to minimize any overhead for all SQL statements (including INSERTs, UPDATEs etc.) and on which tables are the best candidates for compression.
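As an illustrative sketch (the table name is hypothetical, and the exact syntax and licensing vary by database version, as the note above warns), table-level compression on Oracle might be enabled like this:

```sql
-- Illustrative only; verify syntax and licensing for your database version.
-- Basic compression, suited to bulk-loaded, rarely-updated data:
ALTER TABLE bill_history COMPRESS;

-- Where the Advanced Compression option is licensed (an assumption to verify),
-- OLTP-friendly compression can be used instead:
-- ALTER TABLE bill_history ROW STORE COMPRESS ADVANCED;
```

Because compression is transparent to the application, such experiments can be performed and reversed by the database administrator without product changes.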
Database Clustering
One of the more advanced features that has emerged as a valid data management technique is the ability for databases to be clustered. This is a relatively new technique for data management, as most people associate clustering with availability rather than management of data volumes.
Experience within the industry has shown that using clustering capabilities can also improve performance when large amounts of data are involved. Logically, clustering enables the database to access more processing power by spreading the workload across machines.
This technique is applicable where the volume of data is impacting database performance. One of the major symptoms is that CPU usage on the database server is consistently high, no matter what tuning is performed at the database and product level. This implies that the database is CPU bound; while there may be an option to add more CPUs to the server, clustering the data becomes a viable alternative.
While implementing clustering has been made progressively easier with each release of the database management
system, implementing clustering must be planned using the guidelines outlined by the database vendor. Refer to the
documentation provided on clustering by your database vendor.
Typically a site will have a preferred regime and set of tools used to back up and recover all systems at the site. When implementing the product, this regime and set of tools is typically reused to cater for the product and business needs.
When considering a backup regime for the product, the following should be considered:
» There is nothing within the product technically that warrants a particular approach to backup and recovery. Most customers continue to use their existing approaches.
» There is nothing within the product technically that warrants a particular backup and recovery tool. Most customers use the native tools provided with their platforms for cost savings, but some customers have purchased additional infrastructure to take advantage of faster backups/recoveries or additional features provided by such tools.
» If your site does not have a backup regime already the following can be considered default industry practice:
» Use Hot Incremental backups on production during the business week to reduce outage times.
» Do a FULL backup (Hot or Cold) once a week at least to reduce recovery times.
» Verify backups after they are taken to reduce risk of delayed recoveries.
» On non-production, consider either the same regime as production or consider regular FULL backups at peak
periods in an implementation.
» At runtime the user can add the URL at first login using the browser compatibility mode settings. This is explained
in an article on the Microsoft site.
» If using Internet Explorer 11 then it is possible to set the compatibility from the menu as explained in an article on
the Microsoft site.
» If sites want to implement automatic group policies to define the product URL's using compatibility mode then
refer to the Enterprise Mode article on the Microsoft site.
In Oracle Utilities Application Framework V4.3.x and above, compatibility mode is no longer required.
Popup Blockers
The browser interface to the product uses popup windows for initial searches on some transactions. Commercial and inbuilt popup blockers may interfere with the display of these windows. It is recommended to configure overrides in these blockers for the relevant URLs used for the environments onsite.
A popup blocker may block the initial popup search windows on some transactions, but may not affect subsequent searches that are explicitly requested by the end user.
The Internet Explorer settings used must match the recommended settings as outlined in the product Installation
Guide, which includes:
» Internet Explorer cache settings should be set to Automatically, NOT Every visit to the page, for production use. Certain elements of the browser user interface pages are cached on the client for performance reasons. Incorrect cache settings in Internet Explorer will increase bandwidth usage significantly and degrade performance, as screen elements will be retrieved on each visit rather than from the cache.
» JavaScript must be enabled. The product framework uses JavaScript to implement the browser user interface.
» HTTP 1.1 support must be enabled. If you use a proxy to get to the server, then also check Use HTTP 1.1 through proxy connections.
It may be worth investigating whether changing these settings can improve performance at your site (particularly the number of network buffers used). Altering the settings may improve performance, but may also adversely affect it (due to higher CPU usage). Typically the majority of customers use the default settings provided by the manufacturer.
Network bandwidth
One of the most common questions asked about the product is the network footprint of the Oracle Utilities
Application Framework based product. This question is difficult to answer precisely for a number of reasons:
» The amount of data sent up and down the network is dependent on how much change is done by an individual
user at the front end of the product. Only the elements changed by the end user are transmitted back to the
server. The more the user changes the more the data is transmitted. Given the numerous possible permutations
and combinations for data changes at any given time, this can be hard to estimate.
» The Oracle Utilities Application Framework supports partial object faulting. This means the framework only sends data to the client that is being displayed. In screens with more than one tab, the framework only sends the data for the tabs accessed by the end user, that is, only the part of the overall object required by the screen. Most users tend to operate on a small number of tabs, but this can vary from transaction to transaction.
» All transmission between the client and server are compressed using HTTP 1.1 natively supported compression.
This can reduce the actual size of the data transmission considerably depending on the content of the changes.
Some customer sites have found that traffic that is not legitimate can adversely affect network performance. Traffic
that is considered not legitimate includes:
» Traffic generated by viruses and Trojans – There are plenty of viruses and Trojans on the general Internet that can cause bandwidth issues. Most sites have regular virus protection in place to minimize the impact on their network, but not all. While the product does not require such protection, the industry in general recognizes the need for it.
» Unauthorized large transfers – Large transfers of data can adversely affect performance as it can soak up
bandwidth if the transfer is not configured correctly. There have been instances of large FTP transfers slowing
down traffic on lower bandwidth networks.
Ensuring that only legitimate traffic is on a network can provide greater bandwidth for all applications (including
product) and improve consistency.
A common assumption about latency is that data should be transmitted instantly between one point and another (that is, with no delay at all). In reality, the contributors to network latency include:
» Propagation - This is simply the time it takes for a packet to travel between one place and another at the speed
of light.
» Transmission - The medium itself (whether optical fiber, wireless, or some other) introduces some delay. The
size of the packet introduces delay in a round trip since a larger packet will take longer to receive and return than
a short one.
» Router and other processing - Each gateway node takes time to examine and possibly change the header in a
packet (for example, changing the hop count in the time-to-live field). This is a common cause of network latency.
» Other computer and storage delays - Within networks at each end of the journey, a packet may be subject to
storage and hard disk access delays at intermediate devices such as switches and bridges.
The product infrastructure, such as the J2EE Web Application Server and Java itself, needs access to the hosts and ports named in the configuration. When specifying a host name in configuration files, ensure the host housing that component can connect (directly or via name resolution) to the host specified (even if it is the local host). The table below outlines what each component needs access to:
Web Application Server (including Web Services) and Business Application Server – Database Server14
Ensure ports are available and unique for the host as defined in your firewall. Inability to connect to ports will result
in a failed startup.
If there are issues, then consider using localhost as your hostname in the configuration for the relevant components (mainly the Web Application Server and Business Application Server). If using localhost, consider installing the loopback adapter for your operating system. Use of the loopback adapter and localhost is highly recommended if using dynamic server addresses and/or dynamic server names (such as in virtualization)15.
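For example, the local host name might be pinned to the loopback address in the hosts file (a sketch; the host name is an illustrative placeholder):

```shell
# /etc/hosts (Linux/Unix) or %SystemRoot%\System32\drivers\etc\hosts (Windows)
# Map the configured host name onto the loopback address so the product
# components resolve it consistently even when the machine's address changes.
127.0.0.1   localhost myenvhost
```

This keeps name resolution stable for the Web Application Server and Business Application Server when the underlying machine address is dynamic.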
When using multiple network connections, ensure the product uses the correct network connection(s) to operate.
If using CLUSTERED mode in batch, ensure that the multicast protocol is enabled and that the configured multicast address and port are available through your firewall and networking configuration.
14 The database connection was used to load cache data quickly at startup. In Oracle Utilities Application Framework V4.1, this is not required as cache
loading is performed via the Business Application Server (via Patch 11900153 ).
15 This is recommended for most Oracle products.
The log is generated in W3C common log format and can be analyzed by third party log analyzers for further
analysis. A full description of the log, its usefulness and the log analyzers that can read it is documented in the
Performance Troubleshooting Guideline Series (Doc Id: 560382.1) whitepapers available from My Oracle Support.
» It is possible to track errors and trends from the log using the log analyzers.
» It is possible to parse the log at a low level and determine the number of concurrent users and the users that have
used the system (and interestingly conversely who has NOT used the system).
» It is possible to track flows of individual sessions, known as click streaming, to track the screens and data used for
the screens.
» It is possible to determine the criteria used by users for searches. This is useful for detecting wildcard searching.
This log is useful but it is large so needs to be managed as suggested in Backup of Logs.
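As a simple illustration of the low-level parsing mentioned above, the sketch below counts the distinct users recorded in a common-format access log. It assumes the standard common log field layout (host, ident, authuser, date, request, status, bytes); the sample lines are illustrative only and your configured log format may place the user field elsewhere.

```python
def distinct_users(log_lines):
    """Return the set of authenticated users seen in common-format log lines."""
    users = set()
    for line in log_lines:
        parts = line.split()
        # Field 3 (index 2) is the authuser in the common log format;
        # "-" indicates no authenticated user for that request.
        if len(parts) >= 3 and parts[2] != "-":
            users.add(parts[2])
    return users

# Hypothetical sample lines for illustration
sample = [
    '10.0.0.1 - fred [01/Jan/2020:10:00:00 +0000] "GET /cis.jsp HTTP/1.1" 200 512',
    '10.0.0.2 - mary [01/Jan/2020:10:00:01 +0000] "POST /login HTTP/1.1" 200 64',
    '10.0.0.1 - fred [01/Jan/2020:10:00:02 +0000] "GET /menu HTTP/1.1" 200 128',
]
```

Comparing this set against the list of users defined to the system is one way to determine who has NOT used the product.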
Note:
In Oracle Utilities Application Framework V4.0 and above the JVM Options can be configured using
parameters. Refer to the Server Administration Guide provided with your product for more details.
» Creating additional servers within the instance to cater for the load.
Customers implement the latter suggestion in the following ways:
» Oracle WebLogic – A server entry for each new server is set up in the same Oracle WebLogic instance. The port
number can be the same (if the server is housed on a separate machine, known as clustering) or a different port
number (i.e. managed servers). A proxy is required to provide a common connection point and to implement load
balancing. The memory footprint will be the same size for each server.
» IBM WebSphere – A new server is created within the WebSphere instance. The port number can be the same (if
the server is housed on a separate machine, known as clustering) or a different port number (i.e. managed
servers). A proxy is required to have a common connection point and to implement load balancing. The memory
footprint can be different for each server as that is held against the server entry within WebSphere.
Refer to Production Environment Configuration Guidelines (Doc Id: 1068958.1) whitepaper available from My Oracle
Support for more guidelines for production systems for JVM memory settings.
Most customers change the debug setting to false to disable global debug information. It is possible to debug
individual transactions using the interactive debug facility.
Note:
This requires the Application Descriptors for all applications to be updated.
Load balancers
Oracle Utilities product customers who have more than one Application Server (physical or logical) must use a load
balancer to route the traffic evenly across the available servers. This load balancer can be either software based,
such as a web server with the appropriate plugin from the Application Server vendor, or a hardware based load
balancer (such as BigIp or other Layer 7 switches). Experience has shown that customers with a large number of
users (typically greater than 1500) tend to use hardware load balancers and smaller customers use software based
load balancers.
Using load balancers with the product may not guarantee that load is evenly distributed, as the transactions do not
have a consistent resource load factor. The resource load factor for any product depends on the transaction type
and the data used in that transaction. For example, search transactions are different from maintenance transactions,
and the resource usage of any search depends on the criteria used. Two executions of the same search will have
different response and resource usage profiles. On top of that, the load on a server is the sum of all the transactions
sent to it, and that mix varies from second to second, minute to minute, hour to hour. The best you can do is
distribute the incoming requests as evenly as possible and accept some short-term imbalance.
When installing a load balancer there are a number of algorithms for load balancing offered:
» Round Robin – Traffic is routed to each server on a rotating basis. This is the most common algorithm used by
implementations and the recommended setting.
» Random – Traffic is routed randomly to the servers. Not commonly used, but may be used if traffic is random
enough.
» Weighted Round-Robin Allocation – A variation on Round Robin that supports clusters where all servers are not
the same size. Not generally used by implementations.
» IP Address – Traffic is routed using the client IP address as the identifier, with servers assigned IP address
ranges. Has been used by customers, but has limitations if used with virtual servers such as Terminal Services or
Citrix.
» Load – Load factors of transactions are measured and used to determine which server is best suited. Not used
with the product, as most load factors are inconsistent across transaction invocations.
Most customers use Round Robin as it is simple and, given that load is unpredictable, can yield the best results.
Most customers understand that in some periods the load will not be balanced, but on average the load is relatively
balanced. Remember that each transaction's response time is a function of how much data is changed.
Preload or Not?
One of the startup features of the product, from V1.5 onwards, is the preloading of pages to save time. This
preloading process dynamically rebuilds the screen definitions from the XML metadata on startup (instead of on first
invocation). While this saves end users time, the startup of the Web Application Server is delayed until the last of
the screens is preloaded.
While the preloading of individual screens is very quick (measured in milliseconds), building all screens (1000+) can
cause significant delays to initial availability AFTER a restart. It is possible to influence the amount of preloading
using two parameters in the Web Application Descriptor:
» preloadAllPages – This parameter controls how much preloading takes place, if preloading is enabled. A value
of true preloads every screen in the product. A value of false preloads only the screens off the Main menu (the
screens the end users will be using).
» disablePreload – This parameter controls whether preloading is performed at all. It effectively overrides the
preloadAllPages parameter.
The effect of changing the parameters is outlined in the following table:
preloadAllPages  disablePreload  Effect
true             true            Pages are not preloaded at all. The first invocation of a screen by the first user loads
                                 the screen for all users. Can cause a slight delay in initial screen load for a single
                                 user, but application startup is quicker.
true             false           All pages are preloaded, including the administration and utilities menus. This setting
                                 is not recommended for production as it delays Web Application Server startup
                                 unnecessarily.
false            true            Pages are not preloaded at all. The first invocation of a screen by the first user loads
                                 the screen for all users. Can cause a slight delay in initial screen load for a single
                                 initial user, but application startup is quicker.
false            false           Default. Pages on the Main menu are preloaded. This delays the startup of each
                                 managed server but ensures screens load quicker for ALL users.
Changing these parameters affects availability rather than performance, but should be considered if availability is
critical or you are not using all the screens in the product.
It is recommended that the following settings be implemented if you do not use the entire product or you want
startup to be quicker:
preloadAllPages false
disablePreload true
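In the Web Application Descriptor (web.xml), these would appear as parameter name/value pairs. The fragment below is illustrative only: the exact element placement, and whether the parameters are defined as context or servlet initialization parameters, varies by product version, so check the descriptor shipped with your version before editing.

```
<!-- Illustrative fragment of the Web Application Descriptor -->
<context-param>
    <param-name>disablePreload</param-name>
    <param-value>true</param-value>
</context-param>
<context-param>
    <param-name>preloadAllPages</param-name>
    <param-value>false</param-value>
</context-param>
```

Remember that descriptor changes made directly may be overwritten by the product configuration utilities, so apply them via the supported templates where possible.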
The reason that sites use the native utilities is that operations staff are more familiar with them, they offer more
options, and they typically have a number of interfaces (not just the command line). The Oracle Utilities Application
Framework provided utilities utilize the native utilities but use only a subset of their options.
If the native utilities are used then the spl[.sh] utility should only be used to start and stop non-Web Application
Server components.
If the implementation uses multiple servers then a proxy is needed to group the servers into a cluster or managed
configuration for load balancing purposes. There are two alternatives for such a proxy:
» Software – Each of the Web Application Servers supported by the product provides a plugin to use an HTTP
server such as Oracle Traffic Director, Apache, Oracle HTTP Server, Netscape or IIS as a proxy. Typically the
plugin is installed within the HTTP server and configured to define the server addresses and the load balancing
scheme.
» Hardware – Increasingly the network router manufacturers are making hardware products that act as network
proxies or load balancers (known as Layer 7 load balancers). Hardware such as BigIp, WebSwitch, NetScaler etc
are increasingly performing load balancing within intelligent hardware. In this case, you simply configure the
servers and ports to a virtual address in the hardware and the load balancing scheme to use.
Customers with multiple servers use either a hardware or a software proxy, with larger scale customers favoring
hardware based solutions. When using a proxy, make sure the following are taken into account:
» The proxy server must support the IE caching scheme and not disable it or adversely affect its operation. This will
increase network throughput.
» The proxy server must support session cookies. It must be configured to support the passing and processing of
session cookies as they are used for security tokens in product. Failure of this point will result in the security
dialog being displayed before EVERY screen.
Tests and experience have shown that the Java Virtual Machine has an internal limitation on the number of threads
that can be safely supported for transactions. This is not a severe limitation, but it determines the number of active
transactions (i.e. users) that can be supported on a Web Application Server at any time.
The easiest method for determining the number of instances required is to divide the worst-case number of users
expected on the system by 300 and round up to the next integer. For example, to support 750 users specify 3
instances; to support 500 users specify 2 instances, and so on. This method assumes the worst case. Regular
monitoring of the actual number of connections will reveal whether this needs to be altered.
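The division described above can be expressed directly; the 300-users-per-instance figure is the worst-case guideline from the text, not a hard product limit, so adjust it if monitoring shows different behavior at your site.

```python
import math

def web_instances(expected_users, users_per_instance=300):
    """Worst-case number of Web Application Server instances required."""
    # Round up: a partial instance's worth of users still needs a full instance.
    return math.ceil(expected_users / users_per_instance)
```

For example, `web_instances(750)` yields 3 and `web_instances(500)` yields 2, matching the worked examples above.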
The thread pool manages the number of active connections to the Web Server (see figure below). A pool is used as
it saves resources by allowing reuse of connection threads instead of constantly creating and destroying threads.
Note:
For newer versions of Oracle WebLogic the thread pool is automatically managed by the Web Application
Server itself so the settings explained in this section may not apply. If you choose to manually manage the
connections in Oracle WebLogic then the advice does apply.
The number of connections allocated in the pool is not the same as the number of users logged on. As the product is
a stateless application, the thread pool represents the number of users actually hitting the web server, not idle users.
Idle users in a stateless application consume little or no resources (the only resource an inactive user holds is an
open socket to the web server).
Therefore the size of the thread pool at any time is the number of ACTIVE users using the product. This is the peak
concurrent users from any channel. For the product, the number of users for the Web Server is dictated by this
formula:
Web Services threads should be treated as users as well. This is because they typically share the same thread pool.
Thread pools are not static in size; they can grow and shrink depending on the traffic volumes experienced. For the
product, thread pools have three attributes that need to be considered for sizing:
» Minimum Size - This is the size of the thread pool at Web Application Server startup time and the absolute
minimum if the pool is shrunk due to inactivity. For the product, this typically represents the typical load on the Web
Application Server; in other words, the typical number of active users on the system at any time. Most customers
use either the typical load for the day period or the typical load for after business hours. The latter is used where
sites want to minimize resource usage, as the pool size is directly related to the amount of memory used by the
Web Application Server. The higher the minimum, the higher the memory usage for the server (even at rest).
» Maximum Size - This is the maximum size the thread pool can grow to within the Web Application Server when
responding to the peak load of the traffic. For the product this typically represents the largest amount of traffic
expected at any point in time (peak days). If the maximum is set too low for the load, end users will experience
delays even getting a connection to the Web Application Server. Again, the value here is tied to memory usage:
the higher the value, the higher the memory footprint at peak.
» Inactivity Tolerance - This value (usually in seconds) is the amount of time that a thread is not allocated to a
user before it is destroyed. It is used to shrink the pool when it has grown above the minimum and traffic
subsequently drops. Each Web Application Server has its own default (and even a different name) for this value.
Typically customers leave the default, but it is worth noting in case it needs changing in the future.
How do you work out the pool sizes? The product does not have a specific recommendation as it varies according to
the volume of transactions but the following has been observed at customer sites:
16 Oracle Service Bus is only applicable when using the Oracle Utilities adapters for Oracle Service Bus.
As for the maximum, the only advice that is applicable is that the value should NOT equal the number of users you
have defined to the system. The value will vary according to the expected peak traffic experienced at the site.
Customers have used between 33-70% of the number of defined users as the setting for the maximum pool size. To
determine the optimum value for your site, it may be necessary to use trial and error.
Note:
Setting the minimum and maximum to higher than normal values may waste memory resources on the
Web Application Server and may cause performance degradation.
Once you have applied these settings in your configuration, monitor them to see whether you need to adjust the
minimums and maximums. Customers have determined their own rules of thumb and reach the sweet spot after a
few weeks or months of testing or production.
Each of the Web Application Server vendors has specific instructions for integrating LDAP but the same process is
followed:
» Determine LDAP Query - The LDAP query to find the users must be determined. Even though LDAP is a
standard protocol defined by the IETF, the repository structure itself varies from vendor to vendor, and even the
same vendor's repository structure varies from customer to customer as it can be altered to suit the business
model. This is the hardest part of the process, as the query needs to be correct or it will not return the right
records; it is akin to submitting the wrong SQL statement. There are tools, like adfind (for Microsoft ADS, for
example), to help you with this process.
» Define LDAP settings to Web Application Server - Input the query and credentials to access the LDAP
repository. This will vary between Web Application servers but basically you need to define the following:
» The location (host) of the LDAP server(s)
» The port numbers for the LDAP server(s) (usually 389)
» The credentials used to read the LDAP server(s) (userid/password)
» The LDAP query to get the users (and sometimes groups for some Web Application Servers).
» (Optional) Cache settings to save data retrieved from the LDAP server for performance reasons.
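As an illustration only, the settings above might resemble the following for a Microsoft ADS style repository. Every name here is hypothetical (your base DNs, attributes and placeholder syntax will differ, as noted above, and the user-substitution token varies by Web Application Server):

```
Host(s):     ldap1.example.com, ldap2.example.com
Port:        389
Bind DN:     cn=svc-reader,ou=Service Accounts,dc=example,dc=com
User base:   ou=Users,dc=example,dc=com
User filter: (&(objectClass=user)(sAMAccountName=%u))
```

Validating the filter with a standalone tool (such as adfind) against the real repository before configuring the Web Application Server saves considerable debugging time.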
To make the best use of the AppViewer the following advice is offered:
» The AppViewer is intentionally provided blank. It must be primed using a predefined set of batch jobs that take
data from the metadata (including ANY customizations) and generate the AppViewer content. You will need to run
the jobs regularly if you update the metadata regularly and want the information reflected in the Application Viewer.
F1-AVALG – Generate AppViewer XML file(s) for Algorithm data (includes javadocs). This includes code generation as well.
F1-AVBT – Generate AppViewer XML file(s) for Batch Control. This is useful for run book information.
F1-AVMO – Generate AppViewer XML file(s) for Maintenance Object data
F1-AVTBL – Generate AppViewer XML file(s) for Table/Field data
F1-AVTD – Generate AppViewer XML file(s) for To Do Type
» The introduction of the batch jobs means you can decide which information is important for your site to display in
the AppViewer. For example, if you do not wish to have To Do Types documented, you can omit that information
by not running that job. If you wish to populate ALL the information, you can use the genappvieweritems
command (or genappvieweritems.sh for UNIX).
Consider only populating the information in any design and development environments to save disk space. The
AppViewer can extend to a number of gigabytes if fully loaded.
In releases of the Oracle Utilities Application Framework prior to V4, this meant manually changing the utility scripts
provided with the product, which could be overwritten during upgrades. In Oracle Utilities Application Framework V4
and above, these are exposed as the following configuration settings:
ANT_ADDITIONAL_OPT (ANT) – Additional java options for the ANT make tool.
BATCH_MEMORY_OPT_MAXPERMSIZE (Batch) – Maximum permanent generation size for Batch Threadpool workers.
WEB_ADDITIONAL_OPT (Web/Business) – Additional java options for the J2EE Web Application Server.
WEB_MEMORY_OPT_MAXPERMSIZE (Web/Business) – Maximum permanent generation size for the J2EE Web Application Server.
WEB_MEMORY_OPT_MIN (Web/Business) – Minimum memory for the J2EE Web Application Server.
The values for these settings will vary according to your site needs and the JVM vendor used at your site. The
following guidelines should be considered when changing these values:
» The additional java options supported by each JVM vendor differ slightly to take advantage of specific platform
capabilities of the JVM. Refer to the JVM options documentation provided with your JVM. For Oracle/Sun based
JVMs refer to JVM HotSpot VM Options.
» Ensure any options specified are within the constraints and restrictions of the JVM. For example, setting invalid
values may result in failure or unexpected behavior.
» Do not specify the -Xms, -Xmx or -XX:PermSize parameters as additional options, as these already have
dedicated settings.
The following common settings have been used by customers:
-XX:+UseGCOverheadLimit – Use a policy that limits the proportion of the VM's time that is spent in Garbage
Collection before an OutOfMemory error is thrown.
-XX:+UseLargePages – Use large page memory. See Large Memory Pages for more details.
-XX:+HeapDumpOnOutOfMemoryError – Dump the heap to file when java.lang.OutOfMemoryError is thrown.
Commonly used by Oracle Support if necessary.
-XX:+PrintGC – Print a message when garbage collection occurs.
-XX:+FlightRecorder – Allow Flight Recorder to be used on this JVM for Java Mission Control.
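On Oracle Utilities Application Framework V4.x, options such as these would typically be supplied through the WEB_ADDITIONAL_OPT setting described earlier. The line below is an example only; the specific flags shown are for illustration and must be validated against your JVM version and vendor before use:

```
WEB_ADDITIONAL_OPT="-XX:+UseLargePages -XX:+HeapDumpOnOutOfMemoryError -XX:+PrintGC"
```

Set the value via the supported configuration utility rather than editing generated scripts, since script changes may be overwritten when initialSetup[.sh] runs.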
Note:
The Production Environment Configuration Guidelines (Doc Id: 1068958.1) whitepaper available from My
Oracle Support contains advice for settings for all versions of the Oracle Utilities Application Framework
based products.
http://<host>:<port>/<server>/cis.jsp
<port> The port number allocated to WL_PORT at installation time. To avoid specifying the port number in the
URL, a value of 80 may be used. This value can only be specified once per Web Application Server
machine.
Clustering or Managed?
One of the decisions that must be made, when dealing with multiple web application servers, is whether the
servers will be clustered or managed. The attributes of each style are outlined below:
» Clustered – A cluster is a group of servers running a Web application server simultaneously, appearing to the
users as if it were a single server (usually managed by a separate administration server). The advantages of
using a cluster are that you can manage the servers as a group and also the servers communicate to each other
to monitor availability. Clusters can load balance within themselves as they are in constant communication with
each other. The disadvantages are that there is an overhead in communication (usually each server uses
multicast to communicate to the other servers in a cluster) and each server must use a different IP address and
port number. This means clusters can only operate on one machine per server. The figure below summarizes a
cluster:
» Managed – A managed set of Web Application servers that are independent of each other. They can be housed
on a single machine or multiple machines and can be housed on machines of differing size. The advantage of
managed servers is that each server can be targeted for specific user groups and can be managed
independently. There is no additional communication between the servers. A separate administration server can
manage the servers but that role can be taken by one of the managed servers if desired. The disadvantages are
that the load balancing software/hardware housed between the users and the managed servers performs the load
balancing and that deployment must be performed individually. The figure below summarizes managed servers:
There are no clear winners between clustering and managed Web Application Servers as the main factors in the
decision are:
» Amount of hardware – Clustering requires a hardware server per cluster member. Sites that deploy only a small
number of servers cannot use clustering.
» Maintenance Effort – Clustering can reduce maintenance overhead if there are a large number of servers
involved. Managed servers require individual maintenance.
» Tolerance for multi-casting – Some sites ban multi-casting as it can be perceived as an unacceptable overhead
on the network. Deploying a private network between the servers can minimize this, though this is more
expensive.
» Flexibility – Many sites use managed due to its flexibility in routing particular traffic to particular servers. For
example, setting up specific servers for non-call center traffic (e.g. XAI, interfaces, depots).
Whether your site uses clustering or managed servers does not factor into high availability solutions as customers
have deployed high availability solutions using either technique.
Note:
To support clustering with embedded environmental settings the following guidelines are recommended:
» Host Name settings – In a clustered environment the hostname used for any configuration setting should be the
cluster host or the load balancing proxy used for the cluster. To access a cluster, the users (or servers) need to
access a single URL; the host component of that URL should be used for any host name configuration settings.
» Custom Context – In Oracle Utilities Application Framework V4 and above, it is possible to support a custom
URL context for use with the product at installation time. In a clustered environment, the context should be
common and therefore the setting of this value should be the same across all nodes of a cluster.
» Port Numbers – As part of the URL used for the product, a port number can be explicitly used. In most sites, Port
80 is used for production as it does not need to be specified on the URL by users. In a clustered environment this
port should be common and therefore the setting of this value should be the same across all nodes of a cluster.
Most J2EE Application Server vendors insist that all nodes of a cluster have the same port number (but different
hostnames).
» File Locations - The product requires some knowledge of where environmental specific information is stored.
This information is then configured to inform the product where specific configuration files and important
directories are located. Installing the software in a common location or on the same location on each node can
help allow the file locations to support clustering.
The following table outlines all the port numbers required by product at installation time:
BATCH_RMI_PORT I Default JMX Port for managing and monitoring Batch threadpool
BSN_JMX_RMI_PORT_PERFORMANCE I Default JMX port used for Business App Server Monitoring
BSN_RMIPORT I JVM Child process starting Port Number (COBOL products only)
COHERENCE_CLUSTER_PORT P Port used for Coherence Cluster (Multi-cast only). May be overridden in
configuration for Unicast.
WEB_JMX_RMI_PORT_PERFORMANCE I Default JMX port used for Web App Server Monitoring
WEB_WLPORT P Oracle WebLogic Web Server Port for online channel (HTTP)
WEB_WLSSLPORT P Oracle WebLogic Web Server secure Port for online channel (HTTPS)
Legend: P – Port allocated prior to installation of product, I – Port allocated during installation of product.
Prior to installation of product, the database and Web Application Server need to be installed and the ports allocated
to these components recorded and provided for the installation of the product (they are indicated with a P in the
table). Each vendor will have the port definitions stored in different places. Refer to the vendor documentation for
more information.
When allocating ports (indicated with an I in the table) during the installation the following advice may be useful:
» Pick the same port numbering scheme per environment to save time allocating ports. Some sites find using the
same last digits for the type of port is helpful; for example, having 4 as the last digit allocated for BSN_RMIPORT
(6504, 7914, 9724, 22034, etc.).
» BSN_RMIPORT denotes a starting port. The number indicates the start of the port range, and JVMCOUNT
determines the number of ports allocated. Ensure that there are free ports in the range starting from that port
number.
Note:
BSN_RMIPORT and JVMCOUNT only apply to products using COBOL support. These values are not
supported in Oracle Utilities Application Framework V4.3.x and above.
» Document the ports used in your documentation or services file for future reference.
» Do not allocate ports that are already in use, as there will be port conflicts and the applications may refuse to work.
Setting Contents
JMX Enablement System Userid Userid used for logging onto JMX Mbeans
JMX Enablement System Password Password to be used for JMX Enablement System Userid
RMI Port for JMX Web Port number to allocate to the JMX for the Web Application Server
spl.runtime.management.rmi.port=..
spl.runtime.management.connector.url.default=service:jmx:rmi:///jndi/rmi://hostname:../oracle/ouaf/webAppConnector
jmx.remote.x.password.file=scripts/ouaf.jmx.password.file
jmx.remote.x.access.file=scripts/ouaf.jmx.access.file
ouaf.jmx.com.splwg.base.web.mbeans.FlushBean=enabled
The following settings are important to the JMX monitor:
» The spl.runtime.management.connector.url.default setting is the JMX URL to be used in the JMX
console or JMX browser.
» The jmx.remote.x.password.file and jmx.remote.x.access.file are the default security setup for
the JMX. These are for basic security setup. For more information about the files and alternative security setups
refer to Monitoring and Management Using JMX Technology.
» The ouaf.jmx.* settings enable individual beans at startup time. These may be enabled at runtime.
Once the Web Application Server component is started, the JMX Mbeans defined in this configuration are started
and a JSR160 compliant JMX console or JMX browser can be used to connect to them, using the remote URL and
credentials configured above.
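For example, a JSR160 compliant console such as jconsole can be pointed at the service URL directly. The hostname and port below are illustrative; substitute the values from your spl.runtime.management settings, and supply the credentials defined in the JMX password file when prompted:

```
jconsole service:jmx:rmi:///jndi/rmi://myhost:1100/oracle/ouaf/webAppConnector
```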
Within the JMX console or JMX browser there are a number of specific facilities that are available:
» It is possible to manage the data within the Web Application Server cache from JMX. In past releases of Oracle
Utilities Application Framework this was possible using utility URLs, which required the IT group to log on to the
product to issue commands. This is still possible, but can be replaced with JMX console commands. This is
controlled by the FlushBean Mbean.
» It is possible to get environmental information about the Web Application Server Java Virtual Machine (JVM) for
support purposes. Again, this was previously possible using utility URLs and can now be replaced with JMX
console commands. This is controlled by the JVMInfo Mbean.
» It is possible to get internal JVM information about the Web Application Server using the JVMSystem Mbean.
This is an extension of the base Java MXBeans (package java.lang.management). By default these are disabled;
they can be enabled by executing the enableJVMSystemBeans operation from the BaseMasterBean. When
enabled, the following additional areas can be monitored via JMX for the Web Application Server:
» Class Loading statistics
» Memory statistics
» Operating System statistics (statistics vary by platform).
» JVM Runtime information (additional to JVMInfo)
» Thread statistics – Statistics on individual java threads.
Note:
No confirmation (i.e. Are You Sure?) dialog is provided with most JMX consoles or JMX browser so care
should be taken when issuing commands.
<internal-apps-deploy-on-demand-enabled>false</internal-apps-deploy-on-demand-enabled>
The issue becomes then if the infrastructure provides such an interface for the product to hook into. There are a
number of patterns in this area:
» Customers implement an identity management solution to manage the passwords, expiry and rules. In this case
the implementation needs to interface to the identity management solution by calling the appropriate facilities in
the identity management solution around passwords. Of course, the J2EE Web Application Server used is then
interfaced into the identity management solution or the related security store to provide the authentication
mechanism.
» Customers link the security store for authentication directly to the security configuration of the J2EE Web
Application Server. In this case, the J2EE Web Application Server provides the interface to the password change
facility.
In the latter case, if you are a customer using Oracle WebLogic, there is an example JSP available under Password
Change Sample that allows an application to change passwords, irrespective of the security repository used. This
example can be altered to suit your site's standards and linked to the product as a custom JSP via a navigation key
to link to the appropriate menu.
…
Could not find the main class: weblogic.security.Encrypt. Program will
exit.
To fix this issue, set WEB_SERVER_HOME using the configureEnv[.sh] –i utility (or set WL_HOME) to access
the appropriate security encryption classes.
Corrupted SPLApp.war
By default, the product installer uses archive mode for the product deployment (this is true for Oracle WebLogic and
IBM WebSphere – though in Oracle WebLogic expanded mode is also supported). When using archive mode the
product utilities build the product into a set of J2EE WAR and EAR files prior to deployment.
The WAR and EAR build is performed by the initialSetup[.sh] utility. Refer to the Server Administration
Guides for the product for a detailed description of the options and operations supported by this utility.
If, for any reason, the WAR or EAR files are not built completely, and are therefore corrupted, then the product
start may abort. This can manifest in a number of error messages depending on the nature of the corruption:
The table below outlines the default set of J2EE Web Application Server log files:
» Server Log: Server messages. A log exists per server defined in the domain. Detailed error messages and
product information are contained in this log.
» HTTP Access Log (optional): HTTP resource log (aka Apache Log).
» Web Application Log (spl_web): Captures product web application messages.
» Initial Setup (initialSetup): Captures configuration generation utility messages.
» Configuration (configureEnv): Captures configuration utility messages.
Refer to the Oracle WebLogic documentation and Server Administration Guide for details of the logs, location and
format.
Component  Setting
Batch      BATCH_MEMORY_ADDITIONAL_OPT
You specify the values as you would on the java command line, as outlined by your Java vendor. For example, for
Oracle WebLogic/Oracle Java customers:
» It is possible to enable java debugging (using jdb) of your java code using the –Xrunjdwp option.
» It is possible to enable verbose class loading into the log files using the –verbose java option.
Note:
The combination of java options that can be used must be valid for the JVM version and vendor used.
Note:
For customers on previous versions of the Oracle Utilities Application Framework, these settings must be
manually set in the scripts used to initiate the JVM. Please note, changes to any base scripts may be
overridden when initialSetup[.sh] is executed.
In Oracle WebLogic 12c there is a feature where, if the script setUserOverrides.sh exists in
$DOMAIN_HOME/bin, this script will be called by the node manager or the Administration server at startup
time. This script is user defined and is useful in the following situations:
» The SPLEBASE setting can be set in this script to implement a native mode installation. This setting is used by the
Oracle Utilities Application Framework to allow configuration files to be used from the file system rather than
from within the EAR/WAR file at runtime. For example:
SPLEBASE="/u01/utilities/product"
export SPLEBASE
» Custom java memory and additional parameters can be added to the server startup using the
USER_MEM_ARGS environment setting18.
Note:
If there are multiple servers on this domain ensure the script takes this into account using the SERVERNAME
variable.
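Combining the two uses above, a minimal setUserOverrides.sh might look like the following sketch; the installation path, server name and memory values are illustrative assumptions, not product defaults:

```shell
# Sketch of $DOMAIN_HOME/bin/setUserOverrides.sh for a native mode domain.

# Point the framework at the product installation.
SPLEBASE="/u01/utilities/product"
export SPLEBASE

# Apply custom memory arguments only to the product server, not to the
# Administration server, using the SERVERNAME variable.
if [ "${SERVERNAME}" = "ouaf_server1" ] ; then
  USER_MEM_ARGS="-Xms2048m -Xmx4096m"
  export USER_MEM_ARGS
fi
```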
Advantages:
» Simple and easy to implement configuration, ideal for development and other non-production environments.
» One Oracle WebLogic installation can be shared across many environments on the same host.
» Common configuration change scenarios are handled by configuration settings.
Disadvantages:
» Changes to domain configuration within the Oracle WebLogic console must be reflected in configuration files
using user exits or custom templates to retain changes across patches/upgrades.
» Does not support clustering without complex manual changes to configuration files.
» OEM does not recognize Oracle WebLogic targets without manual configuration of discovery.
18 If the server start parameters on the Oracle WebLogic console are going to be used, avoid setting this value.
This setup is ideal for development and other non-production environments where you need multiple copies of the
product on a single host but may not be appropriate for production environments where advanced security setup
and clustering are typically required.
The alternative is to install the product in what is termed native mode. Typically, Oracle WebLogic J2EE Web
Applications are deployed directly to Oracle WebLogic and managed that way. This has the advantage of gaining full
access to Oracle WebLogic facilities, like advanced configuration, and more importantly the ability to cluster the
product across multiple nodes. Oracle Utilities Application Framework V4.x and above can be installed using this
mode with minor changes to the installation process. It is also possible to convert an embedded installation into a
native installation with minor changes, if migration to this mode is appropriate.
The native mode allows the product to have access to support using the features of Oracle WebLogic with fewer
configuration steps than embedded mode. The advantages and disadvantages of this mode are outlined in the table
below:
Advantages:
» Native support for clustering/managed servers.
» Changes to the domain do not require manual changes to templates.
Disadvantages:
» Support for multiple environments per domain is limited at present; multiple WebLogic installations may be
required if multiple environments are on the same host.
» Requires some manual effort in setting up the domain, servers and security for the environment.
The figure below illustrates the differences between the two modes:
[Figure: Embedded vs Native mode. In native mode the product J2EE files are deployed directly to Oracle
WebLogic, providing console based administration, standard utilities, manual or automatic deployment, cluster or
managed server support, advanced security, advanced configuration, separate administration and automatic OEM
registration.]
The two modes have different attributes and approaches applicable to different situations. The following
recommendations should be considered when deciding which mode to use:
» It is not impractical to use different modes for different environments. One mode will not usually satisfy all the
needs of all environments.
» It is recommended to use native mode for production implementations as it offers flexibility, cluster support,
separation of the Administration function and the ability to use the advanced configuration elements of Oracle
WebLogic as well as Oracle Enterprise Manager (if applicable).
» It is recommended to use native mode if each environment is housed in a separate virtual machine, which is
common in virtualized implementations. This will allow configuration at the virtual machine level to be used and
reduces maintenance efforts.
» It is recommended to use embedded mode if more than one copy of the product exists on the same virtual or
non-virtual host. The ability to share a common copy of Oracle WebLogic reduces the maintenance effort for
multiple environments.
» It is recommended to use embedded mode for development environments where java based development is
taking place. This setup supports the use of the expanded mode features of Oracle WebLogic used by the Oracle
Utilities SDK, which requires access to expanded directories for multi-user development.
For more information about native mode installation refer to the Native Installation for Oracle Utilities Application
Framework (Doc Id: 1544969.1) and Implementing Oracle ExaLogic and/or Oracle WebLogic Clustering (Doc Id:
1334558.1) available from My Oracle Support.
CLIENT-CERT Support
Note:
In Oracle Utilities Application Framework V4.2.0.2.0 and above, CLIENT-CERT is supported from the
configureEnv[.sh] utility directly.
One of the additional configuration options for the authentication of the product is to implement a Single Sign On
solution or implement client certificates. Whilst most of the configuration for these features is performed in the Single
Sign On solution and the J2EE Web Application Server, in most cases the login configuration for the product has to
be changed from FORM or BASIC to CLIENT-CERT. This informs the product that the credentials will be passed
directly from the J2EE Application Server (via the Single Sign On solution, security providers or via client
certificates).
» Logon to the machine that houses the environment to change as the product administrator.
» Take a copy of web.xml.template to cm.web.xml.template in the templates subdirectory where the
original is located. This will inform the Oracle Utilities Application Framework to use this new template instead of
the base template.
» Edit the cm.web.xml.template file and replace the login-config section with a section containing the
CLIENT-CERT configuration. For example:
Replace:
<login-config>
<auth-method>@WEB_WLAUTHMETHOD@</auth-method>
<form-login-config>
<form-login-page>@WEB_FORM_LOGIN_PAGE@</form-login-page>
<form-error-page>@WEB_FORM_LOGIN_ERROR_PAGE@</form-error-page>
</form-login-config>
</login-config>
With:
<login-config>
<auth-method>CLIENT-CERT</auth-method>
</login-config>
Note:
For Oracle Utilities Application Framework V4.x customers, this may need to be repeated for the templates
for AppViewer (web.xml.appViewer.template) and online help (web.xml.help.template) if you
wish to include those components in the same solution.
Note:
As the web.xml file has been changed and the EAR file rebuilt, customers using native mode will have to
redeploy the SPLWeb application to reflect the change.
» Optionally, changes can be verified by viewing the web.xml files generated under the etc\conf subdirectory of
the product installation.
» Restart the product.
It is possible to use Work Managers to constrain the traffic for online transactions and Web Services traffic for Oracle
Utilities Application Framework based products. Once a Work Manager is deployed, Oracle WebLogic tracks traffic
until the configured resource limit is reached. When the resource limit is reached, the server to which the limit is
attached refuses additional traffic, to assure existing traffic has enough resources to complete, until usage falls
below the limit once more. To use Work Managers effectively, it is recommended that clustering or multiple managed
servers be used to maximize availability.
» If you are using Oracle WebLogic in native mode, perform the following:
» Define a Capacity Constraint19 for use by the server. You can optionally deploy the constraint directly to the
product server definition or via a custom Work Manager for tracking.
» Define a Work Manager and associate the aforementioned Capacity Constraint with the Work Manager.
Deploy the Work Manager to the product server.
» If you are using Oracle WebLogic in embedded mode, the following steps should be implemented:
» If the product is using Oracle Utilities Application Framework V4.x and above, create a user exit file in the
templates directory with the name cm.config.xml.exit_2.include (or
cm.config.xml.win.exit_2.include for Windows) with the following contents and execute
initialSetup to include the changes in the configuration:
<self-tuning>
<capacity>
<name>RequestLimit</name>
<target>myserver</target>
<count>150</count>
</capacity>
<work-manager>
<name>MyWorkManager</name>
<target>myserver</target>
<capacity>RequestLimit</capacity>
<ignore-stuck-threads>false</ignore-stuck-threads>
</work-manager>
</self-tuning>
19 At the present time, the Capacity Constraint Work Manager Definition Type is the only supported constraint or request class.
Once implemented, Work Managers can be monitored from the Oracle WebLogic console.
By default, the Oracle Utilities applications are installed in embedded mode for Oracle WebLogic. Basically, the
product reuses an existing Oracle WebLogic installation and points the WebLogic runtime at the Oracle Utilities
application runtime to run the product. It is called embedded as the Oracle WebLogic installation is not used to
house the product; instead, the product uses files embedded within the product to run Oracle WebLogic. For
instance, the security setup, boot.properties, config.xml and the command utilities to start/stop Oracle
WebLogic are generated by, and embedded within, the product.
Whilst the embedded installation is ideal for most environments, as it is simple, it has a number of disadvantages:
» Advanced facilities such as clustering and high availability cannot be easily implemented in embedded mode.
» Most of the configuration is defaulted such as the domain name and server names.
» The administration server is automatically included in each environment.
» You need to use text file based user exits to augment the embedded configuration for advanced configurations.
This requires manual efforts to maintain XML files in some cases.
To offer an alternative to the embedded installation, the ability to use a native installation method which houses
the product inside Oracle WebLogic was introduced. This allows the site to take full advantage of Oracle
WebLogic features and also manage the configuration from the Oracle WebLogic console or Oracle Enterprise
Manager. For details of the features of the native installation refer to the Native Installation for Oracle Utilities
Application Framework (Doc Id: 1544969.1) whitepaper available from My Oracle Support.
One of the interesting capabilities available when using native mode is that it is possible to run multiple products
or environments within the same domain. Basically, this means you can reduce the number of administration
consoles needed to manage your environments.
» Ensure the deployment name is unique for every single deployment (even across products/environments).
For example, if you ran a TEST environment and a UAT environment on the same domain, you might set up
SPLServiceTEST and SPLWebTEST for the TEST deployments and SPLServiceUAT and SPLWebUAT for the
UAT environment. These are just examples.
» Ensure the paths in the Server Setup for the individual servers point to the classes in the relevant
environment installations. Ensure the SPLEBASE is set correctly in the server setup.
» Ensure the port numbers allocated to the Servers match the port numbers you specified in the product
installation for each server.
» The most important part of this is that you must alter the setDomain utility within the domain to set the
SPLEBASE variable appropriately for each SERVERNAME. If you forget this, the product may not start up
correctly. In my example:
if [ "${SERVERNAME}" = "ouaf22server" ]
then
   ...
   SPLEBASE=/oracle/FW22
   export SPLEBASE
fi
» Deploy the deployments to the relevant server using the Oracle WebLogic console or WLST. To save time, deploy
the SPLServiceXXX deployment first and then the SPLWebXXX deployment as per the Native Installation Oracle
Utilities Application Framework (Doc Id: 1544969.1) whitepaper available from My Oracle Support.
» Start/stop the servers using the Administration console. Do not use spl[.sh] as you are operating in native
mode. All base Oracle WebLogic utilities, such as WLST, can also be used.
To ensure optimal use of the domain a few considerations should be taken into account:
» All servers on this domain share the same authentication security setup.
» By default, all the J2EE resources are controlled by a common role/credential (typically cisusers). If you want to
separate the servers using different roles/credentials then you need to change the cisusers setting using the
configureEnv[.sh] -a settings for the Web Security Role/Web Principal Name/Application Viewer Security
Role/Application Viewer Principal Name to an appropriate setting for each product/environment.
» When using native mode, any changes to the EAR files need redeployment (it is an update deployment, which is
far quicker). You can use the autodeploy features of Oracle WebLogic to minimize this effort21. If you ever run
initialSetup[.sh], an update redeployment is required.
21 Additional CPU usage is encountered when autodeployment is used as Oracle WebLogic regularly checks for updates.
Note:
Customers using Oracle Enterprise Manager to manage the products or Oracle WebLogic will not
necessarily need to use this facility, as Oracle Enterprise Manager already performs this function.
Cache Management
One of the features of the Oracle Utilities Application Framework is the implementation of a level 2 cache within the
architecture to provide performance benefits for commonly used configuration information. Generally the cache is
managed by the Oracle Utilities Application Framework automatically with little or no interaction from operators. By
default, the cache is reloaded as needed or every eight (8) hours, whichever occurs first. Some elements of the
cache, such as security information, are refreshed on a more frequent basis (every 30 minutes).
There are a number of cache management utilities to manually cause all or parts of the cache to refresh.
These utilities are documented in the Server Administration Guide for your product.
While these utilities are rarely used in production, they can be used by appropriately authorized personnel to make
sure the cache contains the correct information. Typically a manual refresh is required if the configuration data is
changed and needs to be reflected as soon as possible.
Setting Contents
JMX Enablement System Userid: Userid used for logging onto JMX MBeans
JMX Enablement System Password: Password to be used for the JMX Enablement System Userid
RMI Port for JMX Business: Port number to allocate to JMX for the Business Application Server
Note:
No confirmation (i.e. Are You Sure?) dialog is provided with most JMX consoles or JMX browser so care
should be taken when issuing commands.
The advent of Oracle Utilities Application Framework V2.x and the removal of Oracle Tuxedo from the
architecture meant that this information was not available for collection as easily as before. In Oracle Utilities
» On the hour boundary the completeExecutionDump operation must be executed by your JMX console or JMX
browser to extract and save the CSV information to a file. The file should have the date and time of the collection
for reference reasons.
» After collection of the statistics has been completed, the reset operation should be executed from your JMX
console or JMX browser.
The information in the files can be collated according to the analysis required by your site to summarize the
information. The CSV can be loaded into a database for analysis, or into your site's preferred spreadsheet or
analysis tool. Remember that the date and time of the collection is not recorded in the data, only the data itself.
Note:
While this process can be done manually using a JMX console such as jconsole, it is recommended
that the JMX console or JMX browser automate the collection of the process in the background. Refer to
the documentation of the JMX console and JMX browser to configure your console or browser to achieve
this.
The size of the pool can vary from component to component with the following guidelines:
» The minimum pool size of the product should be set to the average number of connections needed for the mode
of access. By default it is set to one (1) which is sufficient for non-production, but for each new connection
required for the traffic the database connection needs to be established prior to use. The establishment of an
individual database connection can cause delays to the transaction using the connection as it waits for the
connection to be established. This negates the benefit of pooling connections. Track the number of connections
used at normal traffic load and specify that as the minimum. This will establish the connections at startup time and
avoid the overhead of creating connections on the fly. Ideally you want to avoid creating connections on the fly
unnecessarily.
» The maximum pool value should be set to cover any peak load you may experience. Initially the value can be
artificially inflated; after monitoring the number of connections open at peak times, the value can be optimized.
» The total number of database connections from all pools connecting to an individual database should not exceed
the number of configured users/connections for that database. Exceeding the number of configured users can
cause database connection failures and delays in transactions.
Typically customers have indicated that a good rule of thumb to use is that at any time one third of the defined users
are active for normal traffic and two thirds are active at peak.
Note:
This is a rule of thumb and may NOT apply to the traffic patterns at your site. It is recommended to start
with an agreed value and then monitor to optimize the values as necessary.
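As a worked example of the rule of thumb above (the number of defined users is an illustrative assumption):

```shell
# One third of defined users active at normal load, two thirds at peak.
DEFINED_USERS=300
MIN_POOL=$(( DEFINED_USERS / 3 ))        # candidate minimum pool size
MAX_POOL=$(( DEFINED_USERS * 2 / 3 ))    # candidate maximum pool size
echo "min=${MIN_POOL} max=${MAX_POOL}"   # prints "min=100 max=200"
```

The computed values are starting points only; monitor actual connection counts at your site and adjust.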
Refer to the Server Administration Guide for your product for additional advice on this facility.
With the popularity of the Configuration Tools facility within the product for customer extensions, the increased load
of XPath processing may cause memory issues under particular user transaction conditions (in particular, high
volume patterns). As with most technology in the Oracle Utilities Application Framework, the XPath statements used
in the Configuration Tools are cached for improved performance. Increased load on the cache may cause memory
issues at higher volumes.
To minimize this, the Oracle Utilities Application Framework has introduced two new settings in the
spl.properties file for the Business Application Server, where the dimensions of the XPath statement cache are
defined. These settings allow the site to control the XPath cache to support caching of commonly used XPath
statements while allowing for optimal specification of the cache size (to help prevent memory issues).
com.oracle.XPath.LRUSize - Maximum number of XPath queries to hold in cache across all threads. A zero (0)
value indicates no caching, a minus one (-1) value indicates unlimited, and other positive values indicate the
number of queries stored in cache. The cache is managed on a Least Reused basis22. For memory
requirements, assume approximately 7k per query. The default in the template is 2000 queries.
com.oracle.XPath.flushTimeout - The time, in seconds, when the cache is automatically cleared. A zero (0)
value indicates never auto-flush the cache and a positive value indicates the number of seconds. The default in
the template is 86400 seconds (24 hours).
Note:
The templates provided with the product have these settings commented out. To use the settings
uncomment the entries in the generated configuration files.
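For reference, once uncommented the generated spl.properties entries take the following form; the values shown are the template defaults described above:

```properties
com.oracle.XPath.LRUSize=2000
com.oracle.XPath.flushTimeout=86400
```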
In most cases the defaults are sufficient, but they can be altered using the following guidelines:
» If there are memory issues (e.g. out of memory) then decreasing the LRUSize or decreasing the flushTimeout
may result in a reduction in memory issues. LRUSize has a greater impact on memory than flushTimeout.
» If decreasing the value of LRUSize causes performance issues, consider changing only the flushTimeout
initially and ascertain whether that works for your site.
There are no strict guidelines on the value for both parameters as cache performance is subject to the user traffic
profile and the amount and types of XPath queries executed. Experimentation will assist in determining the right mix
of both settings for your site.
If the Performance Statistics JMX call is not valid for your site, it is also possible to configure log4j to display service
performance information in the spl_service.log. This is possible by adding the following line to the
$SPLEBASE/splapp/businessapp/properties/log4j.properties or
$SPLEBASE/etc/conf/service/log4j.properties file:
log4j.logger.com.splwg.base.api.service.ServiceDispatcher=debug
Once this is set, the debug messages will be written to $SPLEBASE/logs/system/spl_service.log with the
api.service.ServiceDispatcher message type, with a Start and an End message. Both the Start and End
messages outline the service name called and the execution mode, and the End message includes the timing for
the service in ms. For example:
22 In layman's terms, older cached entries that are not reused are removed from the cache automatically to make room for more frequently used entries
or new entries.
..
Note:
For background on the JDBC Datasource Support within Oracle WebLogic, refer to WebLogic Server
Data Sources.
Note:
For tuning advice for JDBC Datasources refer to Tuning Data Source Connection Pools.
By default, the online (and XAI) components of the Oracle Utilities Application Framework use Universal Connection
Pool for connection pooling. If you wish to use Oracle WebLogic JDBC based connection pooling for integration to
other products or to use the GridLink features of Oracle WebLogic, then Oracle Utilities Application Framework
needs to be configured to use Oracle WebLogic Data Sources.
» Advanced Monitoring – The pool management capabilities of Oracle WebLogic include a set of statistics that
are calculated for the pool and can be tracked to ascertain the health of the pool. These statistics can be
obtained using JMX, via the Oracle WebLogic console or via Oracle Enterprise Manager. The figure below
illustrates the capability to display the statistics in the Oracle WebLogic console:
» GridLink Support – Using Oracle WebLogic Data Sources allows the use of GridLink based data sources,
which provide easier configuration for failover and RAC. Refer to the Oracle WebLogic GridLink documentation
for a discussion of this feature set.
» Diagnostic analysis – The Oracle WebLogic Data Sources have advanced diagnostic capabilities to monitor and
detect database connectivity and for resource profiling. This information is available from the Oracle WebLogic
console, JMX and Oracle Enterprise Manager. Refer to the Monitoring documentation for Oracle WebLogic for a
discussion of the facilities.
The Oracle Utilities Application Framework can be configured to use Data Sources using the following process:
» Define the JDBC datasource to the product database, using the Oracle WebLogic console, with the following
attributes:
» The JNDI Name for the datasource should contain a directory name such as jdbc. For example,
jdbc/demodb. The name of the JNDI should reflect your specific requirements.
» Ensure that Global Transaction Support is disabled on the JDBC connection. This is not appropriate for this
integration.
» The XA JDBC driver may be used but the product does not take advantage of the XA feature set.
» The Instance or Service driver may be used but if there is no site preference use the Service driver.
» Unless otherwise stated, the Statement Cache Algorithm should be set to the default setting of LRU.
» The Database User and Password to use should be any database user with read/write access to the product
such as CISUSER or SPLUSER. The value can correspond to the value of the DBUSER configuration variable
in the ENVIRON.INI, if no site preference exists.
Note:
For Oracle Utilities Application Framework 4.3 and above, the manual process described below is not
required. The installation asks for the JDBC_NAME as part of the installation, which is the fully qualified
JNDI name for the data source.
<jdbc-system-resource>
  <name>demodb</name>
  <target>myserver</target>
  <descriptor-file-name>jdbc/demodb-jdbc.xml</descriptor-file-name>
</jdbc-system-resource>
<internal-apps-deploy-on-demand-enabled>false</internal-apps-deploy-on-demand-enabled>
Figure 24 – JDBC fragment for config.xml
Note:
The values for the name, target and descriptor-file-name tags should be altered to suit your JDBC
connection. The internal-apps-deploy-on-demand-enabled is not required but is included for
reference purposes.
» For products using Oracle Utilities Application Framework V4.x, the above XML code should be placed
in the CM_config.xml.exit_3.include (or CM_config.xml.win.exit_3.include for Windows) in the
templates directory.
» To use the new JDBC datasource for online the following must be performed:
» Copy the hibernate.properties.web.template to cm.hibernate.properties.web.template
within the relevant directory. For Oracle Utilities Application Framework V2.x the directory is the etc
subdirectory, and for Oracle Utilities Application Framework V4.x the directory is the templates subdirectory.
This overrides the base template with the custom template.
Add the following lines to the top of the cm template file:
hibernate.connection.datasource = <jdbc_jndi>
hibernate.connection.username = <jndi_user>
hibernate.connection.password = <jndi_user_password>
where:
<jndi_user_password> - The password for the user to access the JNDI (substitution
variable @WEB_WLSYSPASS@ can be used in the template).
23 A technique is to take a copy of the config.xml that is generated from the console changes to see where the changes need to be made in the template.
24 This is NOT the database user and password; it is the userid and password used to obtain the connection information from the JNDI. This user does
not have to be a user of the product, just have access to the server definitions of the product.
» All hibernate.connection.url entries, including any ifdef statements
» All hibernate.ucp.* entries
Note:
For later versions of Hibernate, use the
org.hibernate.service.jdbc.connections.internal.DatasourceConnectionProviderImpl
class for hibernate.connection.provider_class.
» Save the file. For Oracle Utilities Application Framework V4.2 and above the sample cm template is shown below:
hibernate.connection.driver_class = @DBDRIVER@
hibernate.connection.datasource = jdbc/ouafdb
hibernate.connection.username = @WEB_WLSYSUSER@
hibernate.connection.password = @WEB_WLSYSPASS@
hibernate.dialect = @DIALECT@
hibernate.show_sql = false
hibernate.max_fetch_depth = 2
hibernate.transaction.factory_class = org.hibernate.transaction.JDBCTransactionFactory
hibernate.jdbc.fetch_size = 100
hibernate.jdbc.batch_size = 30
hibernate.query.factory_class=org.hibernate.hql.internal.classic.ClassicQueryTranslatorFactory
hibernate.cache.use_second_level_cache = false
hibernate.connection.provider_class=org.hibernate.connection.DatasourceConnectionProvider
hibernate.connection.release_mode=on_close
#ouaf_user_exit hibernate.properties.exit.include
Note:
It is possible to execute the initialSetup[.sh] –t command to apply the changes. This avoids an
EAR rebuild.
When any table in the system grows (or shrinks) at a larger than normal rate, the access paths to that table may
change, causing inefficiencies. For the database to make the correct decision, it uses a set of statistics to assess all
available paths. This is an important factor in performance. It is therefore recommended that database statistics be
recalculated, using dbms_stats, on a regular basis to maintain up to date statistics.
The frequency will depend on the volume and size of your database. It is recommended that statistics on most
tables be calculated once a week at minimum, unless their growth factors do not affect the path chosen by the
DBMS.
Note:
CISADM is used as an example in the guidelines below. If your site uses another schema owner, then
substitute that owner in the examples below.
» It is possible to check whether the Last Analyzed Date on product tables is current (or not) by running the
following SQL:
SELECT table_name, last_analyzed FROM dba_tables WHERE owner = 'CISADM';
» It is possible to check whether the Last Analyzed Date on indexes is current (or not) by running the
following SQL:
SELECT index_name, last_analyzed FROM dba_indexes WHERE owner = 'CISADM';
» If the indexes are older by a week or more, then consider gathering statistics on them. You can also use the
SQL below, which shows the approximate number of INSERTs, UPDATEs and DELETEs for that table, as well as
whether the table has been truncated, since the last time statistics were gathered.
SELECT * FROM USER_TAB_MODIFICATIONS;
Note:
The MONITORING attribute must be set on individual objects to use this facility.
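If the checks above show stale statistics, they can be regathered with the Oracle-supplied DBMS_STATS package. A minimal sketch, using CISADM as the example schema owner as in the note above:

```sql
-- Gather statistics for all tables (and, with cascade, indexes) in the
-- CISADM schema. Substitute your site's schema owner as appropriate.
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'CISADM', cascade => TRUE);
END;
/
```

Individual objects can be targeted with DBMS_STATS.GATHER_TABLE_STATS if only a few tables have changed significantly.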
One of the practices that is key to the performance of a database is the elimination of hot spots in the disk
architecture by ensuring that I/O is spread across all available devices. This is known as the Database Topology.
For example, placing the database physical files on a single disk is not optimal, as multiple concurrent requests
queue to use the disk, resulting in higher than expected disk wait times. By spreading the load across disks, the
opportunity for wait times is minimized and throughput increases. It is therefore recommended that the disk
architecture be designed for the physical database files so that as much I/O as possible is spread across all disks.
A discussion on the database topology and its implications is outlined in Performance Troubleshooting Guideline
Series (Doc Id: 560382.1) whitepapers available from My Oracle Support.
Note:
Additionally, for UTF8 customers, ensure that the spl.runtime.cobol.encoding parameter in the
spl.properties file is set correctly to display the correct character set.
Note:
To ensure sorting and processing are correct for the desired character set ensure the appropriate NLS
initialization settings are set to the correct values. For more information refer to the Globalization Support
Guide component of the Oracle Database documentation.
Application Service: The Application Service Name is displayed. This can be translated using the CI_MD_SVC_L table, where
SVC_NAME is the service name in MODULE and DESCR is the description of the service.
Business Object: The Business Object Code is displayed. This can be translated using the F1_BUS_OBJ_L table, where
BUS_OBJ_CD is the Business Object name in MODULE and DESCR is the description of the Business Object.
Business Service: The Business Service Code is displayed. This can be translated using the F1_BUS_SVC_L table, where
BUS_SVC_CD is the Business Service code in MODULE and DESCR is the description of the Business Service.
Service Script: The Script Code is displayed. This can be translated using the CI_SCR_L table, where SCR_CD is the script
code in MODULE and DESCR is the description of the script.
Note:
If more than one language pack is installed on the product then LANGUAGE_CD must be populated to
return the description in the desired language.
Note:
To use the MODULE feature the hibernate.connection.release_mode must be set to on_close in
the hibernate.properties file. This is the default for Oracle Utilities Application Framework V4.2
and above, earlier releases require manual changes to configuration files.
» ACTION – In Oracle Utilities Application Framework V4.2 and above, the transaction type that is
requested for the MODULE is now populated in the ACTION field of v$session. If the connection is idle, the
column is blank. The table below lists the valid action values populated:
DEFAULT_ITEM Service is resetting its values to defaults. For example, by pressing the Clear button on the product UI
toolbar
EXECUTE_LIST Service is a list based service and is executing (List Services only)
EXECUTE_SEARCH Service is a search based service and is executing (Search Services only)
EXECUTE_SS Service is a service script (including BPA scripts) and is executing (BPA and Service Scripts only)
READ_SYSTEM Service is a common Oracle Utilities Application Framework based service that is executing
» CLIENT_INFO - In Oracle Utilities Application Framework V4.2 and above, the contents of the Database
Tag characteristic type (up to 64 characters26) on the individual user record is now populated in the CLIENT_INFO
field of v$session. If the connection is idle, the column is blank. If the database tag is not used, this value is
blank. For example:
26 Refer to the v$session view parameters for the length for the version of Oracle Database used.
27 Example does not include Database Tag, which is not set by default.
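As an illustrative sketch (the columns shown are standard v$session columns), the product connections and their populated tracing values could be inspected with a query along these lines:

```sql
-- Illustrative query: list product connections with the MODULE, ACTION
-- and CLIENT_INFO values populated by the framework. Idle connections
-- show blank ACTION and CLIENT_INFO values.
SELECT sid, username, module, action, client_info
  FROM v$session
 WHERE module IS NOT NULL
 ORDER BY sid;
```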
Setting the parameter to true enables bitmap plans to be generated for tables with only B-Tree indexes. The Cost
Based Optimizer can choose to use bitmap access paths without the existence of bitmap indexes; in order to do
so, it uses BITMAP CONVERSION FROM ROWIDS and BITMAP CONVERSION TO ROWIDS operations. Those
operations are CPU intensive. If a query in the product for which those operations are performed selects a small
number of rows, then there should not be much of an impact. However, if those queries select a large number of
rows, there may be a negative impact on performance. If you are facing any such issues, this parameter should be
explicitly set to false at the database level.
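As a hedged sketch, and assuming the parameter in question is the hidden _b_tree_bitmap_plans optimizer parameter (hidden parameters should only be changed under guidance from Oracle Support), the database level setting would look like this:

```sql
-- Assumption: the parameter discussed above is _b_tree_bitmap_plans.
-- Change hidden parameters only under Oracle Support guidance.
ALTER SYSTEM SET "_b_tree_bitmap_plans" = FALSE;
```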
Where:
-h Help
This command line can be used in site specific DBA scripts or as a standalone command line. Executing the utility
without any options starts interactive mode.
There are a number of sources of information that can replace a full data model and present the data model
information in bite-sized chunks:
» The data model information is contained in the Data Dictionary component of the Application Viewer.
Note:
Not all Oracle Utilities Application Framework products include a conversion capability.
» Each of the Business Process manuals for the product outlines the functionality and contains data models
specifically for that component.
» From a maintenance cost point of view, all the code is in one place. This reduces maintenance effort.
» Databases implement all or nothing referential integrity. This means that referential integrity is checked whether
the data has changed or not, which, from a performance point of view, potentially wastes time. The Maintenance
Objects in the product decide when to enforce referential integrity rules.
» Most of the referential rules in the product are optional. If there is a value in the foreign key field, it is checked; if
there is no value (blanks, zero or nulls), then referential integrity is not checked unless it is a mandatory column. This
is not possible with database imposed referential integrity.
» If the database controlled referential integrity, the application would have no control over when it is imposed in the
course of a transaction. Maintenance Object controlled referential integrity allows finer levels of control over when
referential integrity is enforced in the transaction flow.
» Each database implements referential integrity in a slightly different way. To reduce maintenance costs, code
differences are kept to a minimum.
» Maintenance Object enforced referential integrity is more efficient as far as the product is concerned and translates
to superior performance across many database types.
If you insist on having the data model in a tool or adorning a large wall, then the following process is recommended
to generate the data model using the meta-data:
» Export the CISADM schema (with no data) as a backup using the database export utility, or use SQL Developer
to clone the schema to a ghost schema.
» Create constraints from the meta-data structure. The two Oracle PL/SQL scripts below can be used to achieve this.
The names of the constraints are already documented in the meta-data. Run the utility to create the
constraints in the database.
Function to join
create or replace function join
(
  -- The delimiter parameter is assumed; the original listing was truncated.
  p_cursor    sys_refcursor,
  p_delimiter varchar2 default ','
) return varchar2
is
  l_value  varchar2(4000);
  l_result varchar2(32767);
begin
  loop
    fetch p_cursor into l_value;
    exit when p_cursor%notfound;
    if l_result is not null then
      l_result := l_result || p_delimiter;
    end if;
    l_result := l_result || l_value;
  end loop;
  close p_cursor;
  return l_result;
end join;
/
show errors;
Script to Create Constraints
SET serverout ON size 1000000
spool constraints.sql
DECLARE
  -- Foreign key constraints from the meta-data, joined to user_indexes to
  -- locate the table owning the referenced (parent) constraint.
  CURSOR c1
  IS
    SELECT tbl_name,
           CONST_ID,
           REF_CONST_ID,
           table_name
      FROM ci_MD_CONST,
           user_indexes
     WHERE CONST_TYPE_FLG = 'FK'
       AND TRIM(index_name) = TRIM(REF_CONST_ID)
     ORDER BY 1;
  ---
  stmt           VARCHAR2(400);
  field_list     VARCHAR2(300);
  ref_field_list VARCHAR2(300);
BEGIN
  FOR r1 IN c1
  LOOP
    -- Columns that make up the foreign key
    SELECT JOIN(CURSOR
             (SELECT trim(fld_name)
                FROM ci_md_const_fld
               WHERE const_id = r1.const_id
               ORDER BY seq_num))
      INTO field_list
      FROM dual;
    -- Columns of the referenced (parent) constraint
    SELECT JOIN(CURSOR
             (SELECT trim(fld_name)
                FROM ci_md_const_fld
               WHERE const_id = r1.ref_const_id
               ORDER BY seq_num))
      INTO ref_field_list
      FROM dual;
    stmt := 'ALTER TABLE ' || trim(r1.tbl_name)
         || ' ADD CONSTRAINT ' || trim(r1.const_id)
         || ' FOREIGN KEY (' || field_list || ')'
         || ' REFERENCES ' || trim(r1.table_name)
         || ' (' || ref_field_list || ');';
    dbms_output.put_line(stmt);
  END LOOP;
END;
/
spool OFF;
Note:
This may take a while for the WHOLE data model.
» Remove the newly created constraints. This returns the database to its original condition.
set serverout on size 1000000
spool drop_constraints.sql
select 'ALTER TABLE ' || tbl_name || ' drop constraint ' || CONST_ID || ';'
from ci_MD_CONST where CONST_TYPE_FLG='FK' order by tbl_name, CONST_ID;
spool off;
@drop_constraints.sql
exit;
Reload the database. You then have the data model in your tool and the database returned to its original state.
To configure RAC Support for the Oracle Utilities Application Framework configuration:
» Ensure that the database component of the product has been setup using Real Application Clustering to your site
standards with at least one node (RAC One Node can also be used if required).
» Configure the location of the ons.jar file to be used for the product installation in the ONS JAR
Directory menu option. If you have installed Oracle on a server other than the one the product is installed
upon, then it is recommended to copy the ons.jar file to an accessible location on the server containing the
product. By default the ons.jar file is located in the ons directory under ORACLE_HOME.
Note:
The administration user used for the product MUST have read permission at least to this file.
In the Database Configuration section of the configureEnv[.sh] utility specify the following:
» For the Database Server and Database Port specify any value (as they will be ignored).
28 In Oracle Utilities Application Framework V4.1 and above, the Universal Connection Pool (UCP) was used for database connectivity. Configuration of
FCF should take this into account (as per the indicated documentation).
» Compression applies to disk as well as memory (in the database buffer cache). This saves both storage costs and
means that more data can be loaded into memory. Whilst there is a CPU overhead with compression, due to the
compression and decompression activities, the memory processing savings may cancel out any overhead to yield
performance improvements.
» Compressed data can co-exist with uncompressed data. For example, compression can be enabled so that any
new data is compressed automatically30.
Oracle offers various levels of compression.
» Basic – Each edition includes a basic compression algorithm.
» Advanced Compression – Advanced Compression is an option on the Enterprise Edition of Oracle and offers
higher levels of compression as well as compression optimized for various activities such as OLTP.
» Hybrid Columnar Compression – This is the newest and most advanced compression algorithm that offers the
highest level of compression and compression optimizations. At the present time this is offered within Oracle
ExaData with hardware assisted compression, to reduce compression CPU overheads.
» Compression can be used with Oracle Utilities Application Framework products, with all the options described
above, as it is transparent to the underlying product code. Guidelines for compression are available from the
Database Administration Guide. Implementation advice is available in the Advanced Compression with Oracle
11g whitepaper.
29 To use TNSAlias the Oracle Client (or Oracle Database) must be installed on the application server machine to provide the TNS infrastructure and
this installation must be specified in the ORACLE_HOME parameter for the installation.
30 To compress the whole table in this example, it would have to be exported, truncated and reloaded to compress all records.
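As an illustrative sketch (the table name is hypothetical and the OLTP option requires the Advanced Compression license), compression can be enabled on an existing table so that newly inserted data is compressed automatically:

```sql
-- Hypothetical table name. Existing rows remain uncompressed until the
-- table is reorganized (see footnote 30); only new data is compressed.
ALTER TABLE cisadm.my_large_table COMPRESS FOR OLTP;
```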
» Access Log (HTTP session monitoring; applies to Web): Transaction usage, session analysis, error rates,
bandwidth usage and click stream analysis.
» Security Logs (security auditing information; applies to Web): Authentication attempts, lockouts, etc.
» Oracle WebLogic JMX palette (monitoring metrics from WebLogic; applies to Web, IWS): Active Sessions,
Request Processing Time, Requests per minute, Pending Requests, Stuck Threads, Application Status, Certificate
Expiry, JDBC Open Connections, JDBC Free Connections, JDBC Connection Waivers, JDBC Connections Closed,
JDBC Cache Statements Used, JDBC Connection Leaks, JDBC Connection Pool Size, JDBC Connection Request
Failures, JDBC Requests That Waited, JDBC Connections Wait Success, JDBC Successful Connections (%), Data
Source State, MDB Messages per minute, JMS Connections, JMS Messages Pending (MDB), JMS Current
Messages, Request Processing Time by Servlet/JSP, Response Time by Servlet/JSP.
» JVM JMX (metrics available from any JVM via the java.lang.management API; applies to All): CPU Usage,
Active Threads, Free Heap, Heap Size, Nursery Size, Garbage Collector Invocations per minute, Garbage Collector
Invocation Time, Garbage Collector Execution Time, Garbage Collector Old Heap Percent Free, Garbage Collector
Percent Time Spent.
» Oracle WebLogic Web Services Metrics (Web Services and Web Services Manager specific metrics, by
operation; applies to IWS): Execution Time, Invocation Time, Response Count, Response Error Count, Response
Time.
» Online JMX (Business Application Server JMX capability; applies to Web, IWS): Read Count, Delete Count,
Change Count, Add Count, Default Item Count, Execute BO Count, Execute BS Count, Execute List Count, Execute
Search Count, Read System Count, Validate Count, Execute SS Count, Minimum Elapsed Time per service per
transaction type, Maximum Elapsed Time per service per transaction type, Average Elapsed Time per service per
transaction type.
» Batch JMX (Batch Cluster JMX capability; applies to Batch): Thread Count, Member List, Batch Elapsed Time
per thread, Batch Throughput per thread, Number Processed per thread, Error Number per thread.
» Operating System (basic operating system metrics; applies to All): CPU, Memory, Run Queue Length, Disk
metrics.
» Database (basic database monitoring; applies to Database): Refer to the Database Metrics Manual for the full
list.
Note:
These metrics are discussed in the Server Administration Guide as well as the Performance
Troubleshooting Guideline Series (Doc Id: 560382.1) whitepapers.
Note:
Refer to Oracle Application Management Pack for Oracle Utilities Overview (Doc Id: 1474435.1) available
from My Oracle Support for a summary of the functionality available.
Appendix
This section primarily outlines advice for customers who are using versions of Oracle Utilities Application Framework
supporting COBOL runtimes. In Oracle Utilities Application Framework V4.3.x and above, COBOL is no longer
supported as a development or runtime language.
Note:
Not all products support COBOL based extensions; therefore this appendix may not apply. Check your
installation guide for more details.
cobsje -J $JAVA_HOME
Note:
This command should be executed AFTER executing the splenviron[.sh] utility, which initializes the
environment variables used by the utilities and places the COBOL runtime in the PATH.
If the license is NOT installed the response should be similar to the text below:
If the license key is installed correctly the cobjse utility will return a message similar to the following:
cob -v
This should return output similar to the following:
cob64 -C nolist -v
I see no work
The cob64 in the output indicates the use of 64-bit COBOL.
If the product has COBOL based background processes and the COBOL license is not installed correctly (see
Checking COBOL Installation for more details) then an error message similar to the example below will be
displayed:
com.splwg.base.support.context.ContextFactory.createDefaultContext(ContextFactory.java:569): error initializing test context
Note:
This change should not be attempted if the interface using the file is 32 bit as this only applies to 64 bit
COBOL on a 64 Bit operating system.
By default, any 64 bit COBOL based extract product process will create a file up to a 4GB limit. In the unlikely event
that the extract process needs to create a file bigger than 4GB there is a way of instructing the COBOL runtime to
support larger files.
You must create a text based extension configuration file (say cmextfh.cfg) with the following contents:
[XFH-DEFAULT]
FILEMAXSIZE=8
IDXFORMAT=8
You then place this configuration file in a location that can be referred to by the runtime. You can either place the
file in $SPLEBASE/scripts (or %SPLEBASE%\scripts) or in a site specific central location. To enable support
for larger formats you initialize the EXTFH environment variable with the location of the configuration file. For
example:
set EXTFH=D:\oracle\TUGBU\scripts\cmextfh.cfg (for Windows)
This can be done in your .profile (for Linux/UNIX) or using the facilities outlined in Custom Environment Variables or
JAR files.
For additional details and additional parameters refer to My Oracle Support Doc Id: 817617.1.
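For Linux/UNIX, an equivalent setting could be added to the .profile; the path below is illustrative only:

```shell
# Illustrative path only: point the Micro Focus extended file handler
# at the site specific configuration file created above.
EXTFH=/spl/TUGBU/scripts/cmextfh.cfg
export EXTFH
```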
It is worth considering more instances of the Child JVM's if any of the following situations occur:
The site has a large number of users (>800) who use a large proportion of the product over the business day. In
this case there are a lot of potential calls to COBOL modules by different users, and to avoid out of memory
conditions it is important to have more Child JVM's available. This situation can also be mitigated by the presence of
more than one Web Application Server, as each Web Application Server has its own Child JVM's.
In most cases the default value for the number of Child JVM's is sufficient for most non-production situations. Refer
to the Production Environment Configuration Guidelines for production level settings.
The COBOL processes (expressed as shared libraries and executables on the operating system) typically are
attached to the JVM when they are first executed and remain attached as long as the JVM is executing for reuse.
This has an unfortunate consequence in that the thread bound memory used by those COBOL objects cannot be
released until the parent process (in this case the JVM) has stopped executing (i.e. dies). This thread-bound
memory is primarily memory allocated by the Microfocus runtime on the C heap. As threads return to the thread
pool and are used again to process calls to different COBOL objects, the memory footprint may continue to grow as
different COBOL objects are called. Over time it may be the case that each thread allocates memory for the
complete set of objects. If not managed correctly this situation can lead to out of memory conditions.
As the Child JVM has limited control over individual objects, a number of key elements have been added to the Oracle
Utilities Application Framework (that require configuration) to optimize memory management of the Child JVMs:
Load is balanced across the available Child JVM's allocated to the product using a round robin technique to reduce
the impact of memory increases.
Child JVM's reuse existing loaded modules as much as possible. An individual module that has been called is only
attached once per Child JVM at any given time.
An installation parameter in the Environment Configuration called Release Cobol Thread Memory controls this
behavior. This value should be set to true. This can be overridden for each mode of access (online, batch and XAI)
by specifying the spl.runtime.cobol.remote.releaseThreadMemoryAfterEachCall parameter in the
spl.properties file.
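For example, the override in the relevant spl.properties file would look like this (the value shown reflects the recommended setting above):

```properties
spl.runtime.cobol.remote.releaseThreadMemoryAfterEachCall=true
```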
Note:
Refer to the Batch Best Practices (Doc Id: 836362.1) whitepaper available from My Oracle Support for
advice pertaining to the optimal setting of this parameter for background processes.
To reclaim memory of the COBOL objects, the Child JVM must be shunned (stopped and restarted) on a regular
basis. This is known as brute force memory management. The Oracle Utilities Application Framework allows control
of this in the relevant spl.properties file by setting the following parameters:
Parameter Comments
As soon as either tolerance is met the Child JVM is shunned automatically. This does not necessarily occur
straightaway as it waits for any uncompleted outstanding work in the individual Child JVM to complete. As the
product uses more than one Child JVM at any time, availability is not compromised as at least one Child JVM is
active at any time.
The default values for these parameters are sufficient for most sites. Refer to the Server Administration Guide
supplied with the product for the default values and additional advice on this facility.
With the above facilities the COBOL memory within the Child JVM can be managed by the Oracle Utilities
Application Framework to help avoid memory issues.
In some situations, the Child JVM's may spin. This causes multiple startup/shutdown Child JVM messages to be
displayed and recursive Child JVM's to be initiated and shunned.
If this issue occurs at your site then there are a number of options to address the issue:
» Configure an Operating System level kill command to force the Child JVM to be shunned when it becomes stuck.
» Configure a Process.destroy command to be used if the kill command is not configured or desired.
» Specify a time tolerance to detect stuck threads before issuing the Process.destroy or kill commands.
Note:
This facility is also used when the Parent JVM is shut down, to ensure no zombie Child JVM's exist.
The following additional settings must be added to the spl.properties for the Business Application Server to use
this facility:
» spl.runtime.cobol.remote.kill.command – Specify the command to kill the Child JVM process. This can
be a command or a script to execute to provide additional information. The kill command property can
accept two arguments, {pid} and {jvmNumber}, in the specified string. The arguments must be enclosed in
curly braces as shown here.
Note:
The PID will be appended to the killcmd string, unless the {pid} and {jvmNumber} arguments are
specified. The jvmNumber can be useful if passed to a script for logging purposes.
Note:
Unless otherwise specified, it is recommended to use the kill command option if shunning JVM's is an
issue. The spl.runtime.cobol.remote.destroy.enabled value can then remain at its default value,
false, unless otherwise required.
» spl.runtime.cobol.remote.kill.delaysecs – Specify the number of seconds to wait for the Child JVM
to terminate naturally before issuing the Process.destroy or kill commands. Default is 10 seconds.
For example:
spl.runtime.cobol.remote.destroy.enabled=false
spl.runtime.cobol.remote.kill.delaysecs=10
» When a Child JVM is to be recycled, these properties are inspected and the
spl.runtime.cobol.remote.kill.command executed if provided. This is done after waiting
spl.runtime.cobol.remote.kill.delaysecs seconds to give the JVM time to shut itself down. The
spl.runtime.cobol.remote.destroy.enabled property must be set to true AND the
spl.runtime.cobol.remote.kill.command omitted for the old Process.destroy command to be used
on the process.
Note:
By default the spl.runtime.cobol.remote.destroy.enabled property is set to false and the facility
is therefore disabled.
if [ "$1" = "" ]
then
   echo "forcequit.sh: no process id supplied"
   exit 1
fi
THETIME=`date '+%Y-%m-%d %H:%M:%S'`
javaexec=cobjrun
ps e $1 | grep -c $javaexec > /dev/null
if [ $? = 0 ]
then
   echo "$THETIME: Process $1 is an active $javaexec process -- issuing kill -9 $1" >>$SPLSYSTEMLOGS/forcequit.log
   kill -9 $1
   exit 0
else
   echo "$THETIME: Process id $1 is not a $javaexec process or not active -- no kill issued" >>$SPLSYSTEMLOGS/forcequit.log
   exit 1
fi
Note:
The above script is a sample only.
» This script's name would then be specified as the value for the spl.runtime.cobol.remote.kill.command
property, for example:
spl.runtime.cobol.remote.kill.command=forcequit.sh
» The forcequit script does not have any explicit parameters but {pid} is passed automatically.
» To use the jvmNumber parameter it must be explicitly specified in the command. For example, to call the script
forcequit.sh and pass it the {pid} and the child JVM number ({jvmNumber}), specify it as follows:
spl.runtime.cobol.remote.kill.command=forcequit.sh {pid} {jvmNumber}
» The script can then use the JVM number for logging purposes or to further ensure that the correct pid is
being killed.
» If the arguments are omitted, the {pid} is automatically appended to the
spl.runtime.cobol.remote.kill.command string.
CONNECT WITH US
blogs.oracle.com/theshortenspot
facebook.com/oracle
twitter.com/theshortenspot
oracle.com

Copyright © 2007-2017, Oracle and/or its affiliates. All rights reserved. This document is provided for information purposes only, and
the contents hereof are subject to change without notice. This document is not warranted to be error-free, nor subject to any other
warranties or conditions, whether expressed orally or implied in law, including implied warranties and conditions of merchantability or
fitness for a particular purpose. We specifically disclaim any liability with respect to this document, and no contractual obligations are
formed either directly or indirectly by this document. This document may not be reproduced or transmitted in any form or by any
means, electronic or mechanical, for any purpose, without our prior written permission.

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.
Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used under license and
are trademarks or registered trademarks of SPARC International, Inc. AMD, Opteron, the AMD logo, and the AMD Opteron logo are
trademarks or registered trademarks of Advanced Micro Devices. UNIX is a registered trademark of The Open Group. 1216