An Oracle White Paper June 2011

Technical Best Practices Oracle Utilities Application Framework

Technical Best Practices
Conventions used in this whitepaper
Introduction
Background of Oracle Utilities Application Framework
Installation Best Practices
Read the Installation Guide
Ensure the prerequisites are installed
Environment Practices
Using multiple administrators
Checking Java Installation
Checking COBOL Installation
Additional Oracle WebLogic Installation settings
COBOL License Errors in Batch
Location of Installation Logs
XML Parser Errors in installation
AppViewer cannot Co-Exist in Archive Mode
Implementing Secure Protocols (https/t3s)
General Best Practices
Limiting production Access
Regular Collection and Reporting of Performance Metrics
Respecting Record Ownership
Backup of Logs
Post Process Logs
Check Logs For Errors
Optimize Operating System Settings
Optimize connection pools
Read the available manuals
Technical Documentation Set
Whitepapers available
Implementing Industry Processes
Using Automated Test Tools
Custom Environment Variables or JAR files
Help and AppViewer can be used standalone
Re-Register only when necessary
Secure default userids
Consider different userids for different modes of access
Don't double audit
Use Identical Machines
Regularly restart machines
Avoid using direct SQL to manipulate data
Minimize Portal Zones not used
Routine Tasks for Operations
Typical Business Day
Login Id versus Userid
Hardware Architecture Best Practices
Failover Best Practices
Online and Batch tracing and Support Utilities
General Troubleshooting Techniques
Data Management Best Practices
Respecting Data Diversity
Archiving
Data Retention Guidelines
Removal of Staging Records
Partitioning
Compression
Database Clustering
Backup and Recovery
Writing Files Greater than 4GB
Client Computer Best Practices
Make sure the machine meets at least the minimum specification
Internet Explorer Caching Settings
Clearing Internet Explorer Cache
Optimal Network Card Settings
Network Best Practices
Network bandwidth
Ensure legitimate Network Traffic
Regularly check network latency
Web Application Server Best Practices
Make sure that the access.log is being created
Examine Memory Footprint
Optimize Garbage Collection
Turn off Debug
Load balancers
Preload or Not?
Native or Product provided utilities?
Hardware or software proxy
What is the number of Web Application instances do I need?
Configuring the Client Thread Pool Size
Defining external LDAP to the Web Application Server
Synchronizing LDAP for security
Appropriate use of AppViewer
Fine Grained JVM Options
Customizing the server context
Clustering or Managed?
Allocate port numbers appropriately
Monitoring and Managing the Web Application Server using JMX
Enabling autodeployment for Oracle WebLogic console
Password Management solution for Oracle WebLogic
Error configuring Oracle WebLogic credentials
Corrupted SPLApp.war
Web Application Server Logs
IBM WebSphere Specific Advice
Distributed or local installation
Number of Child JVMS
COBOL
Cache Management
Monitoring and Managing the Business Application Server using JMX
Database Connection Management
XPath Memory Management
Database Best Practices
Use the Correct NLS settings (Oracle)
Monitoring database connections
Consider changing Bit Map Tree parameter
OraGenSec command line Parameters
SetEnvId command line Parameters
Building the Data Model

Technical Best Practices

This white paper outlines common and best practices, drawn from IT groups at sites around the world using Oracle Utilities Application Framework based products and from Oracle internal studies, that have benefited those sites. This information is provided to guide other sites in implementing or maintaining the product.

While all care has been taken in providing this information, implementation of the practices outlined in this document may NOT guarantee the same level of (or any) improvement. Some of these practices may not be appropriate for your site. It is recommended that each practice be examined in light of your particular organizational policies and use of the product. If the practice is deemed beneficial to your site, then consider implementing it. If the practice is not appropriate (e.g. for cost and other reasons), then it should not be considered.

This whitepaper covers V2.x and above of the Oracle Utilities Application Framework based products. Where advice is applicable to a particular version of the product, a specific reference to that version is displayed. For V1.x customers, specific information for V1 is located in the V1 Addendum version of this document.

Note: For publishing purposes, the word "product" will be used to denote all Oracle Utilities Application Framework based products.

Note: Advice in this document is primarily applicable to the latest version of the Oracle Utilities Application Framework at time of publication. Some of this advice may apply to other versions of the Oracle Utilities Application Framework and may be applied at site discretion.

Note: In some sections of this document the environment variable $SPLEBASE (or %SPLEBASE%) is used. This denotes the root location of the product install. Substitute the appropriate value for the environment used at your site.

Conventions used in this whitepaper

The advice in this document applies to any product based upon Oracle Utilities Application Framework versions 2.1 and above. Refer to the installation documentation to verify which version of the framework applies to your version of the product. For publishing purposes the specific facilities and instructions for specific framework versions will be indicated with icons:

  • Advice or instructions marked with this icon apply to Oracle Utilities Application Framework V2.1 based products and above.

  • Advice or instructions marked with this icon apply to Oracle Utilities Application Framework V2.2 based products and above.

  • Advice or instructions marked with this icon apply to Oracle Utilities Application Framework V4.0 based products and above.

  • Advice or instructions marked with this icon apply to Oracle Utilities Application Framework V4.1 based products and above.

Introduction

Implementation of the product at any site introduces new practices into the IT group to maintain the health of the system and provide the expected service levels demanded by the business. While configuration of the product is important to the success of the implementation (and subsequent maintenance), adopting new practices can help ensure that the system will operate within acceptable tolerances and support the business goals.

This white paper outlines some common practices that have been implemented at sites, around the globe, that have proven beneficial to that site. They are documented here so that other sites may consider adopting similar practices and potentially deriving benefit from them as well.

The recommendations in this document are based upon experiences from various sites and internal studies, which have benefited from implementing the practices outlined in the document.

Background of Oracle Utilities Application Framework

The Oracle Utilities Application Framework is a reusable, scalable and flexible java based framework which allows other products to be built, configured and implemented in a standard way.

When Oracle Utilities Customer Care & Billing was migrated from V1 to V2, it was decided that the technical aspects of that product be separated to allow for reuse and independence from technical issues. The idea was that all the technical aspects would be concentrated in this separate product (i.e. a framework) and allow all products using the framework to concentrate on delivering superior functionality. The product was named the Oracle Utilities Application Framework (oufw is the product code).

The technical components are contained in the Oracle Utilities Application Framework which can be summarized as follows:

Metadata – The Oracle Utilities Application Framework is responsible for defining and using the metadata to define the runtime behavior of the product. All the metadata definition and management is contained within the Oracle Utilities Application Framework.

UI Management – The Oracle Utilities Application Framework is responsible for defining and rendering the pages and responsible for ensuring the pages are in the appropriate format for the locale.

Integration – The Oracle Utilities Application Framework is responsible for providing the integration points to the architecture. Refer to the Oracle Utilities Application Framework Integration Overview for more details.

Tools – The Oracle Utilities Application Framework provides a common set of facilities and tools that can be used across all products.

Technology – The Oracle Utilities Application Framework is responsible for all technology standards compliance, platform support and integration.

The figure below summarizes some of the facilities that the Oracle Utilities Application Framework provides:

[Figure: framework facilities grouped under Meta Data, UI Management, Integration, Tools and Technology, including Layout, Personalization, Scripting, Roles, Rules Language, Localization, Business Services, Business Objects, Maintenance Objects, DB Structure, Zones, Portal, Language, Locale, BPA Scripting, UI Maps, XAI, Scheduler, Web Services, Dictionary, Staging, Conversion, To Do, Security, Auditing, Algorithm, Multi-DB, XML Services, J2EE, AJAX and SOA.]

Figure 1 - Overview of Oracle Utilities Application Framework components

There are a number of products from the Tax and Utilities Global Business Unit as well as from the Financial Services Global Business Unit that are built upon the Oracle Utilities Application Framework. These products require the Oracle Utilities Application Framework to be installed first and then the product itself installed onto the framework to complete the installation process.

There are a number of key benefits that the Oracle Utilities Application Framework provides to these products:

Common facilities – The Oracle Utilities Application Framework provides a standard set of technical facilities so that products can concentrate on the unique aspects of their markets rather than making technical decisions.

Common methods of configuration – The Oracle Utilities Application Framework standardizes the technical configuration process for a product. Customers can effectively reuse the configuration process across products.

Common methods of implementation - The Oracle Utilities Application Framework standardizes the technical aspects of a product implementation. Customers can effectively reuse the technical implementation process across products.

Quicker adoption of new technologies – As new technologies and standards are identified as being important for the product line, they can be integrated centrally benefiting multiple products.

Multi-lingual and Multi-platform - The Oracle Utilities Application Framework allows the products to be offered in more markets and across multiple platforms for maximized flexibility.

Cross product reuse – As enhancements to the Oracle Utilities Application Framework are identified by a particular product, all products can potentially benefit from the enhancement.

Note: Use of the Oracle Utilities Application Framework does not preclude the introduction of product specific technologies or facilities to satisfy markets. The framework minimizes the need and assists in the quick integration of a new product specific piece of technology (if necessary).

Installation Best Practices

During the initial phases of an implementation, a copy of the product will need to be installed. During the implementation a number of additional copies will be installed, including production. This section outlines some practices that customers have used to make this process smooth.

Read the Installation Guide

One of the most important pieces of advice in this document is to read the installation guide. It provides valuable information about what needs to be installed and configured, as well as the order of the installation. Failure to follow the instructions can cause unnecessary delays to the installation.

If you are upgrading to a new version, read the new installation guide as well, as it will contain instructions on how to upgrade to the new version as well as details of what has been changed in the new version.

Ensure the prerequisites are installed

When installing, there are a number of third party prerequisite software packages that must be obtained (i.e. downloaded) before the actual installation of the product software can commence. Read the Installation Guide and Quick Installation Guide to download and install the prerequisite software prior to installing the product.

Note: For customers who are upgrading, the installation of product and its related third party software is designed so that more than one version of product can co-exist.

Environment Practices

Note: There is a more detailed discussion of effective Environment Management in the Environment Management document of the Software Configuration Management series of whitepapers. Refer to that document for further advice.

When installing the product at a site, each copy of the product is regarded as an environment to perform a particular task or group of tasks. Typically, without planning, this can lead to a larger than anticipated number of environments. This can have a negative flow-on effect by increasing overall maintenance effort and resource usage (hardware and people), which may in turn cause delays in implementations. To minimize the impact of environments on their implementations, customers have used the following advice:

At the start of the implementation decide the number of environments to use. Keep this to a minimum and consider sharing environments between tasks. Another technique associated with this is to specify an end date for each environment. This is the date the environment can be removed from the implementation. This can force rethinks on the number of environments that are to be used at an implementation and may force sharing.

For each environment, consider the impact on the hardware and maintenance effort including the following:

The time and resources it takes to install the environment.

The time and resources it takes to keep the environment up to date including application of single fixes, rollups/service packs and upgrades. Do not forget application and management of customization builds.

The time and resources to maintain the ConfigLab and Archiving facilities for multiple environments, if used at an implementation. This includes the setup and regular migrations that will be performed.

Note: ConfigLab and Archiving only apply to certain Oracle Utilities Application Framework products

The time and resources it takes to backup and restore environments on a regular basis. In some implementations, having different backup schemes for environments based upon tasks and update frequency for that environment, i.e. more updated = more frequent backup, may provide some savings.

The time and resources to manage the disk space for each environment including regular cleanups.

Environments may be set up so that the database can be reduced to a single database instance with each environment having a different schema/owner. This will reduce the memory footprint of the DBMS on the machine but may reduce availability if the database instance is shut down (all environments are affected). For non-production, most Oracle customers create a database instance for each environment, and most DB2/UDB customers create one database subsystem for each environment.
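The disk space consideration above can be monitored with a simple script. The sketch below is illustrative only; the function name and the environment root directory are our assumptions, not product utilities:

```shell
# Report disk usage for each environment under a common root so that
# regular cleanups can be targeted at the largest environments first.
report_env_usage() {
  base=$1
  for env_dir in "$base"/*/; do
    [ -d "$env_dir" ] || continue
    du -sh "$env_dir"          # total size of this environment's files
  done
}

# Example invocation, assuming environments are installed under /spl:
report_env_usage "${SPL_ROOT:-/spl}"
```

Running this from cron and comparing the output week to week gives an early warning before an environment exhausts its allocated space.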

Using multiple administrators

By default, when installing product a single administrator account (usually referred to as splsys) is used to install and own the product. This is the default behavior of the installation and apart from specifying a different userid than the default splsys, it is possible to use other userids to own all or individual environments.

For example, if the conversion team wishes to have the ability to start, stop and monitor their own environments, you can create another administrator account and install their copies of the product using that userid. This allows the conversion team to control their own environments. If you did not have the ability to use multiple administrators, then they would have access to all environments (as you would have to give them access to the splsys account).

One of the advantages of this approach is that you can delegate management of a copy of the product to other teams without compromising other environments. Another advantage is that you can quickly identify UNIX resource ownership by user rather than using other methods.

The only disadvantage is that to manage all copies of product you will need to logon to the additional administration accounts that own the various copies.
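With one administration account per environment, resource ownership can be inspected straight from the process table. A hypothetical sketch follows; the helper name is ours, and splsys/splconv are example account names from this section:

```shell
# List the processes owned by each environment administrator account.
list_admin_processes() {
  for admin in "$@"; do
    echo "== processes owned by $admin =="
    # -u selects by effective user; suppress errors for accounts that
    # do not exist on this machine or own no processes.
    ps -u "$admin" -o pid= -o comm= 2>/dev/null || echo "(none)"
  done
}

list_admin_processes splsys splconv
```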

Checking Java Installation


Note: For Oracle Utilities Application Framework V4.1 and above, it is possible to use two differing Java Virtual Machine versions if COBOL is used, as it is possible to configure the CHILD_JVM_JAVA_HOME separately. If this is the case then repeat this process for the CHILD_JVM_JAVA_HOME JVM.

When the product is installed, one of the first prerequisites to be verified is the version of Java installed and referenced using the environment variable $JAVA_HOME (or %JAVA_HOME% on Windows). Whilst the product checks this version, it can be checked manually prior to installation (and at any time) using the following commands:

$JAVA_HOME/bin/java -version

Or (on Windows):

%JAVA_HOME%\bin\java -version

For example:

Linux:

#> $JAVA_HOME/bin/java -version
java version "1.6.0_18"
Java(TM) SE Runtime Environment (build 1.6.0_18-b07)
Java HotSpot(TM) 64-Bit Server VM (build 16.0-b13, mixed mode)

Oracle Utilities Application Framework - Technical Best Practices

AIX:

#> $JAVA_HOME/bin/java -version
java version "1.6.0"
Java(TM) SE Runtime Environment (build pap6460sr7ifix-20100220_01(SR7+IZ70326))
IBM J9 VM (build 2.4, JRE 1.6.0 IBM J9 2.4 AIX ppc64-64 jvmap6460sr7-20100219_54049 (JIT enabled, AOT enabled)
J9VM - 20100219_054049
JIT - r9_20091123_13891
GC - 20100216_AA)
JCL - 20091202_01

Windows:

C:\> %JAVA_HOME%\bin\java -version
java version "1.6.0_20"
Java(TM) SE Runtime Environment (build 1.6.0_20-b02)
Java HotSpot(TM) Client VM (build 16.3-b01, mixed mode, sharing)

HP-UX:

#> $JAVA_HOME/bin/java -version
java version "1.6.0.10"
Java(TM) SE Runtime Environment (build 1.6.0.10-jinteg_11_mar_2011_09_19-b00)
Java HotSpot(TM) Server VM (build 19.1-b02-jinteg:2011mar11-07:33, mixed mode)

Note: Verify the java version number and operating mode (32/64 bit) against the Quick Installation Guide provided with the product.
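The check above can be scripted as part of a pre-installation health check. The sketch below uses a hypothetical helper (not a product utility); note that `java -version` writes its output to stderr:

```shell
# Extract the quoted version string from the first line of
# `java -version` output, e.g. 'java version "1.6.0_18"' -> 1.6.0_18.
parse_java_version() {
  printf '%s\n' "$1" | sed -n 's/.*version "\([^"]*\)".*/\1/p'
}

# Demonstration on a captured line:
parse_java_version 'java version "1.6.0_18"'    # prints 1.6.0_18

# Live usage (commented; requires $JAVA_HOME to be set):
#   line=$("$JAVA_HOME/bin/java" -version 2>&1 | head -1)
#   parse_java_version "$line"
```

The extracted string can then be compared against the version documented in the Quick Installation Guide.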

Checking COBOL Installation

Note: Not all products support COBOL based extensions; therefore this section may not apply. Check with your installation guide for more details.

By default, when the COBOL runtime is installed, a license file is required to complete the installation as outlined in the Quick Installation Guide for the product. The license installation can be checked using the process outlined in the Installation Guide or the following command:

cobsje -J $JAVA_HOME

Note: This command should be executed AFTER executing the splenviron[.sh] utility to initialize the environment variables used by the utilities and place the COBOL runtime in the PATH.

If the license is NOT installed the response should be similar to the text below:

Error - No license key detected. Application Server requires a license key in order to execute. Please refer to your application supplier.

Oracle Utilities Application Framework - Technical Best Practices

This message indicates that there is an issue with the license key on the server. If this message appears, it is recommended that the COBOL runtime be re-installed and the license key re-initialized using apptrack, as per the Installation Guide for the product.

If the license key is installed correctly, the cobsje utility will return a message similar to the following:

#> cobsje -J $JAVA_HOME
Java version = 1.6.0_20
Java vendor = Sun Microsystems Inc.
Java OS name = SunOS
Java OS arch = sparcv9
Java OS version = 5.10

Additionally, the 64 bit version of COBOL is required for 64 bit platforms as indicated in the Installation Guide for the product. To verify that the COBOL runtime is 64 bit, use the following command:

cob -v

This should return the output similar to the following:

cob64 -C nolist -CC -KPIC -A -KPIC -N PIC -v
I see no work

The cob64 indicates the use of 64 bit COBOL.
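This check can also be automated. A minimal sketch follows (the helper name is ours, not a product utility); it inspects the first token of captured `cob -v` output:

```shell
# Return success if the captured `cob -v` output indicates the 64 bit
# runtime (the echoed command line starts with cob64).
is_cobol_64bit() {
  case $1 in
    cob64*) return 0 ;;
    *)      return 1 ;;
  esac
}

# Demonstration on the sample output shown above:
if is_cobol_64bit "cob64 -C nolist -CC -KPIC -A -KPIC -N PIC -v"; then
  echo "64 bit COBOL runtime detected"
fi
```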

Additional Oracle WebLogic Installation settings

When installing the Oracle WebLogic Server product as a prerequisite, there are a number of additional pieces of advice that can be taken into account to optimize the installation:

Avoid installing the Oracle WebLogic Server in the home directories of users for Linux installations. Other application Linux users, such as the Oracle Utilities Application Framework administration user, should not access the home directories or any subdirectory of the home directory.

If the platform uses a hybrid 32/64 bit JDK, such as HP-PA, HPIA and Solaris64, then include the -d64 flag when initiating the installation of Oracle WebLogic Server to ensure that 64 bit is used. For example, if installing in graphical mode using the Package installer:

HP-UX/Unix:

java -d64 -jar wlsversion_generic.jar

Solaris64:

java -d64 -Xmx1024m -jar wlsversion_generic.jar

Windows:

java -d64 -jar wlsversion_generic.jar

COBOL License Errors in Batch

If the product has COBOL based background processes and the COBOL license is not installed correctly (see Checking COBOL Installation for more details) then an error message similar to the example below will be displayed:

…cobjrun64: com.splwg.base.api.batch.ThreadPoolWorker.main ended due to an exception
Exception in thread "main" com.splwg.shared.common.LoggedException:
The following stacked messages were reported as the LoggedException was rethrown:
com.splwg.base.support.context.ContextFactory.createDefaultContext(ContextFactory.java:569): error initializing test context

To resolve this issue refer to the instructions in the Quick Installation Guide about installing the COBOL license.

Location of Installation Logs

When installing the product, a log file is written for each component installed (the Oracle Utilities Application Framework is one component of the installation; the product itself is a separate installation component).

The log contains all the messages pertaining to the installation process including any error messages for installation errors encountered. The log is located in the directory the installation was initiated from and the name is in the format:

install_<product>_<environment>.log

Where:

<product> - Product code of the product component being installed. For example, FW = Oracle Utilities Application Framework.

<environment> - Name of the environment being installed.

Check this log for any error messages encountered during the installation process.
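Once the log location is known, it can be scanned mechanically after each install. The following is a sketch only; the function name, and the environment name DEV01 in the usage comment, are our own examples:

```shell
# Print any error lines (with line numbers) from an installation log and
# return a non-zero status if errors were found.
check_install_log() {
  log=$1
  if grep -i -n 'error' "$log"; then
    return 1    # errors found
  fi
  return 0      # clean log
}

# Example: check_install_log install_FW_DEV01.log
```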

XML Parser Errors in installation

The Oracle Client is used by the installers and utilities to provide access to the Perl runtime and its associated libraries. Its location is the first configuration question in the installation process.

The Oracle Client can be installed (if the product is not installed on a machine containing the Oracle Database software) or an existing ORACLE_HOME can be specified if the Oracle Database software is already installed on the machine (as it contains the Oracle Client in the installation). The value is stored in the ENVIRON.INI as the value for parameter ORACLE_CLIENT_HOME.

Note: For Windows Server environments, the 32 bit Oracle Client MUST be installed for use with the installation utilities, even if the 64 bit Oracle Database software is installed on the same machine.

If the Oracle Client or ORACLE_HOME is invalid then the following error will be returned by the installation utilities (and other installs):

Can't locate XML/Parser.pm in @INC (@INC contains: … BEGIN failed--compilation aborted at data/bin/perllib/SPL/splXMLParser.pm line 3. Compilation failed in require at data/bin/perllib/SPL/splExternal.pm line 10. BEGIN failed--compilation aborted at data/bin/perllib/SPL/splExternal.pm line 10. Compilation failed in require at install.plx line 25. BEGIN failed--compilation aborted at install.plx line 25. Error: install.plx didn't finish successfully. Exiting.

Ensure that the ORACLE_CLIENT_HOME includes the perl subdirectory to rectify this issue.
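A simple pre-flight check can confirm the Perl runtime is where the installer expects it. The sketch below creates a stand-in client home purely so the example is self-contained; in practice, point ORACLE_CLIENT_HOME at your real Oracle Client installation:

```shell
# Stand-in client home for illustration only; use your real Oracle Client path.
ORACLE_CLIENT_HOME=$(mktemp -d)
mkdir -p "$ORACLE_CLIENT_HOME/perl/bin"

# The installer's Perl runtime lives under the perl subdirectory.
if [ -d "$ORACLE_CLIENT_HOME/perl" ]; then
  echo "perl subdirectory found - installer Perl runtime available"
else
  echo "perl subdirectory missing - expect XML/Parser.pm errors"
fi
```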

AppViewer cannot Co-Exist in Archive Mode


The Application Viewer is an optional component that provides a meta data viewer for the data dictionary, batch controls, to do types, javadoc etc. It is primarily designed for use by the developers and key architects at your site 1. If the site decides to move between expanded mode 2 and archive mode (or vice versa) on Oracle WebLogic installations, then when executing initialSetup[.sh] the product may report the following error:

AppViewer.war cannot co exist with AppViewer directory

For archive mode the AppViewer.war is required and for expanded mode the AppViewer directory is used. The error message indicates both exist. This can occur when the expanded mode is changed and the initialSetup[.sh] utility is then executed. To resolve this issue, depending on the value of the WEB_ISEXPANDED parameter, the following is recommended:

TABLE 1 – APPVIEWER CO-EXIST ERROR RESOLUTION

WEB_ISEXPANDED VALUE | COMMENTS
true | Remove or rename the AppViewer.war file
false | Remove or rename the AppViewer directory

1 Generally customers do not implement the AppViewer in production.

2 Expanded mode is only available for Oracle WebLogic and Oracle Utilities Application Framework V4.0 and above.
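The resolution in the table above can be sketched as a small script. This is an illustrative sketch only: the directory layout here is created as a stand-in, while on a real environment the files live under the product's web application area and WEB_ISEXPANDED comes from ENVIRON.INI:

```shell
# Stand-in directory and files for illustration; not the real product layout.
APPDIR=$(mktemp -d)
touch "$APPDIR/AppViewer.war"
mkdir -p "$APPDIR/AppViewer"
WEB_ISEXPANDED=true          # would normally come from ENVIRON.INI

cd "$APPDIR"
if [ "$WEB_ISEXPANDED" = "true" ]; then
  # Expanded mode uses the AppViewer directory, so set the war file aside.
  mv AppViewer.war AppViewer.war.bak
else
  # Archive mode uses AppViewer.war, so set the directory aside.
  mv AppViewer AppViewer.bak
fi
```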

Implementing Secure Protocols (https/t3s)

Note: For customers using Oracle Utilities Application Framework V4.1 and above, the use of secure protocols can be enabled by specifying an HTTPS port using the configureEnv[.sh] –a utility and specifying a port number under WebLogic SSL Port Number.

Note: The instructions below are designed for Oracle WebLogic installations only. Additional steps are required in IBM WebSphere to enable secure transmission of data. Refer to the appropriate documentation for additional advice.

Note: Some of the instructions below recommend changes to individual configuration files. These manual changes may be overridden by executions of the initialSetup[.sh] utility back to the product defaults. To retain the changes across invocations of the initialSetup[.sh] utility it is recommended to use custom templates and/or configuration file user exits. Refer to the Server Administration or Configuration and Operations Guide for more details of implementing custom templates and/or configuration file user exits.

By default, all transmission of data uses the http and/or t3 3 protocols between the various tiers of the product. Whilst this default is sufficient for the vast majority of customers, some sites wish to implement the secure versions of these protocols for use with the product. The reason for their use is typically to encrypt all transmission of data from the client to the server and within the server tiers themselves.

Note: Enabling https or t3s may result in higher resource usage due to the resource requirements to encrypt and decrypt data. The extent of the resource usage will vary from platform to platform. It is advised that customers compare performance between secure and non-secure protocols before committing to secure protocols.

To implement the more secure protocol requires a number of changes and additional facilities to be enabled. The process below outlines the generic process for implementing the secure protocol:

Obtain a digital certificate for your organization from a trusted certificate authority, or generate a certificate using keytool. This is used for the encryption/decryption of data using the protocol.

Note: The certificate provided with the J2EE Web Application Server installation is to be used for demonstration purposes only. It is highly recommended that an alternative certificate be used for production environments.
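For non-production environments, a self-signed certificate can be generated with the JDK keytool. The sketch below only prints the command (a dry run) so it can be reviewed before running; the alias, keystore name, distinguished name and password are all placeholders:

```shell
# All values below are placeholders for illustration.
KEYSTORE=mykeystore.jks
ALIAS=myenv_ssl

# Dry run: print the keytool command rather than executing it.
CMD="keytool -genkeypair -alias $ALIAS -keyalg RSA -keysize 2048 \
-validity 365 -dname CN=myhost.example.com,O=MyOrg \
-keystore $KEYSTORE -storepass changeit"
echo "$CMD"
```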

3 The t3 protocol is only used for sites that have separated the Web Application and Business Application tiers using the Oracle WebLogic platform on selected versions of the Oracle Utilities Application Framework. The iiop protocol is used for the same scenario but for IBM WebSphere platforms.


Configure the J2EE Web Application Server SSL support to use the certificate as outlined in the documentation referenced below 4:

TABLE 2 – J2EE SSL CONFIGURATION

WEB APPLICATION SERVER | REFERENCE
Oracle WebLogic 10 MP2 |
Oracle WebLogic 10.3.x |
Oracle WebLogic 10.3.3 |
IBM WebSphere 6.1 |
IBM WebSphere 7.x |

Enable the HTTPS port on your environment using the console provided with your J2EE Web Application Server. Remember to reference the certificate you processed in the previous step.

Note: For customers using Oracle WebLogic on Oracle Utilities Application Framework V4.1 and above the setting for WebLogic SSL Port Number will enable this facility without the need of the console.

Note: If changes are made to the console then to retain the change across upgrades and service packs it is recommended to use custom templates or user exits to retain the setting. Refer to the Server Administration or Configuration and Operations Guide for more details of implementing custom templates. For Oracle WebLogic customers the config.xml templates may require changes.

Examine the $SPLEBASE/etc/conf directory (or %SPLEBASE%\etc\conf on Windows), unless otherwise indicated, for configuration files that use the protocol:

TABLE 3 – SSL CONFIGURATION FILES

CONFIGURATION FILE | CHANGES
spl.properties | Change references to the t3 protocol to t3s, if they exist. Change references to the http protocol to https, with the SSL port replacing the HTTP port.
web.xml | Change references to the http protocol to https, with the SSL port replacing the HTTP port.
web.xml.XAIApp | Change references to the http protocol to https, with the SSL port replacing the HTTP port.
ejb-jar.xml | Change references to the http protocol to https, with the SSL port replacing the HTTP port. This file is located under $SPLEBASE/splapp/businessapp/config/META-INF (or %SPLEBASE%\splapp\businessapp\config\META-INF on Windows).

Note: If these files are changed they may revert to the product template versions across service packs and upgrades. To retain change across service packs and upgrades it is advised to use custom templates and/or user exits. Refer to the Server Administration or Configuration and Operations Guide for more details.
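The substitutions in the table above are simple text edits. The sketch below illustrates the idea on an invented property file with placeholder key, host and ports (6500 for HTTP, 6501 for SSL); in practice, make such changes through custom templates or user exits so initialSetup[.sh] does not revert them:

```shell
# Invented sample file; the property key, host and ports are placeholders.
cat > spl.properties.sample <<'EOF'
some.example.serviceURL=http://myhost:6500/spl/XAIApp
EOF

# Switch the protocol to https and the HTTP port to the SSL port.
sed -e 's#http://#https://#g' -e 's#:6500#:6501#g' \
    spl.properties.sample > spl.properties.sample.new

cat spl.properties.sample.new
```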

4 For Oracle WebLogic customers, refer to the section Configuring Identity and Trust for the additional steps.


Shut down the J2EE Web Application Server to prepare to reflect the changes.

Run the initialSetup[.sh] –w command to reflect the changes into the server files.

Restart the J2EE Web Application Server.

Ensure that any Feature Configuration options using the product browser that use the HTTP protocol as part of their options are also converted to HTTPS and the appropriate port number. Use the Admin F Feature Configuration menu option to check each of them. The Features will vary from product to product and version to version.

Ensure that any XAI JNDI Server provider URLs using the product browser that use the http/t3 protocol as part of their options are also converted to https/t3s and the appropriate port number. Use the Admin X XAI JNDI Server menu option to maintain the JNDI server.

Any customization that refers to the HTTP protocol, such as custom algorithms or service scripts, must also be converted from HTTP to HTTPS.

For customers using the Multi-Purpose Listener (MPL), the use of the secure protocol requires altering the $JAVA_HOME/jre/lib/security/java.security file (or %JAVA_HOME%\jre\lib\security\java.security on Windows) to enable SSL support, and modifying the WLPORT entry in $SPLEBASE/splapp/mpl/MPLParamaterInfo.xml (or %SPLEBASE%\splapp\mpl\MPLParamaterInfo.xml on Windows) to use the SSL port.

TABLE 4 – JAVA SSL CONFIGURATION

VENDOR | CHANGES
Oracle WebLogic | Refer to the vendor documentation
IBM WebSphere | ssl.SocketFactory.provider=com.ibm.jsse2.SSLSocketFactoryImpl
ssl.ServerSocketFactory.provider=com.ibm.jsse2.SSLServerSocketFactoryImpl

General Best Practices

This section outlines some general practices that have been successfully implemented at various product sites.

Limiting Production Access

One of the guiding principles at successful sites is that production access is restricted to the processing necessary to run the business. This means that other non-mainstream work, such as ad-hoc queries, is either very limited or not performed on production at all. This may sound logical, but a few sites have allowed access to production from inappropriate sources, which has had an adverse impact on performance.

For example, it is not appropriate to allow people access to the production database through ad-hoc query tools (such as DB2 Control Center, SQL Developer, SQL*Plus etc). The freestyle nature of these tools allows a single user to wreak havoc on performance with a single inefficient SQL statement. The database is not optimized for such unexpected traffic, so removing this potentially inefficient access can typically improve performance.

Regular Collection and Reporting of Performance Metrics

One of the major practices that successful customers perform is the regular collection of performance statistics, analysis of the statistics and reporting pertinent information to relevant parties within the organization as well as Oracle. Collection of such information can help identify bottlenecks and badly performing transactions, as well as help understand how the product is being used at your site. They offer proof of both good and bad performance and typically allow sites to gauge the extent of any issue.

The product contains a number of collection points in the architecture that are useful for real time and offline collection of performance related data. Information on the collection points is documented in the Performance Troubleshooting Guides whitepapers. Using those guides, decide which statistics are important to the various stakeholders at your site, the frequency of collection and the format of any output to be provided. Use your site's Service Level Agreement (SLA), if it exists, for guidance on what to report.

Respecting Record Ownership

In Oracle Utilities Application Framework V2.x and above, the concept of ownership of records was introduced. A data element was added to indicate the owner of the object and is used to protect key data supplied with the product from alteration or deletion. It is used by the online system to prevent online users accidentally causing critical data failures. The owner is also used by the upgrade tools to protect the data from deletion.

The ownership of the record determines what you can do with that record:

Framework - If the record is owned by Framework then implementation teams cannot alter or delete the record from the database as it is deemed critical to the running of the Framework. This is usually meta-data deemed important by the Framework team. For example the user SYSUSER is owned by the Framework.

Product - If the record is owned by the product (denoted by the product name or Base) then some changes are permitted but deletion is not, as the record is necessary for the operation of the product. The amount of change permitted will vary according to the object definition.


Customer Modification - If the record is owned by Customer Modification then the implementation has added the record. The implementation can change and delete the record (if it is allowed by the business rules).

Basically you can only delete records that are owned by Customer Modification.

It is possible to alter or delete the records at the database level, if permitted by database permissions, but doing this will produce unexpected results so respect the ownership of the records.

Backup of Logs

By default, the product removes existing log files from $SPLSYSTEMLOGS (or %SPLSYSTEMLOGS% on Windows platforms) upon restart. This default behavior may not be desirable for effective analysis as the logs disappear.

To override this behavior the following needs to be done:

A directory needs to be created to house the log files. Most sites create a common directory for all environments on a machine. The size allocation of that directory will depend on how long you wish to retain the log files. It is generally recommended that logs be retained for post analysis and then archived (according to site standards) after processing to keep this directory relevant. Typically customers create a subdirectory under <SPLAPP> to hold the files.

Set the SPLBCKLOGDIR environment variable in the .profile (for all environments) or $SPLEBASE/scripts/cmenv.sh (for individual environments) to the location you specified in the first step. For Windows platforms, the environment variable can be set in your Windows profile or using %SPLEBASE%/scripts/cmenv.cmd.

Logs will be backed up at the specified location in the format <datetime>.<environment>.<filename>, where <datetime> is the date and time of the restart, <environment> is the id of the environment (taken from the SPLENVIRON environment variable) and <filename> is the original filename of the log.

Once the logs have been saved, you must use log retention principles to manage the logs under SPLBCKLOGDIR to meet your site's standards. Most sites archive the logs to tape or simply compress them after post processing the log files (see Post Process Logs for more details on post processing).
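The naming convention above can be illustrated with a short sketch. The environment id DEV01 and log name are placeholders, and SPLBCKLOGDIR points at a temporary directory purely so the example is self-contained:

```shell
# Placeholder values for illustration; set these as described above in practice.
SPLBCKLOGDIR=$(mktemp -d)
SPLENVIRON=DEV01
DATETIME=$(date +%Y%m%d.%H%M%S)

# Stand-in for a real product log file.
echo "sample log content" > myapp.log

# Back the log up using the <datetime>.<environment>.<filename> convention.
cp myapp.log "$SPLBCKLOGDIR/$DATETIME.$SPLENVIRON.myapp.log"
ls "$SPLBCKLOGDIR"
```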

Post Process Logs

The logs written by the various components of the product provide valuable performance and diagnostic information. Some sites have designed and developed methods to post process those logs to extract important information and then report on it to relevant parties.

If the logs are retained by your site (see Backup of Logs for details), then consider post processing the logs on a regular basis before they are archived or deleted permanently. One approach is to extract the important information from the logs and load the extracted data into an analysis repository for regular and trend reporting. The diagram below illustrates the process.


Figure 2 – Post Processing Logs

Details of the logs written by the product are documented in the Performance Troubleshooting Guide. Use that guide to determine what data to extract from the logs for post processing.
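As a trivial illustration of post processing, the sketch below totals records per severity so the counts could be loaded into an analysis repository. The log format here is invented; adapt the field positions to the actual layout documented in the Performance Troubleshooting Guide:

```shell
# Invented sample log for illustration; real logs follow the product's layout.
cat > sample.log <<'EOF'
2011-06-01 10:00:01 INFO  service started
2011-06-01 10:05:12 ERROR connection refused
2011-06-01 10:06:30 WARN  slow response
2011-06-01 10:09:55 ERROR connection refused
EOF

# Count records per severity (field 3 in this invented format).
awk '{count[$3]++} END {for (s in count) print s, count[s]}' sample.log | sort
```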

Check Logs For Errors

One of the most important tasks for a site is to regularly track errors output into logs. Whenever an error occurs in the product, an error record is written to the appropriate log for analysis. Some sites regularly check these logs for errors and, using the information in the log, address the error condition.


Figure 3 – Filtering Logs

Viewing and checking for errors on a regular basis can detect trends and common problems and quickly reduce the number of errors that occur. The Performance Troubleshooting Guide outlines the logs and the error conditions contained within them.

Optimize Operating System Settings

One of the most important configuration settings for the product is the operating system itself. The Installation Guide provided with your product highlights that the operating system parameters MUST be set to optimal values for the product to perform well. Some sites have experienced large improvements in performance by heeding this advice; sites that decided to ignore it experienced poor performance until the settings were corrected.

Typically, the optimization of the operating system is performed during the implementation and uses the following principles:


The value of an individual operating system setting is the maximum value required by any product on that machine. For example, if Oracle or DB2 is installed on a machine, the values for those products are typically used. Settings chosen in this way are usually sufficient for the other products on that machine.

If the machine is dedicated for a particular product or tier, then refer to the documentation in the installation guide and the particular vendor's site for further advice on setting up the operating system in an optimal state.

Optimize Connection Pools

One of the settings that will affect performance is the size of the connection pools at each layer in the architecture. Insufficient pool sizes can cause transaction queues and unnecessary delays. Conversely, setting the pool sizes too high can cause higher than usual resource usage on a machine, also adversely affecting performance, so a balance needs to be struck.

During the implementation the size of the connection pools is determined and configured (with relevant growth tolerances) depending on the usage patterns and expected peak/normal traffic levels. The goal, typically, is to have enough connections available at normal traffic levels to minimize queuing and also have the right tolerances to cater for any expected peak periods. Therefore, it is recommended:

Set the number of initial connections to the normal number of connections expected. Remember this is not the number of users that are connecting but the expected number of concurrent connections under normal load.

Set the tolerances for pool growth (usually a maximum pool size and a connection increment) to cater for the peak load expected at any time. This tolerance will have to be tracked to determine the optimal level. Do not be tempted to set it to a very large value, as memory and network bandwidth calculations are usually dependent on the values specified and wastage of resources needs to be minimized.

The product has up to three connection pools to configure:

Client connections and Business Server connections – These are the number of active connections supported on the Web Application Server from the client machines. Remember that in an OLTP product (such as this product) the number of connections allocated is always less than the number of users on the system; it needs to be sufficient to cater for the number of actively running transactions at any given point in time. In Oracle Utilities Application Framework V2.2 and above, it is possible to separate the Web Application Server and Business Application Server. If this configuration is used then it is recommended that the Business Application Server connection pools be set to the same values as the Web Application Server connection pools. Refer to Configuring the Client Thread Pool Size for more information about pool sizing.


Note: The Client connections and Business Server connections are managed within the J2EE Web Application Server software.

Database connections – These are the number of pooled connections to the database. The Framework holds these connections open so that the overhead of opening and closing connections is minimized. For Version 2.x of the product, the number of connections allocated is dictated in each individual web application's hibernate.properties file using the c3p0 connection pool (in Oracle Utilities Application Framework V4.0 the connection pooling is handled by the Universal Connection Pool (UCP)).

The figure below illustrates the connection pools available for each version of the Oracle Utilities Application Framework:

[Figure: For V2.x — Client, Client Connections, Web Application Server, Business Application Server, Database Connections (via Hibernate/c3p0), Database Server. For V2.2 and above — Client, Client Connections, Web Application Server, Business Server Connections, Business Application Server, Database Connections (via Hibernate/c3p0/UCP), Database Server.]

Figure 4 – Connection Pools by version of the Oracle Utilities Application Framework

Refer to the whitepaper Server Administration Guides (also known as Operations and Configuration Guides) provided with your product for advice on the configuration and monitoring of the connection pools.
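As an illustration of the sizing advice above for a V2.x installation, the sketch below writes sample c3p0 settings: the initial size matches expected normal concurrency, the increment is modest, and the maximum is bounded by the expected peak. The property keys are standard Hibernate/c3p0 settings, but the numbers are invented examples rather than recommendations; consult the Server Administration Guide for your product's actual values:

```shell
# Example values only; size these from your own normal and peak load figures.
cat > hibernate.properties.sample <<'EOF'
hibernate.c3p0.min_size=20
hibernate.c3p0.max_size=60
hibernate.c3p0.acquire_increment=5
hibernate.c3p0.timeout=300
EOF
cat hibernate.properties.sample
```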


Read the available manuals

Note: Due to the ISV licensing of Web Application Servers, there may not be as much detail as for other platforms. Refer to the vendor's site for more detailed information.

The Oracle Utilities Application Framework product includes a set of documentation that should be downloaded with the software and read as part of the implementation and support of the product. The following technical documentation is available with the distribution:

Installation Guide – Installation documentation for the base product including supported platforms and required patches.

Server Administration Guide/Operations And Configuration Guide – Documentation on how to configure and operate the server components of the product.

Developer documentation – This is detailed documentation on the customization aspects of the implementation including standards for implementations. This includes:

Application Logs - List of logs produced by the development and deployment process.

COBOL Programming Standards - List of naming conventions and programming advice used for COBOL modules including algorithms, Maintenance Objects etc.

Note: COBOL documentation only applies to products that have support for COBOL based customizations.

Java Programming Standards - List of naming conventions and programming advice used for java modules including algorithms, Maintenance Objects etc

Java Annotations – Brief overview of the product annotation classes available to the java developer.

Public API – Overview of the API available to the java programmer.

SQL Programming Standards - Documentation of the SQL standards used in the product.

HSQL Programming Standards - Documentation of the Hibernate SQL standards used in the product.

User Interface Design Standards - Documentation about the User Interface standards used by the product.

Database Design Standards - Documentation of the database standards employed in product including naming conventions for tables and columns and layout advice.

System Table Guide - Documentation of the meta data tables used in the development process.

Utilities - Documentation of the other development utilities used by the SDK.


Development Overview - An introduction to the development process and internals of product.

Packaging Utilities - Documentation of the tools provided to package custom builds

Key Generation – Overview of the routines and tables used for generation of random keys in the product.

Application Workbench Overview – An overview of the Application Workbench component of the SDK.

User Guide – A developer's cookbook and user's guide to the SDK.

Utilities Documentation – Detailed guides to the various tools supplied with the product, including:

Background Processing – Details of all the background processes available with the product.

Reports – Details of the reporting interface available with the product including installation of the algorithm and configuration of the reporting interface.

CTI/IVR Integration – An overview of the installation, capabilities and configuration of the CTI/IVR integration components delivered with the product.

Framework/System Wide Standards – An overview of the various UI standards employed by the product.

Application Security – An overview of the authorization security model used in the product including guidelines for configuration.

User Interface Tools – An overview of the meta data tools available for the user interface including menus, navigation keys etc.

Zone Configuration – An overview of how to configure the zones and portals supplied with the product.

Database Tools – An overview of the Meta data tools available for maintenance object, table and field definition including auditing.

Algorithms – An overview of all the algorithms supplied with the product.

Scripting – Details of the Business Process Scripting engine supplied with the product including configuration.

Application Viewer – Overview of the maintenance and operation of the Data Dictionary and code view supplied with the product.

XAI – Detailed overview and configuration of the Web Services/XML Application Integration component of the product.


LDAP Import – Detailed overview of the LDAP import function supported by the product to synchronize LDAP information with the authorization information stored in the product.

Batch Operations and Configuration Guide - Details of the configuration settings and common operations for the batch component of product.

Technical Documentation Set Whitepapers available

Apart from the product based documentation, there are a number of whitepapers that provide specialist and supplemental information for use during and after implementation. The table below lists the currently available technical documentation as well as the Knowledge Base Id within My Oracle Support where each document resides:

TABLE 5 – TECHNICAL WHITE PAPERS

DOC ID

DOCUMENT TITLE

CONTENTS

  • 559880.1 ConfigLab Design Guidelines

A whitepaper outlining how to design, setup and monitor a ConfigLab solution for an implementation. This is a companion document to the Software Configuration Management Series.

  • 560382.1 Performance Troubleshooting Guideline Series

A series of whitepapers outlining the tracking points available in the architecture for performance and a troubleshooting guide based upon common problems.

  • 560401.1 Software Configuration Management Series

This series of documents outlines a set of generic processes (that can be used as part of the site processes) for managing code and data changes. This series includes documents that cover concepts, change management, defect management, release management, version management, distribution of code and data, management of environments and auditing configuration.

  • 773473.1 Oracle Utilities Application Framework Security Overview

A whitepaper outlining the security facilities in the Oracle Utilities Application Framework.

  • 774783.1 LDAP Integration for Oracle Utilities Application Framework based products

A whitepaper outlining the common process for integrating an external LDAP based security repository with the framework.

  • 789060.1 Oracle Utilities Application Framework Integration Overview

A whitepaper outlining all the various common integration techniques used with the product (with case studies).

  • 799912.1 Single Sign On Integration for Oracle Utilities Application Framework based products

A whitepaper outlining a generic process for integrating an SSO product with the Oracle Utilities Application Framework.

  • 807068.1 Oracle Utilities Application Framework Architecture Guidelines

A whitepaper outlining the different variations of architecture that can be considered. Each variation will include advice on configuration and other considerations.

  • 836362.1 Batch Best Practices for Oracle Utilities Application Framework based products

A whitepaper outlining the common and best practices implemented by sites all over the world relating to batch.

  • 856854.1 Technical Best Practices V1 Addendum

Addendum to Technical Best Practices for Oracle Utilities Application Framework Based Products containing only V1.x specific advice.

  • 942074.1 XAI Best Practices

This whitepaper outlines the common integration tasks and best practices for the Web Services Integration provided by the Oracle Utilities Application Framework.

  • 970785.1 Oracle Identity Manager Integration Overview

This whitepaper outlines the principles of the prebuilt integration between Oracle Utilities Application Framework Based Products and Oracle Identity Manager used to provision user and user group security information.

  • 1068958.1 Production Environment Configuration Guidelines

This whitepaper outlines common production level settings for Oracle Utilities Application Framework products.

  • 1177265.1 What's New in Oracle Utilities Application Framework V4?

This whitepaper outlines the changes since the V2.2 release of Oracle Utilities Application Framework.

  • 1290700.1 Database Vault Integration

This whitepaper outlines the Database Vault integration available with Oracle Utilities Application Framework V4.1 and above.

  • 1299732.1 BI Publisher Integration Guidelines

This whitepaper outlines some guidelines for integration available with Oracle BI Publisher for reporting.

  • 1308161.1 Oracle SOA Suite Integration

This whitepaper outlines the integration between Oracle SOA Suite and the Oracle Utilities Application Framework.

  • 1308165.1 MPL Best Practices

Addendum to the XAI Best Practices focusing on the Multi- purpose Listener.

  • 1308181.1 Oracle WebLogic JMS Integration

This whitepaper outlines the integration between Oracle WebLogic JMS and the Oracle Utilities Application Framework for Oracle Utilities Application Framework V4.1 and above. These features are also available for Oracle Utilities Application Framework V2.2 via patches.

This documentation is updated regularly with each release of product with new and improved information and advice. Announcements of updates to whitepapers may be tracked via http://blogs.oracle.com/theshortenspot or http://www.twitter.com/theshortenspot.

Implementing Industry Processes

Implementing a product such as this can mean that an IT organization has to adopt new processes to cater for the new product in its portfolio of applications. This is not unique to this product: any new product implemented into an IT portfolio requires not only business process changes but also IT process changes.

In the IT industry at the moment, most software application vendors are realizing that implementing a product is not simply configuration; there is some change management that needs to be performed with the IT group. Luckily, the industry has started to adopt a standard framework that helps define an IT business and the processes necessary to run that business. This framework is called the IT Infrastructure Library.

Oracle Utilities Application Framework - Technical Best Practices

IT Infrastructure Library (ITIL) is a set of consistent and comprehensive documentation of best practice for IT Service Management. Used by many hundreds of organizations around the world, a whole ITIL philosophy has grown up around the guidance contained within the ITIL books and the supporting professional qualification scheme. ITIL consists of a series of books giving guidance on the provision of quality IT services, and on the accommodation and environmental facilities needed to support IT. ITIL has been developed in recognition of organizations' growing dependency on IT and embodies best practices for IT Service Management.

The ethos behind the development of ITIL is the recognition that organizations are becoming increasingly dependent on IT in order to satisfy their corporate aims and meet their business needs. This leads to an increased requirement for high quality IT services. ITIL provides the foundation for quality IT Service Management. The widespread adoption of the ITIL guidance has encouraged organizations worldwide, both commercial and non-proprietary, to develop supporting products as part of a shared "ITIL Philosophy".

For more information about ITIL refer to http://www.oracle.com/itil

Using Automated Test Tools

Some sites around the world use third party testing tools for performance and regression testing. While the product is open in terms of the standards it uses, not all test tools can simulate the exact expected traffic. In choosing an automated testing tool to use with the product, the following must be supported:

Support for HTTP – The automated test tool must be able to trap HTTP traffic, as this is the traffic used by the product. If the tool supports HTTPS, and you intend to use the HTTPS protocol, be careful, as support for HTTPS varies greatly between testing tools.

JSP Support – The product uses JSP coding to perform most functions. A tool that can leverage this technology will enable screens to be recognized.

Support simulation of IE caching – The product client utilizes the Internet Explorer cache to locally hold an image of the screen for performance reasons. The automated test tool needs to be able to simulate this behavior, otherwise results will not reflect reality.

Support Pop up screens – The product utilizes pop up windows for some lists and some searches as well as confirmation and error messages. The automated test tool needs to be able to support the use of these to adequately simulate product transactions.

Valid calls – Ensure that the test tool simulates valid calls to the product. A valid call is a call that the browser user interface issues to the web server or a call that the XAI component will accept. An invalid call sent by a test tool to the product may produce unpredictable results. Check that EVERY call is valid (try it with the browser user interface to verify the call) and fix any invalid calls.

The following products have been used with the product at customer sites:


Custom Environment Variables or JAR files

Implementations of the product sometimes use third party Java classes or third party tools to perform specialist functions. Sometimes these tools require additional configuration settings that can be integrated into the infrastructure provided by the product. For example, if you use a third party jar file that is called by the product, then you will need to add it to the CLASSPATH to ensure it is picked up at runtime.

Luckily, there is a feature that allows custom environment variables settings and other commands to be run after the splenviron.sh script (or splenviron.cmd on Windows) has been executed.

To do this, create a cmenv.sh script (or cmenv.cmd on Windows) in the $SPLEBASE/scripts directory (%SPLEBASE%\scripts on Windows) with the commands you want to execute. For example, suppose an implementation uses AXIS2 jar files to call web services. Place the AXIS2 jar files in a central location (e.g. /axis/lib in this example) and create the cmenv.sh/cmenv.cmd script with the lines:

export CLASSPATH=/axis/lib/axis.jar:$CLASSPATH

or

set CLASSPATH=c:\axis\lib\axis.jar;%CLASSPATH%

When the splenviron.sh script (or splenviron.cmd on Windows) runs, it looks in the scripts directory for the cmenv.sh script (or cmenv.cmd on Windows) and, if present, executes it.

In addition, it is possible to do this WITHOUT adding the cmenv.sh script (or cmenv.cmd on Windows). Set the CMENV environment variable to the location of a script containing the above commands BEFORE running any command, including the splenviron.sh script (or splenviron.cmd on Windows).

The CMENV facility is for global changes, as it applies across all environments, while the cmenv.sh/cmenv.cmd solution is per environment. You can use both; CMENV is run first, then cmenv.sh/cmenv.cmd.
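To make this ordering concrete, the following is a minimal sketch of how the two hooks could be applied. The function name apply_custom_env and all paths are illustrative assumptions, not part of the product; the sketch only demonstrates that a global CMENV script, if set, runs before the per-environment scripts/cmenv.sh.

```shell
# Illustrative sketch only: mimics the order in which splenviron.sh
# applies custom settings. The name apply_custom_env is hypothetical.
apply_custom_env() {
  # Global hook: CMENV points at a site-wide settings script
  if [ -n "$CMENV" ] && [ -f "$CMENV" ]; then
    . "$CMENV"
  fi
  # Per-environment hook: scripts/cmenv.sh under this environment
  if [ -f "$SPLEBASE/scripts/cmenv.sh" ]; then
    . "$SPLEBASE/scripts/cmenv.sh"
  fi
}
```

Because the per-environment script is sourced last, any setting it makes overrides the same setting made by the global CMENV script.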

Note: It is possible, using this technique, to manipulate any environment variable used by the product but this is not recommended.


Help and AppViewer can be used standalone

The Help and AppViewer components may be used in standalone mode (a.k.a. offline mode). This can be handy for developers, designers and architects who wish to access up-to-date information without the need to connect to a live copy of the product at their site.

Under the splapp/applications directory on V2.x/V4.x or cisdomain/applications on V1.x, there are two directories named help and appViewer. These contain the online help and AppViewer application and data. Copy these directories to your desired target machine (such as a shared drive, web server or your laptop).

Note: On some platforms the directories are contained within WAR files named help.war and appViewer.war; these will need to be decompressed on the target platform using an appropriate utility such as jar from the Java SDK or 7Zip (or similar).

To operate the applications in standalone mode you will need to open the following files in your web browser:

appViewer.html – Application Viewer startup file. It is also possible to reconfigure the behavior of the standalone copy by altering the config.xml file located in the config subdirectory of the AppViewer.

SPLHelp.html – Help startup file, located in the language subdirectory (e.g. ENG for English).

Note: The AppViewer and Help applications are only supported in the browsers supported by the product.

Re-Register only when necessary

Note: Not all Oracle Utilities Application Framework products have the supplied ConfigLab or Archiving functionality.

As part of the ConfigLab definition process it is necessary to register the environments to be used by ConfigLab. The registration process creates remote synonyms (the database technology used to achieve this will vary from database type to database type) and an environment reference in the database.

One of the processes that must be performed is re-registering the environments after upgrades (product as well as customization upgrades), as the upgrade may remove or add a table or view and this needs to be reflected in the synonyms.

The registration process does not need to be executed if the product upgrade or customization upgrade does not add or remove any tables or views; skipping it will save time in the upgrade process. If there are no database changes to reflect, the re-register process simply removes all synonyms and rebuilds exactly the ones removed, which is a waste of time. In short, only re-register if there are database table or view additions or deletions.

The installation procedure creates a number of default userids with default passwords. To reduce security risk, change the default passwords and reflect that change in the configuration.


Secure default userids

There are a number of default users (and associated default passwords) supplied with the installation of the product. It is recommended that the default users and their passwords be altered according to site security standards. The table below lists the default users supplied with the products:

TABLE 6 – COMMON SUPPLIED USER CREDENTIALS

USERID – SCOPE – COMMENTS

SPLADM – Default Database Administrator account – Owner of the database. Database Administrators are the only valid users of this account. This account is created during the database creation process.

SPLREAD – Default Reporting account – This account is used by Archiving, ConfigLab and Reporting. Only available on Oracle database installations. This account is created during the database creation process.

splsys – UNIX administration account – Treat this user as you would treat root or an Administrator account.

SPLUSER – Default Application account – This account is used for all application database access. This account is created during the database creation process.

SYSUSER – Default initial framework user provided with the product – This user needs to be available to add other users. This user needs to be defined to the Web Application Server on install. The password will reside in the repository defined in the Web Application Server (usually LDAP).

system – Default user for the Web Application Server console – This is for Oracle WebLogic implementations only.

WEB – Web Self Service default user – See SYSUSER.

XAI – Default XAI userid (some versions) – See SYSUSER.

Note: There are other userids supplied by products used in conjunction with this product; refer to the documentation for those products for details of these users.

Consider different userids for different modes of access

Note: It is not possible to configure the product to use different database accounts for access. All modes of access share the relevant pool of database connections as a single database user (usually SPLUSER).

In Oracle Utilities Application Framework version 4.0.1 and above the actual end userid is available as the CLIENT_IDENTIFIER on Oracle database sessions.


By default the application is configured to use SYSUSER, SPL or XAI to access the product for online, XAI and background processing respectively. This means any audit or staging records are associated with a common userid. Some implementations have created additional userids to use as a filter for reporting, traceability and auditing purposes. The following guidelines may be used in this area:


Create a different userid for XAI transactions. This allows tracking of XAI within the architecture. It is also possible to assign each XAI transaction a different userid, as the userid is passed as part of the transaction, but most customers consider this overkill.

Create a different userid for each background interface. This allows security and traceability to be tracked at a lower level.

Create a generic userid for mainstream background processes. This allows tracking of online versus batch initiation of processes (especially To Do, Case and Customer Contact processing).

Note: Remember that any product user must be defined to the product as well as the authentication repository.

Don’t double audit

The product has an auditing facility that is soft configured. The facility can be enabled by configuring the auditing parameters (location of the audit data, audit rules etc) against the meta data definitions of the tables. This ensures that any online or XAI updates are audited according to those rules. Auditing is used to track online changes to critical entities.

The financial component of product already has a separate auditing facility, as all customers generally require it. Any changes to financial information such as payments, adjustments, bills etc are registered in the Financial Transaction tables. Therefore enabling auditing on those entities is not required and constitutes double auditing (i.e. auditing information is stored in two places).

While the impact of double auditing may be storage related, enabling auditing on bills, for example, can have a performance impact online. Customers with large numbers of bill segments per bill (i.e. several hundred) have experienced a negative performance impact during online billing when double auditing is enabled on financial entities. This does not affect batch performance, as auditing is not used in batch.

Use Identical Machines

The flexibility of the technology used by the product allows different hardware to be mixed and matched in a configuration. While this may be attractive and allow for some innovative solutions, it makes overall manageability and operations harder. Hence, it should be avoided.

Having identical hardware allows for ease of stocking spare parts, better reproducibility of problems (both software and hardware), and reduces the per platform testing cost. This cost, in many cases, will surpass the savings from reusing existing disparate hardware.

Regularly restart machines

It is generally a good practice to restart servers periodically. This recovers slow memory leaks, temporary disk space build-up, or other hidden problems that may manifest themselves only when the server is up for such a long duration. This is a simple way to avoid unexpected or unexplained failures.


Most hardware vendors have recommendations on optimal time intervals to restart machines. Some vendors even "force" the issue for maintenance reasons. Check with your vendor for specifics for your platform.

Avoid using direct SQL to manipulate data

Note: Issuing SQL data manipulation language (DML) statements other than SELECT statements directly against base tables can cause data integrity to be compromised and can invalidate your product warranty. All data update access should be through maintenance objects, which ensure data integrity is maintained.

Unless the outcome can be verified as correct, you should not use ANY direct SQL statement against the product database, as you may corrupt the data and prevent the product from operating correctly.

All data maintenance and data access in the product is located in the Maintenance Objects. The Maintenance Objects validate ALL changes against your site's business rules and the rules built into the product. If you use the objects to manipulate the data then integrity is guaranteed because:

All the validations including business rules, calculations and referential integrity are contained within the Maintenance Objects.

The Maintenance Object performs a commit when all validations are successful. If any validation fails, the whole object is rolled back to a consistent state. In background processing, a commit is performed after a number of Maintenance Objects have been processed (known as the Commit Interval). At that point the last commit point is registered on the Batch Control for restarting purposes. If the background process fails between commit points, the database is rolled back to the last commit point.

All access modes (online, XAI, background processing) from code supplied with the product use the Maintenance Objects for processing. This means that integrity is guaranteed across all modes. Any customizations (algorithms etc) using the Oracle Utilities SDK should also use the Maintenance Objects. Using incorrect SQL may violate any of the validations and even make the system unusable.

If you have to manipulate data within the product, use one or more, of the following provided methods:

The browser user interface.

XML Application Integration.

Conversion Toolkit.

Software Development Kit.

Minimize Portal Zones not used


In the Oracle Utilities Application Framework, portals were introduced to allow sites to decide which zones, and in what sequence, should appear for different user groups. For performance reasons, it is recommended that you configure portal preferences to collapse zones that are not needed every time a portal is displayed.

The system does not perform the processing necessary to build collapsed zones until a user expands the zone, so configuring them as initially collapsed improves response times. This is especially relevant for the To Do zones, which may take a while to build if the number of To Do records is excessive.

Routine Tasks for Operations

After the implementation of the product has been completed there is a common set of tasks that IT groups perform to maintain the system. The table below lists these tasks:

TABLE 7 – ROUTINE TASKS

TASK – COMMENTS

Perform Backups – Perform the backup of the database and file system using the site procedures and the tools designated for your site.

Post Process Logs – Check the log files for any error conditions that may need to be addressed. Refer to Post Process Logs and Check Logs For Errors for more details.

Process Performance Data – Collate and process the day's performance data to assess against any Service Level targets. Identify any badly performing transactions.

Perform Batch Schedule – Execute the batch schedule agreed for your site. This will include overnight, daily, hourly and ad-hoc background processes.

Rebuild Statistics – DB2 and Oracle require the database statistics for the product schemas to be rebuilt on a regular basis so that SQL access is optimized. At DB2 sites, a rebind is also required to reflect the changes in the execution plan/packages.

File Cleanup – On a regular basis, the output files from the background processes and logs will need to be archived and removed to minimize disk space usage.

Archive Data not required – The Oracle Utilities Application Framework features an inbuilt archiving facility that can transfer transaction data no longer required for online processing to another environment or a file, or simply delete it. Refer to Archiving for more details.

Run Cleanup Batch Jobs – There are a number of background processes that remove staging records that have already been successfully processed. Refer to "Removal of Staging Records" for more details.

Note: The tasks listed above do not constitute a comprehensive list of what needs to be performed. During the implementation you will decide what additionally needs to be done for your site.
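As an illustration of the File Cleanup task above, a simple sketch is shown below. The directory, file pattern and retention period are illustrative assumptions only, not product settings; adjust them to your site standards, and archive files before removal if your site requires it.

```shell
# Illustrative sketch: remove batch output/log files older than a
# retention window. LOGDIR and RETENTION_DAYS are assumed names.
LOGDIR=${LOGDIR:-/spl/DEMO/logs}
RETENTION_DAYS=${RETENTION_DAYS:-30}

cleanup_logs() {
  # Delete regular .log files not modified within the retention window
  find "$LOGDIR" -name '*.log' -type f -mtime +"$RETENTION_DAYS" -exec rm -f {} +
}
```

A script like this is typically scheduled to run after the batch schedule completes.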

Typical Business Day

One of the patterns experienced at sites is the notion of a common definition of a business day. Typically during the implementation the business day is defined for planning purposes. It defines when the call center is at peak or off-peak, when background processing can be performed, and when backups are performed during the business day.

The figure below illustrates a simplified model of a typical customer business day:


Figure 5 – Example Typical Business Day (diagram: a 24-hour timeline showing online peak and off-peak periods, overnight and daily/ad-hoc/hourly batch windows, backup windows and continuous monitoring)

Note: The above diagram is for illustrative purposes only and could vary for your site.

Typically a business day contains the following elements:

There is a peak online period where the majority of call center business is performed. Typically this is performed in business hours varying according to local custom.

There is a call center off peak period where the volume of call center traffic is greatly reduced compared to the peak period. Typically in call centers, which operate 24x7, this represents overnight and weekends. At this time the call center is reduced in size (usually a skeleton shift). Some sites do not operate in non-peak periods and rely on automated technology (e.g. IVR) to process transactions such as payments etc.

Backups are performed either at the start or at the end of the peak period. The decision is based on the risk of background processing failure and its impact on online processing. The product-specific background processes can be run at any time, but avoiding them during peak time will maximize the computing resources available for the successful processing of call center transactions. A backup at the end of the peak period is the most common pattern amongst product customers.

Background processes are run at both peak and off-peak times. The majority of the background processing is performed at off-peak times to maximize the computing resources available for its successful completion. The background processing run at peak times is usually limited to checking ongoing call center transactions for adherence to business rules and processing interface transactions ready for overnight processing.

Monitoring is performed throughout both peak and off peak times. The monitoring regime used may use manual as well as automated tools and utilities to monitor compliance against agreed service levels. Any non-compliance is tracked and resolved.

The definition of the business day for your site is crucial for scheduling background processing and setting monitoring regimes appropriate to the expected traffic levels.

Login Id versus Userid

Note: This facility is available in Oracle Utilities Application Framework V4 and above only.


In past releases of the Oracle Utilities Application Framework, the userid used to log in was restricted to 8 characters in length. In Oracle Utilities Application Framework V4 and above, it is possible to use a user identifier of up to 256 characters in length.


In Oracle Utilities Application Framework V4 and above the concept of a Login Id is supported. This attribute is used by the framework to authenticate the user. For backward compatibility, the 8 character userid field is still used internally for auditing purposes. Therefore both the Userid and Login Id should be populated; they can hold different or identical values.

The Login Id can be set manually, via Oracle Identity Manager or set in a class extension to auto generate a value.


Figure 6 – Login Id

Hardware Architecture Best Practices

Note: There is a more detailed discussion of effective architectures in the Oracle Utilities Application Framework Architecture Guidelines whitepaper. Refer to that document for further advice.

The product can be run on various combinations of hardware-based architectures. When choosing an architecture that is best suited to a site, there are a number of key factors that must be considered:

Cost – When deciding on a preferred architecture, the total cost of the machine(s) and infrastructure needs to be taken into consideration. This should include the ongoing costs of maintenance as well as power costs.

IT Maintenance Effort – When deciding a preferred architecture, the manual or automated effort in maintaining the hardware in that architecture needs to be factored into the solution.

Availability – One of the chief motivations for settling on a multi-machine architecture is requiring the architecture to support high availability. When deciding a preferred architecture, the tolerance and cost of availability needs to be factored into the solution.

Single Server Architecture

If the site is cost sensitive and/or the availability requirements allow it, then having all the architecture on a single machine is appropriate. This is known as the single server architecture. This configuration is popular with some sites as:

The cost of the hardware can be minimal (or at least very cost effective).

Maintenance costs can be minimized with the minimal hardware.

Virtualization software (typically part of the operating system or third party virtualization software) can be used to partition the machine into virtual machines.

The one issue that makes this solution less than ideal is the risk of unavailability due to hardware failure. Customers that choose this solution typically address this shortcoming by buying a second machine of similar size and using it for failover and disaster recovery as well as non-production. In essence, if the primary hardware fails then the backup machine assumes the responsibility for production until the hardware fault is resolved. In this case, additional effort is required to keep the secondary machine in synchronization with the primary.

The diagram below illustrates the single server architecture:

Figure 7 – Example Single Server Architecture (diagram: Browser Client connecting to a single machine hosting the Web Application Server, Business Application Server and Database Server)

Simple Multi-tier Architecture

One of the variations on the single server architecture is the "simple multi-tier architecture". In this hardware architecture, the database server and Web Application Server/Business Application Server are separated on different machines. For product V1.x customers, you can also separate the Web Application and Business Application Servers.

This is chosen by customers who want to optimize the hardware for the particular tier (settings and size of machine) and therefore separate the maintenance efforts for each server. For example, Database Administrators need only access the Database Server to perform their duties and set the operating system parameters optimized for the database.

Unfortunately this solution can have a higher cost than the single server solution and still does not address the unavailability of any machine in the architecture. Customers that have used this model adopt a similar solution to the single server architecture (duplicate secondary machines at a secondary site), but also have the option of making both machines in the architecture the same size and shifting roles when availability is compromised. For example, if the database server fails, the Web Application Server can be configured to act as a combination of the Database Server and Web Application Server.

The figure below illustrates the Simple Multi-Tier Architecture:

Figure 8 – Variations of the Simple Multi-Tier Architecture (diagram: Browser Client connecting either to a combined Web Application/Business Application Server machine with a separate Database Server, or to separate Web Application Server, Business Application Server and Database Server machines)

Machines in this architecture can be the same size or different sizes depending on the cost/benefits of the various variations. Typically customers use a smaller machine for the Web Application Server as compared with the database server.

Multiple Web Application Servers

To support higher availability for the product, some sites consider having multiple Web Application Servers. This allows online users to be spread across machines and, in the case of a failure, diverted to the machine that is available. To achieve this, the site must use a load balancer (see the "Load balancers" discussion later in this document). At the time of failover the load balancer will redirect traffic to the available server. This is made possible as the product is stateless.

The Web Applications Servers are either clustered or managed. Refer to the discussion in the Clustering or Managed? section of this document for advice.

This architecture is quite common as it offers flexibility: one of the Web Application Servers can be dedicated to batch processing in non-business hours, making the architecture more cost effective. Typically the Web Application Server software is shut down to allow batch processing to use the full resources of the machine while allowing users (usually a small subset) to process online transactions.

The only drawbacks with this solution are a potentially higher cost than a multi-tier solution and the potential impact of database unavailability. Customers that use this architecture overcome the potential unavailability of the database by either using a secondary site to act as the failover or using one of the Web Application Servers in a failover database server role. The latter is less common, as most customers find it more complex to configure, but it is a possibility with this architecture.

The figure below illustrates the Multi-Tier Architecture:


Figure 9 – Example Multi-Application Server Architecture (diagram: Browser Client connecting through a Load Balancer to two Web Application Servers, backed by a Business Application Server and Database Server)

Machines in this architecture can be the same size or different sizes depending on the cost/benefits of the various variations. Typically customers use a smaller machine for the Web Application Server as compared with the database server.

High Availability Architecture

The most hardware intense solution is where all the tiers in the architecture have multiple machines for high availability and distribution of traffic. The solution can vary (number of machines etc) but have the following common attributes:

There is no single point of failure. There is redundancy at all levels of the architecture. This excludes redundancy in the network itself, though this is typically out of scope for most implementations.

The number of servers will depend on segmentation of the traffic between call centers, non-call centers, interfaces and batch processing. It is possible to reuse existing servers or set up dedicated servers for different types of traffic.

Availability can be managed with either hardware based solutions, software based solutions or a valid combination of both.

The number of users will dictate the number of machines to some extent. Experience has shown that a large number of users tends to be better served, from a performance and availability point of view, by multiple machines. Refer to the What is the number of Web Application instances do I need? section for a discussion on this topic.

The Web Applications Servers are either clustered or managed. Refer to the discussion in the Clustering or Managed? section of this document for advice.

Database clustering is typically handled by the clustering or grid support supplied with the database management system.

This solution represents the highest cost from both a hardware and a maintenance perspective. Historically, customers with large volumes of data or specific high availability requirements have used this solution successfully.

The figure below illustrates the High Availability Architecture:

[Figure: Browser Client → Load Balancer → Web Application Servers (multiple) → Load Balancer → Business Application Servers (multiple) → Database Server/Cluster]

Figure 10 – Example High Availability Server Architecture

Failover Best Practices

Failover occurs when a server in your architecture becomes unavailable due to hardware or software failure. Immediately after the failure, the active components of the architecture route transactions around the unavailable component to an alternative or secondary component (possibly on another site) to maintain a level of availability. This routing can be done automatically through the use of high availability software/hardware or manually by operators.

The Oracle Utilities product architecture supports failover at all tiers of the architecture, using either hardware or software based solutions. Failover solutions can be varied but a few principles have been adopted successfully by existing customers:

Failover solutions that are automated are preferable to manual intervention. Depending on the hardware architecture used the failover capability can be automated.

Availability goals play a big part in the extent of a failover solution. Sites with high availability targets tend to favor more expensive, comprehensive hardware and software solutions. Sites with lower availability (or no goals) tend to use manual processes to handle failures.

Failover is built into the software used by the products (though it may entail an additional license from the relevant vendor). For example, Web Application Server vendors have inbuilt failover capabilities including load balancing, which is popular with customers.

Hardware vendors will have failover capabilities at the hardware or operating system level. In some cases, it is an option offered as part of the hardware. Sites use the hardware solution in combination with a software based solution to offer protection at the hardware level. In this case, the hardware solution will detect the failure of the hardware and work in conjunction with the software solution to route the traffic around the unavailable component.

Failover is made easier to implement for the product as the Web Application is stateless. The users only need connection to the server while they are actively sending or receiving data from the server. While they are inputting data and talking on the phone they are not consuming resources on the machine. For each transaction the infrastructure routes the calls across the active components of the architecture.

At the database level, the common failover facility used is the one provided by the database vendor. For example, Oracle database customers typically implement RAC. Failover configuration at the database is the least used by existing sites, as the cost of having additional hardware is usually prohibitive (or at least not cost justifiable).

Sites wanting to have failover and disaster recovery but cannot afford both consider a solution which combines both. In this case, the disaster recovery configuration is used as a failover for non-disasters.

For any failover solution to be effective, the site typically analyses all the potential areas of failure in their architecture and configures the hardware and software to cover that eventuality. In some cases, sites have chosen NOT to cover eventualities of extremely low probability. Using hardware Mean Time between Failure (MTBF) values from hardware vendors can assist in this decision.
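The MTBF reasoning above can be made concrete with standard reliability arithmetic. A hedged sketch (the formulas are textbook availability math, not taken from this paper, and the example numbers are invented):

```python
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability of one component: MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

def redundant(*availabilities: float) -> float:
    """Availability of redundant components where any one can carry the load."""
    unavailable = 1.0
    for a in availabilities:
        unavailable *= (1.0 - a)
    return 1.0 - unavailable

# Example: one failure a year (MTBF 8760h) with a 4-hour repair time.
single = availability(8760, 4)      # ~0.99954 (about 4 hours of downtime a year)
paired = redundant(single, single)  # ~0.9999998 with a redundant pair
```

This kind of arithmetic helps decide which low-probability failures are worth covering and which redundancy is worth its cost.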

When designing a failover solution, the following considerations are important:

Determine what the availability goals are for your site.


Determine the inbuilt failover capabilities of the hardware and software that your site is using. This may reduce the cost of implementing a failover solution if it is already in place.

List all the components that need to be covered by a failover solution. Review the list to ensure all aspects of "what can fail?" are covered.

Design your failover solution with all of the above information in mind, automating it (within reason) for your site. Ensure the solution is simple and reuses already available infrastructure to save costs.

Commonly sites use the following failover techniques in the architecture:

TABLE 8 – COMMONLY USED FAILOVER TECHNIQUES

TIER                            COMMON FAILOVER SOLUTION

Network                         Load Balancer (hardware for large numbers of users; software
                                based for others). Consider redundant load balancers for
                                "no single point of failure" requirements.

Web Application Server/         Use inbuilt clustering/failover facilities unless the load
Business Application Server     balancer is doing this. Consider hardware solutions for batch
                                or interface servers.

Database Server                 Use inbuilt failover facilities in the database unless a
                                hardware solution is more cost effective.
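The "route around the failed component" behaviour underlying all of these techniques can be sketched in a few lines. This is an illustrative sketch only; the server names and health-check callable are invented, not part of the product:

```python
def route_request(servers, is_healthy):
    """Return the first healthy server in preference order (primary first),
    emulating the simplest form of failover routing."""
    for server in servers:
        if is_healthy(server):
            return server
    raise RuntimeError("no healthy server available")

# Example: the primary web server is down, so traffic fails over to web2.
down = {"web1"}
target = route_request(["web1", "web2"], lambda s: s not in down)
```

Real load balancers layer health probes, connection draining and session stickiness on top of this core decision.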

Online and Batch tracing and Support Utilities


The Oracle Utilities Application Framework provides a set of utilities to aid in capturing information for support. Refer to My Oracle Support Doc ID 1206793.1 (Master Note for Oracle Utilities Framework Products - Online and Batch tracing and Support Utilities) for details and training on using these utilities to provide critical information to help expedite support requests.

General Troubleshooting Techniques


Whilst the troubleshooting features of the product are documented in detail in the online help, the Performance Troubleshooting Guides and other manuals, there are a number of techniques and guidelines that can be used to help identify problems:

Check the logs in the right order – The log files are usually the best place to look for errors, as any error is automatically logged to them by the product. The most efficient method is to read the logs from the bottom tier up: if the error appears in the lower tiers of the architecture, that is more likely where the error originated [5]. Typically you look for records of type ERROR in the following logs (located in $SPLEBASE/logs/system):

TABLE 9 – COMMON LOG FILES

[5] The theory is that the first place the error occurs is the most likely candidate tier.


LOG FILE            COMMENTS

spl_service.log     Business Application Server log. In some versions of the Oracle
                    Utilities Application Framework this log does not exist, as it is
                    included in spl_web.log. Errors here can be service or database
                    related.

spl_xai.log         Web Services Integration, also known as XML Application Integration
                    (XAI), log. This log is used exclusively by the XAI servlet. More
                    detail can exist in the xai.trc file if tracing is enabled.

spl_web.log         Web Application Server log. This is typically where errors from the
                    browser interface are logged. If errors are repeated from
                    spl_service.log then the issue is not in the Web Application Server
                    software but in the Business Application Server or below.

Note: There are other logs related to the J2EE Web Application Server used; these exist in this directory or under the location specified in the J2EE Web Application Server configuration.
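The bottom-up check described above is easy to script. A minimal sketch using the log names from the table; the $SPLEBASE/logs/system layout is as documented, but the exact record format is an assumption, so the sketch simply searches for the ERROR marker:

```python
import os

# Bottom-up order: Business Application Server first, browser-facing log last.
LOGS = ["spl_service.log", "spl_xai.log", "spl_web.log"]

def first_errors(log_dir, max_per_file=5):
    """Collect the first few ERROR records from each log, lowest tier first."""
    findings = {}
    for name in LOGS:
        path = os.path.join(log_dir, name)
        if not os.path.exists(path):
            continue  # some versions fold spl_service.log into spl_web.log
        errors = []
        with open(path, errors="replace") as fh:
            for line in fh:
                if "ERROR" in line:
                    errors.append(line.rstrip())
                    if len(errors) >= max_per_file:
                        break
        findings[name] = errors
    return findings

# Usage: first_errors(os.path.join(os.environ["SPLEBASE"], "logs", "system"))
```

Because the dictionary preserves the bottom-up order, the first non-empty entry points at the most likely candidate tier.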

First error message is usually the right one – When an error occurs in the product, it can cause other errors. The first occurrence of any error is usually the root cause. This is more apparent when a low-level error occurs that ripples across other processes. For example, if the database credentials are incorrect then the first error will be that the product cannot connect to the database, but other errors will appear as metadata cannot be loaded into various components. In this case, fixing the database error will correct the other errors as well.

Not all errors are in fact errors – The product will issue errors if components are missing but it is able to overcome the issue. For example, if meta-data is missing the system may resort to using default values. In most cases this means the product can operate without incident, but the cause should be resolved to ensure correct behavior.

Note: In some versions, such errors are reported as a WARNING rather than an ERROR.

Tracing can help find the issue – The product includes trace facilities that can be enabled to help resolve the error. This information is logged to the logs above (and other server logs) and can be used for diagnosis as well as for support calls. Refer to Online and Batch tracing and Support Utilities for more information about these tools.

There are usually a common set of candidates – When an error occurs there are a number of typical candidates for causing issues:

Running out of resources – The product uses the resources allocated to it that are available on the machine. If some capacity limit is reached, whether physical (memory or disk space are typical resource constraints) or logical, via configuration such as JVM memory allocations, then the product will report a resource issue. In some cases the product will report the problem directly in the logs, but in other cases it will be indirect. For example, if disk space is exhausted then a log may not be written, which can cause further issues.

Incorrect configuration – If the product configuration files or internal configuration are incorrect for any reason, they can cause errors. A common example of this is passwords that are either wrong or have expired. File paths are also typical settings to check.

Missing metadata – The product is meta-data driven. If the metadata is incorrect or missing then the behavior of the product may not be as expected. This can be hard to detect using the usual methods and typically requires functionality testing rather than technical detective work.

Out of date software – All the software used in the solution, whether part of the product or the infrastructure, has updates, patches and upgrades to contend with. Upgrading to the latest patch level can typically address most issues.

Refer to the Performance Troubleshooting Guides for more techniques and additional advice.

Data Management Best Practices

Once a product has been put into production, one of the issues that needs to be managed is the quantity of data that accumulates over time. While storage is relatively cheap compared to the past, maintaining an optimal amount of storage is both cost effective and maintains a stable level of performance.

Data management techniques used with the products vary according to the types of data stored within the product.

Respecting Data Diversity

One of the most important considerations for a site is to respect the diversity of the data contained in the product being managed. Different types of data require different types of management. Requirements for managing data are typically driven by business practices, industry practices or even government legislation (typically driven by tax requirements).

Product data is typically divided into a number of data types, and each of these data types needs to be managed in the database for a varying length of time, as the product typically has different uses for them. In most products the data types can be categorized as follows:

TABLE 10 – DATA DIVERSITY TYPES

DATA TYPE                   TYPICAL COMPOSITION                  TYPICAL MANAGEMENT

Configuration Data (a.k.a.  Data driving the configuration of    Maintained by a subset of
Administration data)        the product (e.g. menus, rates,      individuals. Kept indefinitely and
                            security, reference data etc).       only represents a small part of
                                                                 any database.

Master Data                 Data pertaining to customers/        Maintained by end users. Kept
                            taxpayers (personal records,         indefinitely but can be driven by
                            addresses, account information,      government legislation such as
                            contracts, etc).                     privacy laws or industry rules.

Transactional Data          Day-to-day data relating to any      Data that is still active is
                            interaction or activity against      retained for operational reasons.
                            the Master Data (e.g. bills,         Historical data is deleted or
                            cases, payments, contacts etc).      archived according to business
                                                                 rules or government legislation.

The table above illustrates the various differences between the types of data and their usual data retention rules. During an implementation and post implementation, you must be aware of the data types and then plan the data retention rules accordingly.

Archiving

Note: The Archive Engine is only available for selected Oracle Utilities Application Framework based products. Refer to your product documentation to verify its validity.

One of the most used techniques for managing data is archiving. The idea is that you only keep data in your database that is actively needed; any additional data is either archived to another location or simply deleted. Processing is therefore optimized against the active data, without having to skip over records no longer needed.

Archiving is usually associated with transactional data, as it typically has a limited life. Data is kept according to business practices or government regulations (especially those around taxation records retention). Most customers keep a number of years of active transactional data and archive any data past the activity date.

The key to archiving is to know what to archive and to ensure that archiving that data does not violate a business rule or compromise the integrity of the overall system. Therefore most of the activity in archive planning is identifying the data to archive (transactional data), the criteria by which the data becomes valid for archive, and what form the archive is going to take (another database, file or microfiche).

Determining the data to archive is an important first step. Typically transactional data is a candidate but there may be circumstances where master data is also archived. For example, you might archive customer records if the customer becomes deceased. The following types of tables are ideal candidates for archiving:

Transaction Tables with large amounts of records – Archiving such tables can have double gains. First of all you are removing records that have to be ignored by the processing and also you may be freeing up valuable space as you reduce the sizes of the tables.

Transient data – Typically data that is included in interfaces may be loaded into tables prior to loading into the main transaction tables. This is known as staging. It is a common technique, as validation can be executed against the staging area and only validated data passed to the transaction tables. This separates invalid data from valid data. Invalid records are kept in the staging area until they are resolved. The only issue then is that records that are valid must be removed from the staging area on a regular basis. A common principle in records retention is that if you can get a record from someplace else then you can remove one of the copies. For example, if you print off an email, you still have a record of it; therefore you can delete the electronic copy or destroy the physical copy, you do not need both. This principle applies to the staging area, where the valid records are already in the transaction tables so they can be safely removed from the staging area. The only exception to this principle is where a business process or regulation requires you to keep both.

Living data – Data pertaining to living customers needs to be retained for processing but if you work in a deregulated market where you must surrender details of customers as part of the process of transferring them to a competitor (a.k.a. losing a customer) then they may become candidates for archiving. The validity of this case may vary according to business practices or regulations. The same principle can be applied to customers who become deceased. What data, if any, do you retain when a customer dies? Does the data become a candidate for archival?

Once the data to be archived is determined, the next step is to identify the criteria that will be used to decide when the data is valid to archive. Usually, archive criteria are time based (e.g. older than x months) but can be quite sophisticated. The criteria will be set by business processes or government legislation, but there are a few additional criteria that also need to be considered:

Active Data – If a record satisfies the time criteria but is somehow still active then it is not eligible for archival. For example, if a payment is older than business rules recommend, but is in dispute for some reason then it cannot be archived.

Integrity – When archiving data, no integrity rule (referential or otherwise) should be broken. You must guarantee that archiving of this record will not adversely affect other records in the system or even prevent the system from operating. For each record deletion, any related tables should be examined to see if any condition prevents the deletion (or they should be covered in the archive as well – this is known as a cascade archive).

De-archive – One of the major misconceptions about archiving from a data management aspect is the ability to return data from an archive (a.k.a. de-archival). Not all archiving facilities do this, as the space saving benefits of archiving are somewhat diluted by this ability, and the overhead of re-integrating the archived data into active data can be quite difficult and messy. The best advice is to avoid this situation altogether and ensure the criteria used covers data that is not going to be de-archived.

Note: The current version of the archive facility inbuilt in the Oracle Utilities Application Framework does not support de-archival.
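The three criteria above (age, activity, integrity) can be combined into a single eligibility test. This is an illustrative sketch, not the product's Archive Engine; the record fields (last_activity, is_active, blocking_references) are invented names for the purposes of the example:

```python
from datetime import date, timedelta

def eligible_for_archive(record, cutoff_days, today=None):
    """A record may be archived only if it is old enough, no longer active,
    and archiving it would break no integrity rule."""
    today = today or date.today()
    old_enough = record["last_activity"] <= today - timedelta(days=cutoff_days)
    return (old_enough
            and not record["is_active"]              # e.g. a disputed payment stays
            and record["blocking_references"] == 0)  # no dependent rows left behind

# A payment untouched for over a year, not in dispute, with no dependents:
rec = {"last_activity": date(2009, 1, 1), "is_active": False,
       "blocking_references": 0}
eligible_for_archive(rec, cutoff_days=365, today=date(2011, 6, 1))  # True
```

A cascade archive would replace the blocking_references check with archiving the dependent rows in the same run.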

Once the data to be archived has been identified and the criteria agreed and implemented then the format of the archive needs to be taken into account. There are a number of options that can be considered:

Using the Archive facility – If there is a requirement for the data to be made available online but not active in the system, then consider using the inbuilt archiving facility provided in the Oracle Utilities Application Framework.

File based – If there is NO requirement to view the data online but it must be made available to offline viewers (such as data loaders or even microfiche) then archiving data to a sequential file for reference is an alternative. This data can then be archived to tape or to a location from which it can be retrieved and viewed at a later stage. The format of the file is site based but can be as simple as a database export or as complex as formatted fixed-format multi-record data files. The archive facility provided with the Oracle Utilities Application Framework provides a facility to archive the data to a file.

Purge Only – One of the most common archiving techniques is to simply delete the data from the database. This is applicable to all techniques eventually (you cannot keep the data forever). In this technique the records identified to be archived (passing the criteria) are simply deleted from the database to release space.

Archiving data on a regular basis can remove inactive data from your data which may improve performance and save disk space. Generally, customers run archiving processes at least once a week.

Data Retention Guidelines

One of the most common requirements that must be considered during an implementation and post implementation is how long to retain data in the active production database. Even though disk space is becoming cheaper over time, there is always a cost based limit to how much should be stored.

Typically it is the customer's business practices that dictate the amount of historical data stored in the database at any time. There are a number of key factors that govern data retention:

Government legislation – Most countries have a legal requirement to have information available in a computer system. Typically this requirement separates how much should be active and how much should be retained in a passive medium (e.g. archive or available in a backup format).

Business requirements - There is usually a business requirement to work on historical data. For example the business may need to be able to process financial data over a number of years. This requirement typically dictates the amount of historical data kept.

Physical capacity of the hardware – At the end of the day any machine used for any software has a physical limit. This limit is usually based upon business requirement and cost to the business.

Table Identifiers – All tables in the Oracle Utilities Application Framework based products have identifiers (some have multiple). The physical key size can be an indicator of the limit of the records that can be kept. It should be noted that most of the Oracle Utilities Application Framework based products have designed their key sizes to cover the majority of expected data cases in the field.

Audit requirements – Typically, each site will have some sort of auditing function, within the company or from an independent auditing firm. This auditing capability will expect a certain amount of historical data, directly or indirectly in the product, to adequately operate an audit. This requirement is usually forgotten by most sites until they need it. During an implementation, or soon after, the audit requirements should be clarified and factored into any data retention policy. It should be noted that the products themselves do not impose any particular data retention policy.

Data Retention tends to apply to specific data types only:

Transactional data is subject to data retention rules as it is the data that grows over time.

Master data tends to remain in the database for the life of the system, even in a deregulated market, for fraud prevention purposes.

Meta-data is not covered by data retention policy as it needs to be there to make the product operate, so it is rarely archived or removed.

Configuration data will vary, as it is wide ranging, but generally is also rarely archived or removed.

In terms of their platform, customers should monitor the data growth to reach a decision about archiving, if they wish to do so, or simply removing the data.

Typically, once the status of a record in the staging tables used for interfaces becomes Complete, it becomes redundant data. The data will be reflected in the main product tables and is not required in the staging tables anymore. Removal of completed records on a regular basis can have storage benefits as well as performance benefits.

Removal of Staging Records

The product uses a staging concept for most of the major interfaces. This involves a process, known as Process X, to load the staging tables; a base product background process is then run to validate and copy the valid staging data into the relevant main tables. When records are loaded initially, the status of the records is set to Pending, indicating they are ready to process. Once the relevant base product background process processes them, the status is changed to either Complete (for valid records) or Error (for invalid records). Invalid records can be corrected using the relevant staging online query to manually resolve the error.
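The lifecycle just described can be written down as a small state machine. The status names follow the text; the transition table itself is an illustrative assumption:

```python
# Allowed staging-record status transitions, per the lifecycle in the text.
TRANSITIONS = {
    "Pending": {"Complete", "Error"},  # set by the validation background process
    "Error": {"Pending"},              # manual correction re-queues the record
    "Complete": set(),                 # terminal until purged
}

def move(status, new_status):
    """Validate and apply a status change for a staging record."""
    if new_status not in TRANSITIONS[status]:
        raise ValueError(f"illegal transition {status} -> {new_status}")
    return new_status

status = move("Pending", "Error")   # validation failed
status = move(status, "Pending")    # corrected via the staging online query
status = move(status, "Complete")   # re-validation succeeded
```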

This is summarized in the figure below:

[Figure: Inbound – the Source System extract is loaded into the Input Staging tables by Process X (status Pending); the product validate/load process copies valid records into the Main tables and updates the status to Complete (or Error for invalid records). Outbound – a product extract process writes Main table data to the Output Staging tables (Pending, then Complete/Error), and Process X extracts it by run number to the Target System.]

Figure 11 – Staging Process Overview

It is assumed that completed staging records are no longer required after a period of time, as the data they contain has been reflected in the main tables. There is no business reason to keep completed staging records for long periods after they have been completed.

Regular cleanups of the staging tables to remove completed records will have great performance benefits on interfaces. Successful sites run the provided purge jobs to improve performance and reduce disk space usage.

To decide when to run these purge jobs and what parameters to pass to them, the following is recommended:

Work out with the business at the site how long they wish to retain the number of completed records. You can stress to them that NO important data is lost in purging completed records as their data is reflected in main tables. The value is used for the NO-OF-DAYS batch parameter passed to the job. The value is the number of days not the number of business days (e.g. A value of 14 for NO-OF-DAYS means 2 weeks).

For the To Do Purge job, there are additional parameters to decide the specific To-Do type to purge or ALL (DEL-TD-TYPE-CD and DEL-ALL-TD-SW). Work with the business to decide if this job is to be run once (for all To Do types) or multiple times for each To-Do Type. Successful customers run it to delete all To Do types to reduce the number of jobs to run.


Decide the frequency based upon the data growth of each table. Ideally these purge processes should be run each business day at the end of the nightly batch schedule to keep the tables at their optimum size, but they should be run at least once a week.
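The NO-OF-DAYS arithmetic above is worth spelling out, since the parameter counts calendar days, not business days. The following is a hypothetical illustration of the selection logic, not the actual purge job code; the record fields are invented:

```python
from datetime import date, timedelta

def purge_cutoff(no_of_days, today=None):
    """Completed records older than this date are eligible for purging."""
    today = today or date.today()
    return today - timedelta(days=no_of_days)

def select_purgeable(records, no_of_days, today=None):
    """Pick Complete staging records whose completion date predates the cutoff."""
    cutoff = purge_cutoff(no_of_days, today)
    return [r for r in records
            if r["status"] == "Complete" and r["completed_on"] < cutoff]

# NO-OF-DAYS = 14 keeps two weeks of completed records.
rows = [{"status": "Complete", "completed_on": date(2011, 5, 1)},
        {"status": "Complete", "completed_on": date(2011, 5, 30)},
        {"status": "Error",    "completed_on": date(2011, 4, 1)}]
select_purgeable(rows, 14, today=date(2011, 6, 1))  # only the 2011-05-01 row
```

Note that Error records are never selected; they remain until manually resolved.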

Partitioning

One of the most popular data management techniques is the use of partitioning on tables. Partitioning enables tables and indexes to be split into smaller, more manageable components.

Partitioning allows a table, index or index-organized table to be subdivided into smaller pieces. Each piece of the database object is called a partition. Each partition has its own name, and may optionally have its own storage characteristics, such as having table compression enabled or being stored in a different tablespace. From the perspective of a database administrator, a partitioned object has multiple pieces that can be managed either collectively or individually. This gives the administrator considerable flexibility in managing partitioned objects. However, from the perspective of the product, a partitioned table is identical to a non-partitioned table; no modifications are necessary when accessing a partitioned table using SQL.

Partitioning has known benefits:

Divide and Conquer - With partitioning, maintenance operations can be focused on particular portions of tables. For example, a database administrator could back up a single partition of a table, rather than backing up the entire table. For maintenance operations across an entire database object, it is possible to perform these operations on a per-partition basis, thus dividing the maintenance process into more manageable chunks.

Parallel Execution of SQL – Most databases will sense that the table is partitioned and run SQL statements (including SELECT and INSERT statements) in multiple threads. Each of the partitions can be thought of as an individual table, and the database takes advantage of this to parallelize work.

Pruning – Queries operating on one partition can run substantially faster due to reduced size of the data to search.

Partition Availability - Partitioned database objects provide partition independence. This characteristic of partition independence can be an important part of a high-availability strategy. For example, if one partition of a partitioned table is unavailable, all of the other partitions of the table remain online and available; the product can continue to execute queries and transactions against this partitioned table, and these database operations will run successfully if they do not need to access the unavailable partition.

When using partitioning you should ensure that major processes accessing the table do not cross partition boundaries. Crossing from one partition to another can cause slight delays, as physically the table has been separated into individual files per partition. This situation can be avoided by designing the partitioning regime for the table accordingly.

The key to success to partitioning is recognizing which tables are candidates for partitioning and what partitioning scheme to use. Partitioning must be planned and designed into a database to ensure that the partitioning regime is optimal for your products.

The ideal candidates for partitioning are large tables with a small number of indexes. The benefits of partitioning are greatest for large tables, rather than applying the principles across all tables, and a small number of indexes minimizes the likelihood of SQL crossing partition boundaries.

Once the tables are chosen to be partitioned then the next step is to decide the number of partitions to implement. The rule of thumb is to choose the number of partitions so that any SQL that accesses the table using the indexes will minimize crossing partition boundaries. If your product is multi-threaded then each thread of the process needs to remain within a partition. In this case the number of partitions should be equal to the number of threads (or a divisor). For example, if a major process runs in 10 threads then the number of partitions could be 10, 5 or 2. Each of the numbers ensures that each thread stays within a partition.

Once the number of partitions is chosen, the next step is to decide which partitioning scheme to use. Database vendors have implemented numerous ways of dividing a table into partitions. Each of these schemes (and sometimes combinations of them) tells the database how to split the data into the various partitions as well as how to access the partitions. The most common partitioning scheme is known as range partitioning, where a range of values (index based) designates the partition a record is placed within. Refer to the partitioning documentation provided by your database vendor for details of all the different schemes that can be used to partition your table data.

Table partitioning represents the easiest method of data management and is usually the first data management technique used before other techniques are considered.

Compression

Note: Database level compression varies from one database vendor to another. In some cases, it is included as an optional component of the database and in other cases, it is a separate piece of software that must be obtained from the database vendor (or an approved third party).

A technique that is starting to emerge from the database vendors is compression of data. This can be done at a database level (global) or a table level and typically requires no changes to a product to implement.

As the data is stored and retrieved it is compressed and decompressed before being passed back to the product. As far as the product is concerned, it is unaware of whether the data is compressed. This appeals to database administrators as they can experiment with compression without the need to involve the product developers.

Database systems have not heavily utilized compression techniques on data stored in tables. One reason is that the trade-off between time and space for compression is not always attractive for databases. A typical compression technique may offer space savings, but only at a cost of much increased query time against the data. Furthermore, many of the standard techniques do not even guarantee that data size does not increase after compression.

Over time, database vendors have addressed the trade-off by implementing unique compression techniques. It has come to a stage where there is virtually no negative impact on the performance of queries against compressed data; in fact, compression may have a significant positive impact on queries accessing large amounts of data, as well as on data management operations like backup and recovery. Each database vendor will supply guidelines on the effective use of compression to minimize any overhead for all SQL statements (including INSERTs, UPDATEs etc.) and on which tables are the best candidates for compression.

Note: Not all tables in Oracle Utilities Application Framework based products will benefit from compression as the database vendors have imposed efficiency rules that may preclude specific tables.

Database Clustering

One of the more advanced features that has emerged as a valid data management technique is the ability for databases to be clustered. This is a relatively new technique for data management, as most people associate clustering with availability rather than with management of data volumes.

Database clustering provides the ability for a database to be spread across more than one machine while appearing to the product as a single database. The database management system manages all the synchronization and load balancing of transactions automatically. It was designed to support the availability of the database in case of a hardware failure in one of the nodes of the cluster.

Experience within the industry has shown that using the clustering capabilities can also improve performance when large amounts of data are involved. Logically, clustering enables the database to access more processing power by spreading the workload across machines.

This technique is applicable where the volume of the data is impacting database performance. One of the major symptoms is that CPU usage on the database server is consistently high, no matter what tuning is performed at the database and product level. This implies that the database is CPU bound; while there may be an option to add more CPUs to the server, clustering the data becomes a viable alternative.

While implementing clustering has been made progressively easier with each release of the database management system, implementing clustering must be planned using the guidelines outlined by the database vendor. Refer to the documentation provided on clustering by your database vendor.

Backup and Recovery

One of the most critical components of the implementation and ongoing support of the product at a site is the ability to back up the data and software to ensure business continuity. Equally important is the ability to easily restore that data if the need arises.

Typically a site will have a preferred regime and set of tools used to back up and recover all systems at the site. When implementing the product, this regime and set of tools is typically reused to cater for the product's and business' needs.

When considering a backup regime for product the following should be considered:

There is nothing within product technically that warrants a particular approach to Backup and Recovery. Most customers continue to use their existing approaches.

There is nothing within product technically that warrants a particular backup and recovery tool. Most customers use the native tools provided with their platforms for cost savings, but some customers have purchased additional infrastructure to take advantage of faster backups/recoveries or additional features provided by such tools.

If your site does not have a backup regime already the following can be considered default industry practice:

Use hot incremental backups on production during the business week to reduce outage times.

Do a FULL backup (hot or cold) at least once a week to reduce recovery times.

Verify backups after they are taken to reduce risk of delayed recoveries.

On non-production, consider either the same regime as production or consider regular FULL backups at peak periods in an implementation.
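As one hedged illustration of this default regime, the crontab sketch below schedules daily hot incremental backups, a weekly full backup, and a daily verification step. The script names, paths, and times are hypothetical placeholders; the actual backup commands inside those scripts depend on your chosen tool (for example, Oracle RMAN) and site standards.

```shell
# Hypothetical crontab entries implementing the regime above.
# Script names and schedule times are placeholders, not product deliverables.

# Mon-Sat 02:00 - hot incremental backup of production to reduce outage times
0 2 * * 1-6  /opt/backup/hot_incremental.sh >> /var/log/db_backup.log 2>&1

# Sunday 01:00 - FULL backup to reduce recovery times
0 1 * * 0    /opt/backup/full_backup.sh >> /var/log/db_backup.log 2>&1

# Daily 05:00 - verify the most recent backup to reduce risk of delayed recoveries
0 5 * * *    /opt/backup/verify_backup.sh >> /var/log/db_backup.log 2>&1
```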

Writing Files Greater than 4GB

Note: This advice applies to products that use the COBOL support contained within the Oracle Utilities Application Framework. 64-bit Java-based code automatically supports files greater than 4GB.

Note: This change should not be attempted if the interface using the file is 32 bit as this only applies to 64 bit COBOL on a 64 Bit operating system.

By default, any 64-bit COBOL based product extract process will create a file up to a 4GB limit. In the unlikely event that the extract process needs to create a file bigger than 4GB, there is a way of instructing the COBOL runtime to support larger files.

You must create a text based extension configuration file (say cmextfh.cfg) with the following contents:

[XFH-DEFAULT]

FILEMAXSIZE=8

IDXFORMAT=8

You then place this configuration file in a location that can be referred to by the runtime. You can either deposit the file in $SPLEBASE/scripts (or %SPLEBASE%\scripts) or in a site specific central location. To enable support for larger files, you initialize the EXTFH environment variable with the location of the configuration file. For example:

set EXTFH=D:\oracle\TUGBU\scripts\cmextfh.cfg (for Windows)

export EXTFH=/oracle/TUGBU/scripts/cmextfh.cfg (for Linux/UNIX)

This can be done in your .profile (for Linux/UNIX) or using the facilities outlined in Custom Environment Variables or JAR files.

For additional details and additional parameters refer to My Oracle Support Doc Id: 817617.1.
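The steps above can be scripted for Linux/UNIX. A minimal sketch follows, assuming $SPLEBASE points at the environment root (the /tmp fallback used here is purely illustrative):

```shell
# Write the extension configuration file (location is illustrative).
CFG="${SPLEBASE:-/tmp}/scripts/cmextfh.cfg"
mkdir -p "$(dirname "$CFG")"
cat > "$CFG" <<'EOF'
[XFH-DEFAULT]
FILEMAXSIZE=8
IDXFORMAT=8
EOF

# Point the COBOL runtime at the configuration file.
export EXTFH="$CFG"
echo "EXTFH=$EXTFH"
```

The export would typically live in your .profile or the product's custom environment variable facility so that batch processes pick it up.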

Client Computer Best Practices

Even though the product is browser based, there are some practices on the client machine that affect performance. This section outlines the practices for the client machine that have proved beneficial.

Make sure the machine meets at least the minimum specification

As part of the installation documentation for each installation of product, the minimum and recommended hardware for the client is specified. Typically SPL takes the following into consideration when specifying this information:

The minimum and recommended hardware as specified by Microsoft for the operating system used for the client.

A typical set of other applications running on the machine, typically Office-style applications.

While all care is taken in specifying the hardware with cost in mind, experience has shown that customers need to review the specification in light of their internal standards.

Internet Explorer Caching Settings

The Internet Explorer settings used must match the recommended settings as outlined in the product "Installation Guide", which includes:

Internet Explorer cache settings should be set to Automatically, NOT Every visit to the page. Certain elements of the browser user interface pages are cached on the client for performance reasons. Incorrect cache settings in Internet Explorer will increase bandwidth usage significantly and degrade performance, as screen elements will be retrieved on each visit rather than from the cache. The correct setting is shown below:

Figure 12 – Example Cache Setting

JavaScript must be enabled. The product framework uses JavaScript to implement the browser user interface.

HTTP 1.1 support must be enabled. If you use a proxy to get to the server, then also check "Use HTTP 1.1 through proxy connections".

Figure 13 – HTTP 1.1 Settings

Clearing Internet Explorer Cache

Between upgrades, it is advisable to manually clear the Internet Explorer cache to remove any elements that may still be in the cache that are not applicable to the new version. This is a rare situation, but clearing the cache can prevent stale or inappropriate elements "left over" from a previous version from being incorrectly displayed.

Optimal Network Card Settings

Typically the manufacturers of NIC devices provide a number of configuration settings to allow further optimization of network transmit and receive settings. Typically the defaults provided with the card are sufficient for the needs of the network traffic transmitted and received by the machine.

It may be worthwhile to investigate whether changing the settings (particularly the number of network buffers used) can improve performance at your site. Altering the settings may improve performance, but may also adversely affect it (due to higher CPU usage). The majority of customers use the default settings provided by the manufacturer.

Network Best Practices

The product ships data across a network between the clients and the various components of the architecture. This section outlines some of the practices to optimize the network elements of a configuration.

Network bandwidth

One of the most common questions asked about the product is the network footprint of the Oracle Utilities Application Framework based product. This question is difficult to answer precisely for a number of reasons:

The amount of data sent up and down the network is dependent on how much change is done by an individual user at the front end of the product. Only the elements changed by the end user are transmitted back to the server. The more the user changes the more the data is transmitted. Given the numerous possible permutations and combinations for data changes at any given time, this can be hard to estimate.

The Oracle Utilities Application Framework supports partial object faulting. This means the framework only sends data to the client that is being displayed. On screens with more than one tab, the framework only sends the data for the tabs that are accessed by the end user. This means only part of the overall object required by the screen is transmitted. Most users tend to operate on a small number of tabs, but this can vary from transaction to transaction.

All transmissions between the client and server are compressed using the compression natively supported by HTTP 1.1. This can reduce the actual size of the data transmission considerably, depending on the content of the changes.

Screen data that can be reused is cached on the client machine. The product takes advantage of the caching facilities in the HTTP 1.1 protocol and the browser caching functionality. For example, screen definitions and graphics are stored on the client machine to reduce the network footprint. Upon every transmission of a screen element, the data in the cache is tagged with an expiry date to indicate the life of the element in the cache. Use of client side caching can reduce the network traffic considerably, with some customers reporting up to 90% reduction in network traffic when this caching is enabled.

To provide an estimate for the network footprint, a range of 10–200 KB per transaction, on average, is quoted to adequately cover all the aspects outlined above. This value is based upon experience with customers.

It is possible to track network bandwidth using a log analyzer against the W3C standard access.log produced by your Web Application Server. Refer to the Performance Troubleshooting Guides for more information about this log.
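As a minimal sketch of such an analysis, the following assumes a common-log-format access.log in which the response size in bytes is the tenth whitespace-separated field; the sample entries and file location are fabricated for illustration, and the real field layout depends on your Web Application Server's log configuration.

```shell
# Build a small sample access.log (entries are fabricated for illustration).
cat > /tmp/sample_access.log <<'EOF'
10.0.0.1 - user1 [01/Jun/2011:10:00:00 +0000] "GET /spl/page1 HTTP/1.1" 200 10240
10.0.0.2 - user2 [01/Jun/2011:10:00:05 +0000] "GET /spl/page2 HTTP/1.1" 200 20480
EOF

# Average response size per request; $10 is the bytes field in this layout.
awk '{ sum += $10; n++ } END { printf "requests=%d avg_bytes=%d\n", n, sum / n }' /tmp/sample_access.log
```

A dedicated log analyzer provides richer reporting, but a quick summary like this is often enough to track trends between releases.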

Ensure legitimate Network Traffic

One of the major factors in performance is the amount of legitimate traffic on the network. The traffic to and from the product shares the bandwidth with all other traffic on the network. If there is any network congestion, then all transactions from all network-based applications will be adversely affected.

Some customer sites have found that traffic that is not legitimate can adversely affect network performance. Traffic that is considered not legitimate includes:

Traffic generated from viruses and Trojans – There are a plentiful number of viruses and Trojans in the general Internet that can cause bandwidth issues. Most sites (but not all) have regular virus protection in place to minimize the impact to the network. While such protection is not a requirement of the product, the industry in general recognizes the need for it.

Unauthorized large transfers – Large transfers of data can adversely affect performance as it can soak up bandwidth if the transfer is not configured correctly. There have been instances of large FTP transfers slowing down traffic on lower bandwidth networks.

Ensuring that only legitimate traffic is on a network can provide greater bandwidth for all applications (including product) and improve consistency.

Regularly check network latency

In a network, latency, a synonym for delay, is an expression of how much time it takes for a packet of data to get from one designated point to another. In some usages, latency is measured by sending a packet that is returned to the sender; the round-trip time is considered the latency. The greatest impact on performance is inconsistent latency.
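Round-trip latency can be sampled with a tool such as ping and its summary line parsed for regular trending. A minimal sketch follows; the summary line below is a fabricated sample standing in for real ping output, and the exact format of that line varies by platform.

```shell
# In practice you would run: ping -c 5 <appserver-host> | tail -1
# and pipe that summary line into the awk below for trending.
# Here a fabricated summary line stands in for real ping output.
echo "rtt min/avg/max/mdev = 0.045/0.052/0.061/0.006 ms" |
  awk -F'[=/]' '{ printf "avg_rtt_ms=%s\n", $6 }'
```

Recording this value at regular intervals makes inconsistent latency visible, which, as noted above, is the condition with the greatest impact on performance.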

The latency assumption seems to be that data should be transmitted instantly between one point and another (that is, with no delay at all). The contributors to network latency include:

Propagation - This is simply the time it takes for a packet to travel between one place and another at the speed of light.