
An Oracle White Paper June 2011

Technical Best Practices Oracle Utilities Application Framework

Oracle Utilities Application Framework - Technical Best Practices

Technical Best Practices
  Conventions used in this whitepaper
Introduction
Background of Oracle Utilities Application Framework
Installation Best Practices
  Read the Installation Guide
  Ensure the prerequisites are installed
  Environment Practices
  Using multiple administrators
  Checking Java Installation
  Checking COBOL Installation
  Additional Oracle WebLogic Installation settings
  COBOL License Errors in Batch
  Location of Installation Logs
  XML Parser Errors in installation
  AppViewer cannot Co-Exist in Archive Mode
  Implementing Secure Protocols (https/t3s)
General Best Practices
  Limiting production Access
  Regular Collection and Reporting of Performance Metrics
  Respecting Record Ownership
  Backup of Logs
  Post Process Logs
  Check Logs For Errors
  Optimize Operating System Settings
  Optimize connection pools
  Read the available manuals
  Technical Documentation Set Whitepapers available
  Implementing Industry Processes
  Using Automated Test Tools
  Custom Environment Variables or JAR files
  Help and AppViewer can be used standalone
  Re-Register only when necessary
  Secure default userids
  Consider different userids for different modes of access
  Don't double audit
  Use Identical Machines
  Regularly restart machines
  Avoid using direct SQL to manipulate data
  Minimize Portal Zones not used
  Routine Tasks for Operations
  Typical Business Day
  Login Id versus Userid
  Hardware Architecture Best Practices
  Failover Best Practices
  Online and Batch tracing and Support Utilities
  General Troubleshooting Techniques
Data Management Best Practices
  Respecting Data Diversity
  Archiving
  Data Retention Guidelines
  Removal of Staging Records
  Partitioning
  Compression
  Database Clustering
  Backup and Recovery
  Writing Files Greater than 4GB
Client Computer Best Practices
  Make sure the machine meets at least the minimum specification
  Internet Explorer Caching Settings
  Clearing Internet Explorer Cache
  Optimal Network Card Settings
Network Best Practices
  Network bandwidth
  Ensure legitimate Network Traffic
  Regularly check network latency
Web Application Server Best Practices
  Make sure that the access.log is being created
  Examine Memory Footprint
  Optimize Garbage Collection
  Turn off Debug
  Load balancers
  Preload or Not?
  Native or Product provided utilities?
  Hardware or software proxy
  What is the number of Web Application instances do I need?
  Configuring the Client Thread Pool Size
  Defining external LDAP to the Web Application Server
  Synchronizing LDAP for security
  Appropriate use of AppViewer
  Fine Grained JVM Options
  Customizing the server context
  Clustering or Managed?
  Allocate port numbers appropriately
  Monitoring and Managing the Web Application Server using JMX
  Enabling autodeployment for Oracle WebLogic console
  Password Management solution for Oracle WebLogic
  Error configuring Oracle WebLogic credentials
  Corrupted SPLApp.war
  Web Application Server Logs
  IBM WebSphere Specific Advice
Business Application Server Best Practices
  Distributed or local installation
  Number of Child JVMs
  COBOL Memory management
  Cache Management
  Monitoring and Managing the Business Application Server using JMX
  Database Connection Management
  XPath Memory Management
Database Best Practices
  Regularly Calculate Database Statistics
  Ensure I/O is spread evenly across available devices
  Use the Correct NLS settings (Oracle)
  Monitoring database connections
  Consider changing Bit Map Tree parameter
  OraGenSec command line Parameters
  SetEnvId command line Parameters
  Building the Data Model


Technical Best Practices


This white paper outlines the common and best practices used by IT groups at sites around the world using Oracle Utilities Application Framework based products, together with Oracle internal studies, that have benefited sites in some positive way. This information is provided to guide other sites in implementing or maintaining the product.

While all care has been taken in providing this information, implementation of the practices outlined in this document may NOT guarantee the same level of (or any) improvement. Some of these practices may not be appropriate for your site. It is recommended that each practice be examined in light of your particular organizational policies and use of the product. If the practice is deemed beneficial to your site, then consider implementing it. If the practice is not appropriate (e.g. for cost or other reasons), then it should not be considered.

This whitepaper covers V2.x and above of the Oracle Utilities Application Framework based products. Where advice is applicable to a particular version of the product, a specific reference to that version is displayed. For V1.x customers, specific information for V1 is located in the V1 Addendum version of this document.

Note: For publishing purposes, the word "product" will be used to denote all Oracle Utilities Application Framework based products.

Note: Advice in this document is primarily applicable to the latest version of the Oracle Utilities Application Framework at the time of publication. Some of this advice may apply to other versions of the Oracle Utilities Application Framework and may be applied at site discretion.

Note: In some sections of this document the environment variable $SPLEBASE (or %SPLEBASE%) is used. This denotes the root location of the product install. Substitute the appropriate value for the environment used at your site.

Conventions used in this whitepaper


The advice in this document applies to any product based upon Oracle Utilities Application Framework versions 2.1 and above. Refer to the installation documentation to verify which version of the framework applies to your version of the product. For publishing purposes the specific facilities and instructions for specific framework versions will be indicated with icons:

Advice or instructions marked with this icon apply to Oracle Utilities Application Framework V2.1 based products and above.

Advice or instructions marked with this icon apply to Oracle Utilities Application Framework V2.2 based products and above.

Advice or instructions marked with this icon apply to Oracle Utilities Application Framework V4.0 based products and above.

Advice or instructions marked with this icon apply to Oracle Utilities Application Framework V4.1 based products and above.

Introduction
Implementation of the product at any site introduces new practices into the IT group to maintain the health of the system and provide the expected service levels demanded by the business. While configuration of the product is important to the success of the implementation (and subsequent maintenance), adopting new practices can help ensure that the system will operate within acceptable tolerances and support the business goals. This white paper outlines some common practices that have been implemented at sites around the globe that have proven beneficial to those sites. They are documented here so that other sites may consider adopting similar practices and potentially deriving benefit from them as well. The recommendations in this document are based upon experiences from various sites and internal studies, which have benefited from implementing the practices outlined in the document.

Background of Oracle Utilities Application Framework


The Oracle Utilities Application Framework is a reusable, scalable and flexible Java-based framework which allows other products to be built, configured and implemented in a standard way. When Oracle Utilities Customer Care & Billing was migrated from V1 to V2, it was decided that the technical aspects of that product be separated to allow for reuse and independence from technical issues. The idea was that all the technical aspects would be concentrated in this separate product (i.e. a framework) and allow all products using the framework to concentrate on delivering superior functionality. The product was named the Oracle Utilities Application Framework (oufw is the product code). The technical components contained in the Oracle Utilities Application Framework can be summarized as follows:

Metadata - The Oracle Utilities Application Framework is responsible for defining and using the metadata to define the runtime behavior of the product. All the metadata definition and management is contained within the Oracle Utilities Application Framework.


UI Management - The Oracle Utilities Application Framework is responsible for defining and rendering the pages and for ensuring the pages are in the appropriate format for the locale.

Integration - The Oracle Utilities Application Framework is responsible for providing the integration points to the architecture. Refer to the Oracle Utilities Application Framework Integration Overview for more details.

Tools - The Oracle Utilities Application Framework provides a common set of facilities and tools that can be used across all products.

Technology - The Oracle Utilities Application Framework is responsible for all technology standards compliance, platform support and integration.

The figure below summarizes some of the facilities that the Oracle Utilities Application Framework provides:

Meta Data - layout, personalization, scripting, roles, rules, language, localization
UI Management - zones, portals, language and locale, BPA scripting, UI Maps
Integration - XAI, Web Services, staging
Tools - scheduler, dictionary, conversion, To Do, security, auditing, algorithms, scripting
Technology - multi-database support, XML services, J2EE, AJAX, SOA
Underlying layers - business services, business objects, maintenance objects, database structure

Figure 1 - Overview of Oracle Utilities Application Framework components

There are a number of products from the Tax and Utilities Global Business Unit as well as from the Financial Services Global Business Unit that are built upon the Oracle Utilities Application Framework. These products require the Oracle Utilities Application Framework to be installed first and then the product itself installed onto the framework to complete the installation process. There are a number of key benefits that the Oracle Utilities Application Framework provides to these products:

Common facilities - The Oracle Utilities Application Framework provides a standard set of technical facilities so that products can concentrate on the unique aspects of their markets rather than making technical decisions.


Common methods of configuration - The Oracle Utilities Application Framework standardizes the technical configuration process for a product. Customers can effectively reuse the configuration process across products.

Common methods of implementation - The Oracle Utilities Application Framework standardizes the technical aspects of a product implementation. Customers can effectively reuse the technical implementation process across products.

Quicker adoption of new technologies - As new technologies and standards are identified as being important for the product line, they can be integrated centrally, benefiting multiple products.

Multi-lingual and Multi-platform - The Oracle Utilities Application Framework allows the products to be offered in more markets and across multiple platforms for maximum flexibility.

Cross product reuse - As enhancements to the Oracle Utilities Application Framework are identified by a particular product, all products can potentially benefit from the enhancement.

Note: Use of the Oracle Utilities Application Framework does not preclude the introduction of product specific technologies or facilities to satisfy markets. The framework minimizes the need and assists in the quick integration of a new product specific piece of technology (if necessary).

Installation Best Practices


During the initial phases of an implementation, a copy of the product will need to be installed. During the implementation a number of additional copies will be installed, including production. This section outlines some practices that customers have used to make this process smooth.

Read the Installation Guide


One of the most important pieces of advice in this document is to read the installation guide. It provides valuable information about what needs to be installed and configured, as well as the order of the installation. Failure to follow the instructions can cause unnecessary delays to the installation. If you are upgrading to a new version, read the new installation guide as well, as it will contain instructions on how to upgrade to the new version as well as details of what has changed in the new version.

Ensure the prerequisites are installed


When installing, there are a number of third-party prerequisite software packages that must be obtained (i.e. downloaded) before the actual installation of the product software can commence. Read the Installation Guide and Quick Installation Guide to download and install the prerequisite software prior to installing the product.


Note: For customers who are upgrading, the installation of the product and its related third-party software is designed so that more than one version of the product can co-exist.

Environment Practices
Note: There is a more detailed discussion of effective Environment Management in the Environment Management document of the Software Configuration Management series of whitepapers. Refer to that document for further advice.

When installing the product at a site, each copy of the product is regarded as an environment to perform a particular task or group of tasks. Typically, without planning this can lead to a larger than anticipated number of environments. This can have a negative flow-on effect by increasing overall maintenance effort and increasing resource usage (hardware and people), which may in turn cause delays in implementations. To minimize the impact of environments on their implementations, customers have used the following advice:

At the start of the implementation, decide the number of environments to use. Keep this to a minimum and consider sharing environments between tasks. Another technique associated with this is to specify an end date for each environment. This is the date the environment can be removed from the implementation. This can force rethinks on the number of environments that are to be used at an implementation and may force sharing.

For each environment, consider the impact on the hardware and maintenance effort, including the following:

- The time and resources it takes to install the environment.
- The time and resources it takes to keep the environment up to date, including application of single fixes, rollups/service packs and upgrades. Do not forget application and management of customization builds.
- The time and resources to maintain the ConfigLab and Archiving facilities for multiple environments, if used at an implementation. This includes the setup and regular migrations that will be performed. Note: ConfigLab and Archiving only apply to certain Oracle Utilities Application Framework products.
- The time and resources it takes to backup and restore environments on a regular basis. In some implementations, having different backup schemes for environments based upon tasks and update frequency for that environment (i.e. more updated = more frequent backup) may provide some savings.
- The time and resources to manage the disk space for each environment, including regular cleanups.
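The regular cleanup task mentioned above can be as simple as an aged-file sweep. The sketch below is illustrative only: the directory is a stand-in created for the demonstration, and the 30-day retention is an assumed value to be adjusted per site.

```shell
# Illustrative log sweep for a single environment; the directory and the
# 30-day retention are placeholders - substitute your site's values.
LOGDIR=/tmp/demo-env-logs
mkdir -p "$LOGDIR"
touch "$LOGDIR/example.log"
# List .log files older than 30 days; change -print to -delete only after
# verifying the listed files are the ones you expect to remove.
find "$LOGDIR" -name '*.log' -mtime +30 -print
```

Running the listing form first, and only then switching to deletion, keeps an accidental retention mistake from destroying logs that may still be needed.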

Environments may be set up so that the database can be reduced to a single database instance, with each environment having a different schema/owner. This will reduce the memory footprint of the DBMS on the machine, but may reduce availability if the database instance is shut down (all environments are affected). For non-production, most customers create a database instance for each environment at Oracle sites, and one database subsystem for each environment at DB2/UDB sites.

Using multiple administrators


By default, when installing the product, a single administrator account (usually referred to as splsys) is used to install and own the product. This is the default behavior of the installation, and apart from specifying a different userid than the default splsys, it is possible to use other userids to own all or individual environments. For example, if the conversion team wishes to have the ability to start, stop and monitor their own environments, you can create another administrator account and install their copies of the product using that userid. This allows the conversion team to control their own environments. If you did not have the ability to use multiple administrators, then they would have access to all environments (as you would have to give them access to the splsys account). One of the advantages of this approach is that you can delegate management of a copy of the product to other teams without compromising other environments. Another advantage is that you can quickly identify UNIX resource ownership by user rather than trying other methods. The only disadvantage is that to manage all copies of the product you will need to log on to the additional administration accounts that own the various copies.
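The ownership advantage above can be seen directly from a directory listing. In this sketch the base path, environment names and account names are illustrative only, not product defaults:

```shell
# Sketch only: paths and names are illustrative, not product defaults.
# With one administrator account per team, the owner column of ls output
# shows at a glance which team manages which environment.
BASE=/tmp/spl-demo
mkdir -p "$BASE/CONV01" "$BASE/DEV01"
# In a real install, CONV01 might be owned by a conversion-team account
# (e.g. a hypothetical splconv) and DEV01 by the default splsys account;
# here both are owned by the current user.
ls -ld "$BASE/CONV01" "$BASE/DEV01"
```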

Checking Java Installation


Note: For Oracle Utilities Application Framework V4.1 and above, it is possible to use two differing Java Virtual Machine versions if COBOL is used, as it is possible to configure the CHILD_JVM_JAVA_HOME separately. If this is the case, then repeat this process for the CHILD_JVM_JAVA_HOME JVM.

When the product is installed, one of the first prerequisites to be verified is the version of Java installed and referenced using the environment variable $JAVA_HOME (or %JAVA_HOME% on Windows). Whilst the product checks this version, it can be checked manually prior to installation (and at any time) using the following commands:

$JAVA_HOME/bin/java -version

Or (on Windows):

%JAVA_HOME%\bin\java -version


For example: Linux:

#> $JAVA_HOME/bin/java -version java version "1.6.0_18" Java(TM) SE Runtime Environment (build 1.6.0_18-b07) Java HotSpot(TM) 64-Bit Server VM (build 16.0-b13, mixed mode)


AIX:

#> $JAVA_HOME/bin/java -version java version "1.6.0" Java(TM) SE Runtime Environment (build pap6460sr7ifix20100220_01(SR7+IZ70326)) IBM J9 VM (build 2.4, JRE 1.6.0 IBM J9 2.4 AIX ppc64-64 jvmap6460sr7-20100219_54049 (JIT enabled, AOT enabled) J9VM - 20100219_054049 JIT - r9_20091123_13891 GC - 20100216_AA) JCL - 20091202_01
Windows:

C:\> %JAVA_HOME%\bin\java -version java version "1.6.0_20" Java(TM) SE Runtime Environment (build 1.6.0_20-b02) Java HotSpot(TM) Client VM (build 16.3-b01, mixed mode, sharing)
HP-UX:

#> $JAVA_HOME/bin/java -version java version "1.6.0.10" Java(TM) SE Runtime Environment (build 1.6.0.10jinteg_11_mar_2011_09_19-b00) Java HotSpot(TM) Server VM (build 19.1-b02-jinteg:2011mar1107:33, mixed mode)
Note: Verify the java version number and operating mode (32/64 bit) against the Quick Installation Guide provided with the product.
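If the check is scripted, the quoted version string can be cut out of the java -version banner with awk. A captured sample line (from the Linux example above) is parsed here so the logic is visible without a JVM; on a live system pipe the real command's output instead, as shown in the comment:

```shell
# On a live system:
#   "$JAVA_HOME/bin/java" -version 2>&1 | awk -F '"' '/version/ {print $2}'
# Here a captured sample line stands in for the live output.
SAMPLE='java version "1.6.0_18"'
printf '%s\n' "$SAMPLE" | awk -F '"' '{print $2}'   # prints 1.6.0_18
```

Note that java -version writes to stderr, hence the 2>&1 redirection in the live form.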

Checking COBOL Installation


Note: Not all products support COBOL based extensions; therefore this section may not apply. Check your installation guide for more details.

By default, when the COBOL runtime is installed, a license file is required to complete the installation as outlined in the Quick Installation Guide for the product. The license can be tracked using the process outlined in the Installation Guide or the following command:

cobsje -J $JAVA_HOME
Note: This command should be executed AFTER executing the splenviron[.sh] utility to initialize the environment variables used by the utilities and place the COBOL runtime in the PATH. If the license is NOT installed the response should be similar to the text below:

Error - No license key detected. Application Server requires a license key in order to execute. Please refer to your application supplier.


This message indicates that there is an issue with the license key on the server. If this message appears, to remedy the situation it is recommended that the COBOL runtime be re-installed and the license key re-initialized using apptrack, as per the Installation Guide for the product. If the license key is installed correctly, the cobsje utility will return a message similar to the following:

#> cobsje -J $JAVA_HOME Java version = 1.6.0_20 Java vendor = Sun Microsystems Inc. Java OS name = SunOS Java OS arch = sparcv9 Java OS version = 5.10
Additionally, the 64 bit version of COBOL is required to be used for 64 bit platforms as indicated in the Installation Guide for the product. To verify that the COBOL runtime is 64 bit, the following command can be used:

cob -v
This should return the output similar to the following:

cob64 -C nolist -CC -KPIC -A -KPIC -N PIC -v I see no work


The cob64 prefix indicates the use of 64 bit COBOL.
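This check can also be scripted by matching the cob64 prefix in the banner. A captured sample line is used below so the logic is runnable without a COBOL install; on a real server the sample would come from the cob -v output itself (e.g. its first line), as noted in the comment:

```shell
# On a real server, replace SAMPLE with the live banner, for example:
#   SAMPLE=$(cob -v 2>&1 | head -1)
SAMPLE='cob64 -C nolist -CC -KPIC -A -KPIC -N PIC -v'
case "$SAMPLE" in
  cob64*) echo "64 bit COBOL runtime detected" ;;
  *)      echo "COBOL runtime is not 64 bit" ;;
esac
```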

Additional Oracle WebLogic Installation settings


When installing the Oracle WebLogic Server product as a prerequisite installation, there are a number of additional pieces of advice that can be taken into account to optimize the installation:

- Avoid installing the Oracle WebLogic Server in the home directories of users for Linux installations. Other application Linux users, such as the Oracle Utilities Application Framework administration user, should not access the home directories or any subdirectory of the home directory.
- If the platform uses a hybrid 32/64 bit JDK, such as HP-PA, HP-IA and Solaris64, then include the -d64 flag when initiating the installation of Oracle WebLogic Server to ensure that 64 bit is used. For example, if installing in graphical mode using the Package installer:

HP-UX/Unix:

java -d64 -jar wlsversion_generic.jar


Solaris64:

java -Xmx1024m -jar wlsversion_generic.jar


Windows:

java -D64 -jar wlsversion_generic.jar

COBOL License Errors in Batch


If the product has COBOL based background processes and the COBOL license is not installed correctly (see Checking COBOL Installation for more details) then an error message similar to the example below will be displayed:

cobjrun64: com.splwg.base.api.batch.ThreadPoolWorker.main ended due to an exception Exception in thread "main" com.splwg.shared.common.LoggedException: The following stacked messages were reported as the LoggedException was rethrown: com.splwg.base.support.context.ContextFactory.createDefaultCo ntext(ContextFactory.java:569): error initializing test context
To resolve this issue refer to the instructions in the Quick Installation Guide about installing the COBOL license.

Location of Installation Logs


When installing the product, a log file is written for each component installed (the Oracle Utilities Application Framework is one component of the installation; the product install is a separate installation component). The log contains all the messages pertaining to the installation process, including any error messages for installation errors encountered. The log is located in the directory the installation was initiated from, and the name is in the format:

install_<product>_<environment>.log

Where:

<product> - Product code of the product component you are installing. For example, FW = Oracle Utilities Application Framework.

<environment> - Name of the environment that is being installed.


Check this log for any error messages during the installation process.
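A quick way to apply that advice is to sweep the log for error lines with grep. In the sketch below a tiny stand-in log is created so the command is runnable as-is; on a real system point LOG at the actual file, with the product code and environment name following the install_<product>_<environment>.log pattern (FW and DEV01 here are example values):

```shell
# Stand-in log so the command runs as-is; on a real system set LOG to the
# actual install log, e.g. install_FW_DEV01.log (names are examples).
LOG=/tmp/install_FW_DEV01.log
printf 'step 1 ok\nERROR: tablespace missing\nstep 2 ok\n' > "$LOG"
# -i matches ERROR/Error/error; -n prints the line number for quick lookup.
grep -in 'error' "$LOG"    # prints: 2:ERROR: tablespace missing
```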

XML Parser Errors in installation


The Oracle Client is used by the installers and utilities to provide access to the Perl runtime and associated libraries used by the installer and utilities. This is the first configuration question in the installation process. The Oracle Client can be installed (if the product is not installed on a machine containing the Oracle Database software), or an existing ORACLE_HOME can be specified if the Oracle Database software is already installed on the machine (as it contains the Oracle Client in the installation). The value is stored in the ENVIRON.INI as the value for the parameter ORACLE_CLIENT_HOME.

Note: For Windows Server environments, the 32 bit client MUST be installed for use with the installation utilities. This applies even if the 64 bit Oracle Database software is installed on the same machine.

If the Oracle Client or ORACLE_HOME is invalid, then the following error will be returned by the installation utilities (and other installs):

Can't locate XML/Parser.pm in @INC (@INC contains: BEGIN failed--compilation aborted at data/bin/perllib/SPL/splXMLParser.pm line 3. Compilation failed in require at data/bin/perllib/SPL/splExternal.pm line 10. BEGIN failed--compilation aborted at data/bin/perllib/SPL/splExternal.pm line 10. Compilation failed in require at install.plx line 25. BEGIN failed--compilation aborted at install.plx line 25. Error: install.plx didn't finish successfully. Exiting.
Ensure that the ORACLE_CLIENT_HOME includes the perl subdirectory to rectify this issue.
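A hedged pre-check along these lines can confirm the directory recorded as ORACLE_CLIENT_HOME carries the perl subdirectory before the installer is run. The path below is a stand-in created purely for the demonstration; use the value from your ENVIRON.INI instead:

```shell
# Demo path; substitute the ORACLE_CLIENT_HOME value from your ENVIRON.INI.
ORACLE_CLIENT_HOME=/tmp/demo-oracle-client
mkdir -p "$ORACLE_CLIENT_HOME/perl"     # stand-in so the check is runnable
if [ -d "$ORACLE_CLIENT_HOME/perl" ]; then
  echo "perl subdirectory present"
else
  echo "perl subdirectory missing - expect XML/Parser.pm errors" >&2
fi
```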

AppViewer cannot Co-Exist in Archive Mode


The Application Viewer is an optional component that provides a metadata viewer for the data dictionary, batch controls, To Do types, javadoc etc. It is primarily designed for use by the developers and key architects at your site. If the site decides to move from expanded mode to archive mode (or vice versa) on Oracle WebLogic installations, then when executing initialSetup[.sh] the product may report the following error:

AppViewer.war cannot co exist with AppViewer directory


For archive mode the AppViewer.war is required, and for expanded mode the AppViewer directory is used. The error message indicates both exist. This can occur when the expanded mode setting is changed and the initialSetup[.sh] utility is re-executed. To resolve this issue, depending on the value of the WEB_ISEXPANDED parameter, the following is recommended:
TABLE 1 APPVIEWER CO-EXIST ERROR RESOLUTION WEB_ISEXPANDED VALUE COMMENTS

Generally customers do not implement the AppViewer in production. Expanded mode is only available for Oracle WebLogic and Oracle Utilities Application Framework V4.0 and above.
1 2

14

Oracle Utilities Application Framework - Technical Best Practices

WEB_ISEXPANDED VALUE

COMMENTS

true false

Remove or rename AppViewer.war Remove or rename AppViewer directory.
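The table's resolution can be scripted. The sketch below is a hypothetical helper (the function and backup names are invented here) that moves the conflicting artifact aside based on the WEB_ISEXPANDED value.

```shell
#!/bin/sh
# Hypothetical sketch: move the conflicting AppViewer artifact aside.
# $1 = directory holding AppViewer.war / AppViewer, $2 = WEB_ISEXPANDED value
fix_appviewer_conflict() {
  dir="$1"; is_expanded="$2"
  if [ "$is_expanded" = "true" ]; then
    # Expanded mode uses the AppViewer directory, so park the WAR file
    [ -f "$dir/AppViewer.war" ] && mv "$dir/AppViewer.war" "$dir/AppViewer.war.bak"
  else
    # Archive mode uses AppViewer.war, so park the directory
    [ -d "$dir/AppViewer" ] && mv "$dir/AppViewer" "$dir/AppViewer.bak"
  fi
  return 0
}
```

After parking the artifact, rerun initialSetup[.sh] to rebuild the environment.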

Implementing Secure Protocols (https/t3s)


Note: For customers using Oracle Utilities Application Framework V4.1 and above, the secure protocol can be enabled by specifying a HTTPS port using the configureEnv[.sh] utility and specifying a port number under WebLogic SSL Port Number.

Note: The instructions below are designed for Oracle WebLogic installations only. Additional steps are required in IBM WebSphere to enable secure transmission of data. Refer to the appropriate documentation for additional advice.

Note: Some of the instructions below recommend changes to individual configuration files. These manual changes may be reverted to the product defaults by executions of the initialSetup[.sh] utility. To retain the changes across invocations of the initialSetup[.sh] utility, it is recommended to use custom templates and/or configuration file user exits. Refer to the Server Administration or Configuration and Operations Guide for more details of implementing custom templates and/or configuration file user exits.

By default, all transmission of data between the various tiers of the product uses the http and/or t3 protocol. Whilst this default is sufficient for the vast majority of customers, some sites wish to implement the secure versions of these protocols. The reason for their use is typically to encrypt all transmission of data from the client to the server and within the server tiers themselves.

Note: Enabling https or t3s may result in higher resource usage due to the resource requirements to encrypt and decrypt data. The extent of the resource usage will vary from platform to platform. It is advised that customers compare performance between secure and non-secure protocols before committing to secure protocols.

To implement the more secure protocols requires a number of changes and additional facilities to be enabled.
The process below outlines the generic process for implementing the secure protocol:

Obtain a digital certificate for your organization from a trusted certificate authority, or generate a certificate using keytool. This is used for the encryption/decryption of data using the protocol.

Note: The certificate provided with the J2EE Web Application Server installation is to be used for demonstration purposes only. It is highly recommended that an alternative certificate be used for production environments.
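For non-production testing, a self-signed certificate can be generated with the JDK keytool. The alias, keystore name, passwords and distinguished name below are placeholders, not product requirements.

```shell
#!/bin/sh
# keytool ships with the JDK; skip gracefully if no JDK is on the PATH.
command -v keytool >/dev/null 2>&1 || { echo "keytool not found; skipping"; exit 0; }

# Generate a 2048-bit RSA self-signed certificate in a new keystore.
# All values here (alias, store name, passwords, DN, validity) are examples.
keytool -genkeypair \
  -alias ouafdemo \
  -keyalg RSA -keysize 2048 \
  -validity 365 \
  -dname "CN=myhost.example.com, OU=IT, O=Example, C=US" \
  -keystore identity.jks \
  -storepass changeit123 -keypass changeit123
```

For production, replace this with a certificate signed by a trusted certificate authority, as noted above.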

Note: The t3 protocol is only used for sites that have separated the Web Application and Business Application tiers using the Oracle WebLogic platform on selected versions of the Oracle Utilities Application Framework. The iiop protocol is used for the same scenario but for IBM WebSphere platforms.


Configure the J2EE Web Application Server SSL support to use the certificate as outlined in the documentation sites listed below:
TABLE 2 J2EE SSL CONFIGURATION

WEB APPLICATION SERVER    REFERENCE
Oracle WebLogic 10 MP2    http://download.oracle.com/docs/cd/E13222_01/wls/docs100/secmanage/ssl.html
Oracle WebLogic 10.3.x    http://download.oracle.com/docs/cd/E12840_01/wls/docs103/secmanage/ssl.html
Oracle WebLogic 10.3.3    http://download.oracle.com/docs/cd/E14571_01/web.1111/e13707/ssl.htm
IBM WebSphere 6.1         http://publib.boulder.ibm.com/infocenter/wasinfo/v6r1/index.jsp
IBM WebSphere 7.x         http://publib.boulder.ibm.com/infocenter/wasinfo/v7r0/index.jsp

Enable the HTTPS port on your environment using the console provided with your J2EE Web Application Server. Remember to reference the certificate you processed in the previous step.

Note: For customers using Oracle WebLogic on Oracle Utilities Application Framework V4.1 and above, the WebLogic SSL Port Number setting will enable this facility without the need for the console.

Note: If changes are made via the console then, to retain the changes across upgrades and service packs, it is recommended to use custom templates or user exits. Refer to the Server Administration or Configuration and Operations Guide for more details of implementing custom templates. For Oracle WebLogic customers the config.xml templates may require changes.

Examine the $SPLEBASE/etc/conf directory (or %SPLEBASE%\etc\conf on Windows), unless otherwise indicated, for configuration files that use the protocol:
TABLE 3 SSL CONFIGURATION FILES

CONFIGURATION FILE    CHANGES
spl.properties        Change references to the t3 protocol to t3s, if present. Change references to the http protocol to https with the SSL port replacing the HTTP port.
web.xml               Change references to the http protocol to https with the SSL port replacing the HTTP port.
web.xml.XAIApp        Change references to the http protocol to https with the SSL port replacing the HTTP port.
ejb-jar.xml           Change references to the http protocol to https with the SSL port replacing the HTTP port. This file is located under $SPLEBASE/splapp/businessapp/config/META-INF (or %SPLEBASE%\splapp\businessapp\config\META-INF on Windows).

Note: If these files are changed manually they may revert to the product template versions across service packs and upgrades. To retain changes across service packs and upgrades it is advised to use custom templates and/or user exits. Refer to the Server Administration or Configuration and Operations Guide for more details.
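The protocol edits in Table 3 are simple textual substitutions. The sketch below shows the idea on a throwaway file with an invented property name and example ports (6500/6501); the real property names and ports vary by product and version, and as the note says, permanent changes belong in custom templates or user exits.

```shell
#!/bin/sh
# Demonstrate the http -> https / port substitution on a scratch file.
tmpconf=$(mktemp)
cat > "$tmpconf" <<'EOF'
example.serverURL=http://myhost:6500/spl/XAIApp/xaiserver
EOF
# Switch the protocol and replace the HTTP port with the SSL port
sed -i.bak -e 's|http://|https://|g' -e 's|:6500|:6501|g' "$tmpconf"
cat "$tmpconf"
```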

Note: For Oracle WebLogic customers, refer to the section Configuring Identity and Trust for the additional steps.


Shutdown the J2EE Web Application Server to prepare to reflect the changes.

Run the initialSetup[.sh] -w command to reflect the changes into the server files.

Restart the J2EE Web Application Server.

Ensure that any Feature Configuration options that use the HTTP protocol as part of their values are also converted to HTTPS and the appropriate port number. Use the Admin > F > Feature Configuration menu option in the product browser to check each of them. The Features will vary from product to product and version to version.

Ensure that any XAI JNDI Server provider URLs that use the http/t3 protocol as part of their values are also converted to https/t3s and the appropriate port number. Use the Admin > X > XAI JNDI Server menu option in the product browser to maintain the JNDI server.

Any customization that refers to the HTTP protocol, such as custom algorithms or service scripts, must also be converted from HTTP to HTTPS.

For customers using the Multi-Purpose Listener (MPL), the use of the secure protocol requires altering the $JAVA_HOME/jre/lib/security/java.security file (or %JAVA_HOME%\jre\lib\security\java.security on Windows) to enable SSL support. Modify the WLPORT entry in $SPLEBASE/splapp/mpl/MPLParamaterInfo.xml (or %SPLEBASE%\splapp\mpl\MPLParamaterInfo.xml on Windows) to use the SSL Port.
TABLE 4 JAVA SSL CONFIGURATION

VENDOR             CHANGES
Oracle WebLogic    Refer to http://download.oracle.com/javase/6/docs/technotes/guides/security/jsse/JSSERefGuide.html
IBM WebSphere      ssl.SocketFactory.provider=com.ibm.jsse2.SSLSocketFactoryImpl
                   ssl.ServerSocketFactory.provider=com.ibm.jsse2.SSLServerSocketFactoryImpl
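The WLPORT change mentioned above is likewise a one-line substitution. The XML fragment below is invented for illustration; check the actual structure of your MPL parameter file before scripting against it.

```shell
#!/bin/sh
# Illustrative only: bump a WLPORT value from the HTTP port to the SSL port.
# The element layout here is hypothetical.
tmpxml=$(mktemp)
cat > "$tmpxml" <<'EOF'
<parameter name="WLPORT" value="6500"/>
EOF
sed -i.bak 's|\(name="WLPORT" value="\)6500"|\16501"|' "$tmpxml"
cat "$tmpxml"
```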

General Best Practices


This section outlines some general practices that have been successfully implemented at various product sites.

Limiting production Access


One of the guiding principles at successful sites is that production access is restricted to the processing necessary to run the business. This means that other non-mainstream work, such as ad-hoc queries, is either very limited or NOT performed on production at all. This may sound logical, but a few sites have allowed access to production from inappropriate sources, which has had an adverse impact on performance. For example, it is not appropriate to allow people access to the production database through ad-hoc query tools (such as DB2 Control Center, SQL Developer, SQL*Plus etc). The freestyle nature of these tools can allow a single user to wreak havoc on performance with a single inefficient SQL statement. The database is not optimized for such unexpected traffic. Removal of this potentially inefficient access can typically improve performance.

Regular Collection and Reporting of Performance Metrics


One of the major practices that successful customers perform is the regular collection of performance statistics, analysis of those statistics and reporting of pertinent information to relevant parties within the organization as well as Oracle. Collection of such information can help identify bottlenecks and badly performing transactions, as well as help understand how the product is being used at your site. The statistics offer proof of both good and bad performance and typically allow sites to gauge the extent of any issue. The product contains a number of collection points in the architecture that are useful for real time and offline collection of performance related data. Information on the collection points is documented in the Performance Troubleshooting Guides whitepaper series. Using the guides, decide which statistics are important to the various stakeholders at your site, the frequency of collection and the format of any output to be provided. Use your site's Service Level Agreement (SLA), if it exists, for guidance on what to report.

Respecting Record Ownership


In Oracle Utilities Application Framework V2.x and above, the concept of ownership of records was introduced. A data element was added to indicate the owner of the object and is used to protect key data supplied with the product from alteration or deletion. It is used by the online system to prevent online users accidentally causing critical data failures. The owner is also used by the upgrade tools to protect the data from deletion. The ownership of the record determines what you can do with that record:

Framework - If the record is owned by Framework then implementation teams cannot alter or delete the record from the database, as it is deemed critical to the running of the Framework. This is usually meta-data deemed important by the Framework team. For example, the user SYSUSER is owned by the Framework.

Product - If the record is owned by the product (denoted by the product name or Base) then some changes are permitted but deletion is not, as the record is necessary for the operation of the product. The amount of change permitted will vary according to the object definition.


Customer Modification - If the record is owned by Customer Modification then the implementation has added the record. The implementation can change and delete the record (if it is allowed by the business rules).

Basically, you can only delete records that are owned by Customer Modification. It is possible to alter or delete records at the database level, if permitted by database permissions, but doing so will produce unexpected results, so respect the ownership of the records.

Backup of Logs
By default, the product removes existing log files from $SPLSYSTEMLOGS (or %SPLSYSTEMLOGS% on Windows platforms) upon restart. This is the default behavior of the product but may not be desirable for effective analysis as the logs disappear. To override this behavior the following needs to be done:

Create a directory to house the log files. Most sites create a common directory for all environments on a machine. The size allocation of that directory will depend on how long you wish to retain the log files. It is generally recommended that logs be retained for post analysis and then archived (according to site standards) after processing to keep this directory relevant. Typically customers create a subdirectory under <SPLAPP> to hold the files.

Set the SPLBCKLOGDIR environment variable in the .profile (for all environments) or $SPLEBASE/scripts/cmenv.sh (for individual environments) to the location you specified in the first step. For Windows platforms the environment variable can be set in your Windows profile or using %SPLEBASE%\scripts\cmenv.cmd.

Logs will be backed up to the location specified in the format:

<datetime>.<environment>.<filename>

where <datetime> is the date and time of the restart, <environment> is the id of the environment (taken from the SPLENVIRON environment variable) and <filename> is the original filename of the log.
Once the logs have been saved you must use log retention principles to manage the logs under SPLBCKLOGDIR to meet your site's standards. Most sites archive the logs to tape or simply compress them after post processing the log files (see Post Process Logs for more details on post processing).
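The settings above can be sketched as a few shell lines. The directory path is an example only, and the datetime format is an assumption used to illustrate the <datetime>.<environment>.<filename> pattern described.

```shell
#!/bin/sh
# Example cmenv.sh / .profile addition (path is site-specific):
export SPLBCKLOGDIR=/spl/splapp/logbackup

# Illustration of the backup name format <datetime>.<environment>.<filename>.
# The timestamp layout here is an assumption for demonstration purposes.
datetime=$(date +%Y%m%d%H%M%S)
environment="${SPLENVIRON:-DEMO}"   # normally set by splenviron.sh; DEMO is a placeholder
filename="spl_web.log"
backup_name="${datetime}.${environment}.${filename}"
echo "$backup_name"
```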

Post Process Logs


The logs written by the various components of the product provide valuable performance and diagnostic information. Some sites have designed and developed methods to post process those logs to extract important information and then report on it to relevant parties. If the logs are retained by your site (see Backup of Logs for details on this process), then consider post processing the logs on a regular basis before they are archived or deleted permanently. One approach is to extract the information from the logs and load the extracted data into an analysis repository for regular and trend reporting. The diagram below illustrates the process.


Figure 2 Post Processing Logs

Details of the logs written by the product are documented in the Performance Troubleshooting Guides. Use these guides to determine what data to extract from the logs for post processing.
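As a concrete (and deliberately simplified) sketch of the extract step, the snippet below pulls service names and response times out of a made-up log format into CSV for loading into a repository. Real product log formats differ, so treat the field positions and names as assumptions.

```shell
#!/bin/sh
# Extract date, service and response time (ms) from a hypothetical log format.
tmplog=$(mktemp)
cat > "$tmplog" <<'EOF'
2011-06-01 10:00:01 INFO service=CILCUSM time=120ms
2011-06-01 10:00:05 INFO service=CILTQEP time=450ms
EOF
# Field 4 holds service=<name>, field 5 holds time=<n>ms in this invented layout
result=$(awk '{svc=$4; sub(/^service=/,"",svc);
               tm=$5; sub(/^time=/,"",tm); sub(/ms$/,"",tm);
               print $1 "," svc "," tm}' "$tmplog")
echo "$result"
```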

Check Logs For Errors


One of the most important tasks for a site is to regularly track errors output to the logs. Whenever an error occurs in the product, an error record is written to the appropriate log for analysis. Some sites regularly check these logs for errors and, using the information in the log, address the error condition.

Figure 3 Filtering Logs

Viewing and checking for errors on a regular basis can detect trends and common problems and quickly reduce the number of errors that may occur. The Performance Troubleshooting Guide outlines the logs and the error conditions contained within those logs.
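A minimal filtering sketch in the spirit of the figure: count error lines per day so trends stand out. The log layout is invented; adapt the match to the error formats documented in the Performance Troubleshooting Guide.

```shell
#!/bin/sh
# Count ERROR lines per day from a hypothetical log.
tmplog=$(mktemp)
cat > "$tmplog" <<'EOF'
2011-06-01 10:00:01 ERROR Connection refused
2011-06-01 11:30:00 INFO  Batch complete
2011-06-02 09:15:22 ERROR Timeout waiting for resource
EOF
# Filter error lines, keep the date column, then count per day
grep ' ERROR ' "$tmplog" | awk '{print $1}' | sort | uniq -c
```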

Optimize Operating System Settings


One of the most important configuration settings for the product is the operating system itself. The Installation Guide provided with your product highlights the fact that the operating system parameters MUST be set to optimal values for the product to perform optimally. Some sites have experienced large improvements in performance by heeding this advice. Sites that have decided to ignore this advice have experienced bad performance until the settings were corrected. Typically, the optimization of the operating system is performed during the implementation and uses the following principles:


The value of an individual operating system setting is the maximum value required by any product on that machine. For example, if Oracle or DB2 is installed on a machine, the values recommended for those products are typically used. Settings chosen this way are usually sufficient for the other products on that machine.

If the machine is dedicated to a particular product or tier, then refer to the documentation in the installation guide and the particular vendor's site for further advice on setting up the operating system in an optimal state.

Optimize connection pools


One of the settings that will affect performance is the size of the connection pools at each layer in the architecture. Insufficient pool sizes can cause unnecessary transaction queues and therefore unnecessary delays. Conversely, setting the pool sizes too high can cause higher than usual resource usage on a machine, also causing adverse performance. So a balance needs to be struck. During the implementation the size of the connection pools is determined and configured (with relevant growth tolerances) depending on the usage patterns and expected peak/normal traffic levels. The goal, typically, is to have enough connections available at normal traffic levels to minimize queuing and also have the right tolerances to cater for any expected peak periods. Therefore, it is recommended to:

Set the number of initial connections to the normal number of connections expected. Remember this is not the number of users connecting but the expected number of concurrent connections under normal load.

Set the tolerances for pool growth (usually a maximum pool size and a connection increment) to the peak load expected at any time. This tolerance will have to be tracked to determine the optimal level. Do not be tempted to set this to a very large value as memory and network bandwidth calculations are usually dependent on the values specified and wastage of resources needs to be minimized.

The product has up to three connection pools to configure:

Client connections and Business Server connections - These are the number of active connections supported on the Web Application Server from the client machines. Remember that in an OLTP product (such as this product) the number of connections allocated is always less than the number of users on the system. It needs to be sufficient to cater for the number of actively running transactions at any given point in time. In Oracle Utilities Application Framework V2.2 and above, it is possible to separate the Web Application Server and Business Application Server. If this configuration is used then it is recommended that the Business Application Server connection pools be set to the same values as the Web Application Server connection pools. Refer to Configuring the Client Thread Pool Size for more information about pool sizing.


Note: The Client connections and Business Server connections are managed within the J2EE Web Application Server software.

Database connections - These are the number of pooled connections to the database. The Framework holds these connections open so that the overhead of opening and closing connections is minimized. For Version 2.x of the product, the number of connections allocated is dictated in each individual web application's hibernate.properties file using the c3p0 connection pool. (In Oracle Utilities Application Framework V4.0 the connection pooling is now handled by the Universal Connection Pool (UCP).)
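For V2.x, the c3p0 pool entries in hibernate.properties look broadly like the fragment below. The values are illustrative only, chosen to mirror the sizing advice above (initial size for normal load, maximum and increment for peak tolerance); consult the Server Administration Guide for the settings actually supported by your version.

```properties
# Illustrative c3p0 pool sizing (values are examples, not recommendations).
# min_size: normal concurrent connections expected under typical load
hibernate.c3p0.min_size=15
# max_size: upper bound catering for expected peaks
hibernate.c3p0.max_size=30
# acquire_increment: growth step when the pool is exhausted
hibernate.c3p0.acquire_increment=5
# timeout: idle connection timeout in seconds
hibernate.c3p0.timeout=300
```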

The figure below illustrates the connection pools available for each version of the Oracle Utilities Application Framework:

[Figure: In V2.x, clients connect via Client Connections to the Web Application Server, which connects to the Database Server via Database Connections (Hibernate/c3p0). In V2.2 and above, clients connect via Client Connections to the Web Application Server, which connects via Business Server Connections to the Business Application Server, which in turn connects to the Database Server via Database Connections (Hibernate/c3p0/UCP).]

Figure 4 Connection Pools by version of the Oracle Utilities Application Framework

Refer to the Server Administration Guides (also known as Operations and Configuration Guides) provided with your product for advice on the configuration and monitoring of the connection pools.


Read the available manuals


Note: Due to the ISV licensing of Web Application Servers, there may not be as much detail for some platforms as for others. Refer to the vendor's site for more detailed information.

The Oracle Utilities Application Framework product includes a set of documentation that should be downloaded with the software and read as part of the implementation and support of the product. The following technical documentation is available with the software distribution:

Installation Guide - Installation documentation for the base product including supported platforms and required patches.

Server Administration Guide/Operations And Configuration Guide - Documentation on how to configure and operate the server components of the product.

Developer documentation - Detailed documentation on the customization aspects of the implementation including standards for implementations. This includes:

Application Logs - List of logs produced by the development and deployment process.

COBOL Programming Standards - List of naming conventions and programming advice used for COBOL modules including algorithms, Maintenance Objects etc. Note: COBOL documentation only applies to products that have support for COBOL based customizations.

Java Programming Standards - List of naming conventions and programming advice used for java modules including algorithms, Maintenance Objects etc.

Java Annotations - Brief overview of the product annotation classes available to the java developer.

Public API - Overview of the API available to the java programmer.

SQL Programming Standards - Documentation of the SQL standards used in the product.

HSQL Programming Standards - Documentation of the Hibernate SQL standards used in the product.

User Interface Design Standards - Documentation about the User Interface standards used by the product.

Database Design Standards - Documentation of the database standards employed in the product including naming conventions for tables and columns and layout advice.

System Table Guide - Documentation of the meta data tables used in the development process.

Utilities - Documentation of the other development utilities used by the SDK.


Development Overview - An introduction to the development process and internals of the product.

Packaging Utilities - Documentation of the tools provided to package custom builds.

Key Generation - Overview of the routines and tables used for generation of random keys in the product.

Application Workbench Overview - An overview of the Application Workbench component of the SDK.

User Guide - A developer's cookbook and user's guide to the SDK.

Utilities Documentation - Detailed guides to the various tools supplied with the product including:

Background Processing - Details of all the background processes available with the product.

Reports - Details of the reporting interface available with the product including installation of the algorithm and configuration of the reporting interface.

CTI/IVR Integration - An overview of the installation, capabilities and configuration of the CTI/IVR integration components delivered with the product.

Framework/System Wide Standards - An overview of the various UI standards employed by the product.

Application Security - An overview of the authorization security model used in the product including guidelines for configuration.

User Interface Tools - An overview of the meta data tools available for the user interface including menus, navigation keys etc.

Zone Configuration - An overview of how to configure the zones and portals supplied with the product.

Database Tools - An overview of the meta data tools available for maintenance object, table and field definition including auditing.

Algorithms - An overview of all the algorithms supplied with the product.

Scripting - Details of the Business Process Scripting engine supplied with the product including configuration.

Application Viewer - Overview of the maintenance and operation of the Data Dictionary and code view supplied with the product.

XAI - Detailed overview and configuration of the Web Services/XML Application Integration component of the product.


LDAP Import - Detailed overview of the LDAP import function supported by the product to synchronize LDAP information with the authorization information stored in the product.

Batch Operations and Configuration Guide - Details of the configuration settings and common operations for the batch component of product.

Technical Documentation Set: Whitepapers Available


Apart from the product based documentation, there are a number of whitepapers that provide specialist and supplemental information for use during and after implementation. The table below lists the currently available technical documentation as well as the Knowledge Base Id within My Oracle Support where the documentation resides:
TABLE 5 TECHNICAL WHITE PAPERS

DOC ID     DOCUMENT TITLE / CONTENTS

559880.1   ConfigLab Design Guidelines
           A whitepaper outlining how to design, setup and monitor a ConfigLab solution for an implementation. This is a companion document to the Software Configuration Management Series.

560382.1   Performance Troubleshooting Guideline Series
           A series of whitepapers outlining the tracking points available in the architecture for performance and a troubleshooting guide based upon common problems.

560401.1   Software Configuration Management Series
           This series of documents outlines a set of generic processes (that can be used as part of the site processes) for managing code and data changes. This series includes documents that cover concepts, change management, defect management, release management, version management, distribution of code and data, management of environments and auditing configuration.

773473.1   Oracle Utilities Application Framework Security Overview
           A whitepaper outlining the security facilities in the Oracle Utilities Application Framework.

774783.1   LDAP Integration for Oracle Utilities Application Framework based products
           A whitepaper outlining the common process for integrating an external LDAP based security repository with the framework.

789060.1   Oracle Utilities Application Framework Integration Overview
           A whitepaper outlining all the various common integration techniques used with the product (with case studies).

799912.1   Single Sign On Integration for Oracle Utilities Application Framework based products
           A whitepaper outlining a generic process for integrating an SSO product with the Oracle Utilities Application Framework.

807068.1   Oracle Utilities Application Framework Architecture Guidelines
           A whitepaper outlining the different variations of architecture that can be considered. Each variation will include advice on configuration and other considerations.

836362.1   Batch Best Practices for Oracle Utilities Application Framework based products
           A whitepaper outlining the common and best practices implemented by sites all over the world relating to batch.

856854.1   Technical Best Practices V1 Addendum
           Addendum to Technical Best Practices for Oracle Utilities Application Framework Based Products containing only V1.x specific advice.

942074.1   XAI Best Practices
           This whitepaper outlines the common integration tasks and best practices for the Web Services Integration provided by the Oracle Utilities Application Framework.

970785.1   Oracle Identity Manager Integration Overview
           This whitepaper outlines the principles of the prebuilt integration between Oracle Utilities Application Framework Based Products and Oracle Identity Manager used to provision user and user group security information.

1068958.1  Production Environment Configuration Guidelines
           This whitepaper outlines common production level settings for Oracle Utilities Application Framework products.

1177265.1  What's New in Oracle Utilities Application Framework V4?
           This whitepaper outlines the changes since the V2.2 release of Oracle Utilities Application Framework.

1290700.1  Database Vault Integration
           This whitepaper outlines the Database Vault integration available with Oracle Utilities Application Framework V4.1 and above.

1299732.1  BI Publisher Integration Guidelines
           This whitepaper outlines some guidelines for integration available with Oracle BI Publisher for reporting.

1308161.1  Oracle SOA Suite Integration
           This whitepaper outlines the integration between Oracle SOA Suite and the Oracle Utilities Application Framework.

1308165.1  MPL Best Practices
           Addendum to the XAI Best Practices focusing on the Multi-Purpose Listener.

1308181.1  Oracle WebLogic JMS Integration
           This whitepaper outlines the integration between Oracle WebLogic JMS and the Oracle Utilities Application Framework for Oracle Utilities Application Framework V4.1 and above. These features are also available for Oracle Utilities Application Framework V2.2 via patches.

This documentation is updated regularly with each release of the product with new and improved information and advice. Announcements of updates to whitepapers may be tracked via http://blogs.oracle.com/theshortenspot or http://www.twitter.com/theshortenspot.

Implementing Industry Processes


Implementing a product such as this one can mean that an IT organization has to adopt new processes to cater for the new product in its portfolio of applications. This is not unique to this product. Any new product that is implemented into an IT portfolio not only requires business process changes but also IT process changes. In the IT industry at the moment, most software application vendors are realizing that implementing a product is not just simply configuration; there is some change management that needs to be performed with the IT group. Luckily the industry has started to adopt a standard framework that helps define an IT business and the processes necessary to run that business. This framework is called the IT Infrastructure Library.


IT Infrastructure Library (ITIL) is a set of consistent and comprehensive documentation of best practice for IT Service Management. Used by many hundreds of organizations around the world, a whole ITIL philosophy has grown up around the guidance contained within the ITIL books and the supporting professional qualification scheme.

ITIL consists of a series of books giving guidance on the provision of quality IT services, and on the accommodation and environmental facilities needed to support IT. ITIL has been developed in recognition of organizations' growing dependency on IT and embodies best practices for IT Service Management.

The ethos behind the development of ITIL is the recognition that organizations are becoming increasingly dependent on IT in order to satisfy their corporate aims and meet their business needs. This leads to an increased requirement for high quality IT services. ITIL provides the foundation for quality IT Service Management. The widespread adoption of the ITIL guidance has encouraged organizations worldwide, both commercial and non-proprietary, to develop supporting products as part of a shared "ITIL Philosophy".

For more information about ITIL refer to http://www.oracle.com/itil

Using Automated Test Tools


Some sites around the world use third party testing tools for performance and regression testing. While the product is open in terms of the standards it uses, not all test tools are able to simulate the exact expected traffic. In choosing an automated testing tool to use with the product, the following must be supported:

Support for HTTP - The automated test tool must be able to trap HTTP traffic, as this is the traffic used by the product. If the tool supports HTTPS, and you intend to use the HTTPS protocol, be careful as support for HTTPS varies greatly between testing tools.

JSP Support - The product uses JSP coding to perform most functions. A tool that can leverage this technology will enable screens to be recognized.

Support simulation of IE caching - The product client utilizes the Internet Explorer cache to locally hold an image of the screen for performance reasons. The automated test tool needs to be able to simulate this behavior otherwise results will not reflect reality.

Support for pop up screens - The product utilizes pop up windows for some lists and searches as well as confirmation and error messages. The automated test tool needs to be able to support these to adequately simulate product transactions.

Valid calls - Ensure that the test tool simulates valid calls to the product. A valid call is a call that the browser user interface issues to the web server or a call that the XAI component will accept. An invalid call that is sent by a test tool to the product may result in unpredictable results. Check EVERY call is valid (try them with the browser user interface to verify the call) and fix any invalid calls.

The following products have been used with the product at customer sites:


Oracle Utilities Application Framework - Technical Best Practices

- Oracle Application Test Suite (http://www.oracle.com/enterprise_manager/application-quality-solutions.html)
- Borland Silk Performer (http://www.borland.com/us/products/silk/silkperformer/index.html)
- Mercury Load Runner / Performance Center (http://www.mercury.com/us/products/performance-center/loadrunner/)
- IBM Rational Performance Tester (http://www-304.ibm.com/jct03002c/software/awdtools/tester/performance/index.html)

Custom Environment Variables or JAR files


Implementations of the product sometimes use third-party Java classes or third-party tools to perform specialist functions. Sometimes these tools require additional configuration settings that can be integrated into the infrastructure provided by the product. For example, if a third-party jar file is to be called by the product, you will need to add it to the CLASSPATH to ensure it is picked up by the runtime. There is a feature that allows custom environment variable settings and other commands to be run after the splenviron.sh script (or splenviron.cmd on Windows) has been executed. To do this, create a cmenv.sh script (or cmenv.cmd on Windows) in the $SPLEBASE/scripts directory (%SPLEBASE%\scripts on Windows) with the commands you want to execute. For example, if an implementation uses AXIS2 jar files to call web services, place the AXIS2 jar files in a central location (e.g. /axis/lib in this example) and create the cmenv.sh/cmenv.cmd script with the lines:

export CLASSPATH=/axis/lib/axis.jar:$CLASSPATH

or, on Windows:

set CLASSPATH=c:\axis\lib\axis.jar;%CLASSPATH%

When the splenviron.sh script (or splenviron.cmd on Windows) runs, it looks in the scripts directory for the cmenv.sh script (or cmenv.cmd on Windows) and executes it if present.
In addition, it is possible to do this WITHOUT adding the cmenv.sh script (or cmenv.cmd on Windows): set the CMENV environment variable to the location of a script containing the above commands BEFORE running any command, including the splenviron.sh script (or splenviron.cmd on Windows). The CMENV facility is for global changes, as it applies across all environments, whereas the cmenv.sh/cmenv.cmd solution is per environment. You can use both; CMENV is run first, then cmenv.sh/cmenv.cmd. Note: It is possible, using this technique, to manipulate any environment variable used by the product, but this is not recommended.
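As an illustration, the following sketch simulates what splenviron.sh effectively does when it finds a cmenv.sh script. The SPLEBASE default, the /axis/lib location and the jar name are assumptions carried over from the example above; substitute your site's real paths.

```shell
# Illustrative sketch only: simulate splenviron.sh picking up a cmenv.sh
# script. SPLEBASE and the AXIS2 jar location are assumed values taken
# from the example above; substitute your site's real paths.
SPLEBASE="${SPLEBASE:-/tmp/spl_demo}"
mkdir -p "$SPLEBASE/scripts"

# Create the per-environment cmenv.sh with the custom CLASSPATH setting
cat > "$SPLEBASE/scripts/cmenv.sh" <<'EOF'
# cmenv.sh - custom commands run after the standard environment is set
export CLASSPATH=/axis/lib/axis.jar:$CLASSPATH
EOF

# splenviron.sh behaves roughly like this: if the script exists, it is
# executed in the current shell so the exported variables take effect
if [ -f "$SPLEBASE/scripts/cmenv.sh" ]; then
  . "$SPLEBASE/scripts/cmenv.sh"
fi

echo "$CLASSPATH"
```

The same effect can be achieved globally by pointing the CMENV environment variable at a script with equivalent contents.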


Help and AppViewer can be used standalone


The Help and AppViewer components may be used in standalone mode (a.k.a. offline mode). This can be handy for developers, designers and architects who wish to access up-to-date information without needing to connect to a live copy of the product at their site. Under the splapp/applications directory on V2.x/V4.x, or cisdomain/applications on V1.x, there are two directories named help and appViewer. These contain the online help and AppViewer application and data. Copy these directories to your desired target machine (such as a shared drive, web server or your laptop). Note: On some platforms the directories are contained within WAR files named help.war and appViewer.war; these will need to be decompressed on the target platform using an appropriate utility such as jar from the Java SDK or 7-Zip (or similar). To operate the applications in standalone mode, open the following files in your web browser:

- appViewer.html - Application Viewer startup file. It is also possible to reconfigure the behavior of the standalone copy by altering the config.xml file located in the config subdirectory of the AppViewer.
- SPLHelp.html - Help file located in the language subdirectory (ENG = English etc.).

Note: The AppViewer and Help applications are only supported in the browsers supported by the product.

Re-Register only when necessary


Note: Not all Oracle Utilities Application Framework products supply the ConfigLab or Archiving functionality. As part of the ConfigLab definition process it is necessary to register the environments to be used by ConfigLab. The registration process creates remote synonyms (the database technology used to achieve this varies by database type) and an environment reference in the database. Environments must be re-registered after upgrades (product as well as customization upgrades) whenever the upgrade removes or adds a table or view, as this needs to be reflected in the synonyms. The registration process does not need to be executed if the product upgrade or customization upgrade does not add or remove any tables or views; skipping it saves time in the upgrade process. If there are no database changes to reflect, the re-register process simply removes all synonyms and rebuilds exactly the ones removed, which is a waste of time. In short, re-register only when there are database table or view additions or deletions.


Secure default userids


There are a number of default users (and associated default passwords) that are supplied with the installation of the product. It is recommended that the default users and their passwords be altered according to the site security standards. The table below lists the default users supplied with the product:
TABLE 6 COMMON SUPPLIED USER CREDENTIALS

SPLADM - Default Database Administrator account. Owner of the database; Database Administrators are the only valid users of this account. This account is created during the database creation process.

SPLREAD - Default Reporting account. This account is used by Archiving, ConfigLab and Reporting. Only available on Oracle database installations. This account is created during the database creation process.

splsys - UNIX administration account. Treat this user as you would treat root or an Administrator account.

SPLUSER - Default Application account. This account is used for all application database access. This account is created during the database creation process.

SYSUSER - Default initial framework user provided with the product. This user needs to be available to add other users, and needs to be defined to the Web Application Server on install. The password will reside in the repository defined in the Web Application Server (usually LDAP).

system - Default user for the Web Application Server console. This is for Oracle WebLogic implementations only.

WEB - Web Self Service default user. See SYSUSER.

XAI - Default XAI userid (some versions). See SYSUSER.

Note: Other userids are supplied by products used in conjunction with this product; refer to the documentation for those products for details of these users.

Consider different userids for different modes of access


Note: It is not possible to configure the product to use different database accounts for access. All modes of access share the relevant pool of database connections as a single database user (usually SPLUSER). In Oracle Utilities Application Framework version 4.0.1 and above, the actual end userid is available as the CLIENT_IDENTIFIER on Oracle database sessions.

By default, the application is configured to use SYSUSER, SPL or XAI to access the product for online, XAI and background processing respectively. This means any audit or staging records are associated with a common userid. Some implementations have created additional userids to use as a filter for reporting, traceability and auditing purposes. The following guidelines may be used in this area:


- Create a different userid for XAI transactions. This allows tracking of XAI within the architecture. It is also possible to assign each XAI transaction a different userid, as it is passed as part of the transaction, but most customers consider this overkill.
- Create a different userid for each background interface. This allows security and traceability to be tracked at a lower level.
- Create a generic userid for mainstream background processes. This allows tracking of online versus batch initiation of processes (especially To Do, Case and Customer Contact processing).

Note: Remember that any product user must be defined to the product as well as the authentication repository.
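Since all database access shares a single account, the CLIENT_IDENTIFIER attribute mentioned above is what distinguishes end users at the database level. The following sketch only generates a read-only monitoring query a DBA might run via sqlplus; SPLUSER is the shared application account from the table of default users, the output file location is an assumption, and connection details are deliberately omitted.

```shell
# Hedged sketch: on Oracle Utilities Application Framework 4.0.1+ the end
# user is exposed as CLIENT_IDENTIFIER on the Oracle session. This script
# only *generates* a read-only monitoring query; the file path is an
# illustrative assumption.
SQL_FILE="${SQL_FILE:-/tmp/sessions_by_user.sql}"

cat > "$SQL_FILE" <<'EOF'
-- Count active sessions per end user behind the shared SPLUSER account
SELECT client_identifier, COUNT(*) AS session_count
FROM   v$session
WHERE  username = 'SPLUSER'
GROUP  BY client_identifier
ORDER  BY session_count DESC;
EOF

echo "Generated $SQL_FILE"
```

A DBA would run the generated file from a suitably privileged sqlplus session; the query itself is read-only and safe to adapt.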

Don't double audit


The product has an auditing facility that is soft configured: the facility is enabled by configuring the auditing parameters (location of the audit data, audit rules etc.) against the meta-data definitions of the tables. This ensures that any online or XAI updates are audited according to those rules. Auditing is used to track online changes to critical entities. The financial component of the product already has a separate auditing facility, as all customers generally require it: any changes to financial information such as payments, adjustments and bills are registered in the Financial Transaction tables. Therefore enabling auditing on those entities is not required and constitutes double auditing (i.e. auditing information is stored in two places). While the impact of double auditing may be storage related, enabling auditing on bills, for example, can have a performance hit on online bills. Customers with large numbers of bill segments per bill (i.e. several hundred) have experienced negative performance impact during online billing when double auditing is enabled on financial entities. This does not affect batch performance, as auditing is not used in batch.

Use Identical Machines


The flexibility of the technology used by the product allows the ability to mix-and-match different hardware for a configuration. While this may be attractive and allow for some innovative solutions, it makes overall manageability and operations harder. Hence, it should be avoided. Having identical hardware allows for ease of stocking spare parts, better reproducibility of problems (both software and hardware), and reduces the per platform testing cost. This cost, in many cases, will surpass the savings from reusing existing disparate hardware.

Regularly restart machines


It is generally a good practice to restart servers periodically. This recovers slow memory leaks, temporary disk space build-up, or other hidden problems that may manifest themselves only when the server is up for such a long duration. This is a simple way to avoid unexpected or unexplained failures.


Most hardware vendors have recommendations on optimal time intervals to restart machines. Some vendors even "force" the issue for maintenance reasons. Check with your vendor for specifics for your platform.

Avoid using direct SQL to manipulate data


Note: Issuing SQL data manipulation language (DML) statements other than SELECT statements directly against base tables can cause data integrity to be compromised and can invalidate your product warranty. All data update access should be through the Maintenance Objects, which ensure data integrity is maintained.

Unless the outcome can be verified as correct, you should not use ANY direct SQL statement against the product database, as you may corrupt the data and prevent the product from operating correctly. All data maintenance and data access in the product is located in the Maintenance Objects. The Maintenance Objects validate ALL changes against your site's business rules and the rules built into the product. If you are using the objects to manipulate the data, integrity is guaranteed because:

- All validations, including business rules, calculations and referential integrity, are contained within the Maintenance Objects.
- The Maintenance Object performs a commit when all validations are successful. If any validation fails, the whole object is rolled back to a consistent state.
- In background processing, a commit is performed after a number of Maintenance Objects have been processed (known as the Commit Interval). At that point the last commit point is registered on the Batch Control for restarting purposes. If the background process fails between commit points, the database is rolled back to the last commit point.
- All access modes (online, XAI, background processing) from code supplied with the product use the Maintenance Objects for processing. This means that integrity is guaranteed across all modes. Any customizations (algorithms etc.) built with the Oracle Utilities SDK should also use the Maintenance Objects.

Using incorrect SQL may violate any of the validations and even make the system unusable. If you have to manipulate data within the product, use one or more of the following provided methods:

- The browser user interface.
- XML Application Integration (XAI).
- The Conversion Toolkit.
- The Software Development Kit.

Minimize Portal Zones not used


In the Oracle Utilities Application Framework, portals were introduced to allow sites to decide which zones should be displayed, and in what sequence, for different user groups. For performance reasons, it is recommended that you configure portal preferences to collapse zones that are not needed every time a portal is displayed. The system does not perform the processing necessary to build a collapsed zone until a user expands it, so configuring zones as initially collapsed improves response times. This is especially relevant for the To Do zones, which may take a while to build if the number of To Do records is excessive.

Routine Tasks for Operations


After the implementation of the product has been completed there is a common set of tasks that IT groups perform to maintain the system. The table below lists these tasks:
TABLE 7 ROUTINE TASKS

Perform Backups - Perform the backup of the database and file system using the site procedures and the tools designated for your site.

Post Process Logs - Check the log files for any error conditions that may need to be addressed. Refer to Post Process Logs and Check Logs For Errors for more details.

Process Performance Data - Collate and process the day's performance data to assess against any Service Level targets. Identify any badly performing transactions.

Perform Batch Schedule - Execute the batch schedule agreed for your site. This will include overnight, daily, hourly and ad hoc background processes.

Rebuild Statistics - DB2 and Oracle require the database statistics for the product schemas to be rebuilt on a regular basis so that SQL access is optimized. At DB2 sites, a rebind is also required to reflect the changes in the execution plans/packages.

File Cleanup - On a regular basis, the output files from the background processes and logs will need to be archived and removed to minimize disk space usage.

Archive Data Not Required - The Oracle Utilities Application Framework features an inbuilt archiving facility that can transfer transaction data no longer required for online processing to another environment or a file, or simply delete it. Refer to Archiving for more details.

Run Cleanup Batch Jobs - There are a number of background processes that remove staging records that have already been successfully processed. Refer to "Removal of Staging Records" for more details.

Note: The tasks listed above do not constitute a comprehensive list of what needs to be performed. During the implementation you will decide what additionally needs to be done for your site.
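The File Cleanup task above is commonly scripted. The following is a minimal sketch assuming a compress-to-archive-then-delete policy; the function name, retention period and example paths are illustrative assumptions, not product-supplied utilities, so adapt them to your site's standards.

```shell
# Minimal housekeeping sketch for the "File Cleanup" routine task.
# The retention period, archive layout and paths are assumptions --
# review against your site's retention standards before use.
archive_old_logs() {
  log_dir="$1"; archive_dir="$2"; retention_days="$3"
  mkdir -p "$archive_dir"
  # Compress each file older than the retention period into the archive
  # directory, then remove the original to reclaim disk space.
  find "$log_dir" -type f -mtime +"$retention_days" | while read -r f; do
    gzip -c "$f" > "$archive_dir/$(basename "$f").gz" && rm -f "$f"
  done
}

# Example invocation (illustrative paths):
# archive_old_logs /spl/PROD/logs/system /spl/archive/logs 30
```

Such a script would typically be run from the site scheduler alongside the other routine tasks in Table 7.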

Typical Business Day


One of the patterns experienced at sites is the notion of a common definition of a business day, typically defined during the implementation for planning purposes. It defines when the call center is at peak or off peak, when background processing can be performed, and when backups are performed during the business day. The figure below illustrates a simplified model of a typical customer business day:


Figure 5 Example Typical Business Day (timeline 0-24 hours showing Online, Batch, Backup and Monitoring activity: peak online roughly 8:00-16:00; daily/ad-hoc/hourly batch and a backup at end of peak; overnight batch and backup in the off-peak window; monitoring throughout)

Note: The above diagram is for illustrative purposes only and could vary for your site. Typically a business day contains the following elements:

- There is a peak online period where the majority of call center business is performed. Typically this falls within business hours, varying according to local custom.
- There is a call center off-peak period where the volume of call center traffic is greatly reduced compared to the peak period. In call centers that operate 24x7, this typically represents overnight and weekends, when the call center is reduced in size (usually a skeleton shift). Some sites do not operate in off-peak periods and rely on automated technology (e.g. IVR) to process transactions such as payments.
- Backups are performed either at the start or at the end of the peak period. The decision is based upon the risk of background processing failure and its impact on online processing. The product's background processes can be run at any time, but avoiding them during peak time maximizes the computing resources available for call center transactions. A backup at the end of the peak period is the most common pattern among product customers.
- Background processes are run at both peak and off-peak times. The majority of the background processing is performed at off-peak times to maximize the computing resources available for its successful completion. The background processing run during peak times is usually to check ongoing call center transactions for adherence to business rules and to process interface transactions ready for overnight processing.
- Monitoring is performed throughout both peak and off-peak times. The monitoring regime may use manual as well as automated tools and utilities to monitor compliance against agreed service levels. Any non-compliance is tracked and resolved.

The definition of the business day for your site is crucial for scheduling background processing and setting monitoring regimes appropriate to the expected traffic levels.

Login Id versus Userid


Note: This facility is available in Oracle Utilities Application Framework V4 and above only. In past releases of the Oracle Utilities Application Framework, the userid that could be used to log in was restricted to 8 characters in length. In Oracle Utilities Application Framework V4 and above, it is possible to use a user identifier of up to 256 characters in length.


In Oracle Utilities Application Framework V4 and above, the concept of a Login Id is supported. This attribute is used by the framework to authenticate the user. For backward compatibility, the 8-character userid field is still used internally for auditing purposes. Therefore both the Userid and the Login Id should be populated; they can hold the same or different values. The Login Id can be set manually, via Oracle Identity Manager, or set in a class extension to auto-generate a value.

Figure 6 Login Id

Hardware Architecture Best Practices


Note: There is a more detailed discussion of effective architectures in the Oracle Utilities Application Framework Architecture Guidelines whitepaper. Refer to that document for further advice.

The product can be run on various combinations of hardware-based architectures. When choosing the architecture best suited to a site, a number of key factors must be considered:

- Cost - When deciding a preferred architecture, the total cost of the machine(s) and infrastructure needs to be taken into consideration. This should include the ongoing costs of maintenance as well as power costs.
- IT Maintenance Effort - When deciding a preferred architecture, the manual or automated effort of maintaining the hardware in that architecture needs to be factored into the solution.
- Availability - One of the chief motivations for settling on a multi-machine architecture is the requirement to support high availability. When deciding a preferred architecture, the tolerance for unavailability and the cost of availability need to be factored into the solution.

Single Server Architecture

If the site is cost sensitive and/or the availability requirements allow it, then having all the architecture on a single machine is appropriate. This is known as the single server architecture. This configuration is popular with some sites because:

- The cost of the hardware can be minimal (or at least very cost effective).
- Maintenance costs can be minimized with the minimal hardware.
- Virtualization software (typically part of the operating system, or third-party virtualization software) can be used to partition the machine into virtual machines.

The one issue that makes this solution less than ideal is the risk of unavailability due to hardware failure. Customers that choose this solution typically address this shortcoming by buying a second


machine of similar size and using it for failover and disaster recovery as well as non-production. In essence, if the primary hardware fails, the backup machine assumes the responsibility for production until the hardware fault is resolved. In this case, additional effort is required to keep the secondary machine in synchronization with the primary. The diagram below illustrates the single server architecture:
Figure 7 Example Single Server Architecture (Browser Client connecting to the Web Application Server, Business Application Server and Database Server hosted on a single machine)

Simple Multi-tier Architecture

One of the variations on the single server architecture is the "simple multi-tier architecture". In this hardware architecture, the database server and Web Application Server/Business Application Server are separated on different machines. For product V1.x customers, you can also separate the Web Application and Business Application Servers. This is chosen by customers who want to optimize the hardware for the particular tier (settings and size of machine) and therefore separate the maintenance efforts for each server. For example, Database Administrators need only access the Database Server to perform their duties and set the operating system parameters optimized for the database. Unfortunately the solution can have a higher cost than the single server solution and still does not address the unavailability of any machine in the architecture. Customers that have used this model adopt a similar solution to the single server architecture (duplicate secondary machines at a secondary site) but also have the option of having both machines in the architecture being the same size and shifting the roles when availability is compromised. For example, if the database server fails, the Web Application Server can be configured to act as a combination of the Database Server and Web Application Server. The figure below illustrates the Simple Multi-Tier Architecture:


Figure 8 Variations of the Simple Multi-Tier Architecture (two variations: Browser Client connecting to a combined Web Application Server/Business Application Server and a separate Database Server; and Browser Client connecting to separate Web Application Server, Business Application Server and Database Server machines)

Machines in this architecture can be the same size or different sizes depending on the cost/benefits of the various variations. Typically customers use a smaller machine for the Web Application Server as compared with the database server.
Multiple Web Application Servers

To support higher availability for the product, some sites consider having multiple Web Application Servers. This allows online users to be spread across machines and, in the case of a failure, to be diverted to the machine that is available. To achieve this, the site must use a load balancer (see the "Load balancers" discussion later in this document). At the time of failover, the load balancer redirects traffic to the available server. This is made possible because the product is stateless. The Web Application Servers are either clustered or managed; refer to the discussion in the Clustering or Managed? section of this document for advice. This architecture is quite common, as it offers flexibility: one of the Web Application Servers can be dedicated to batch processing in non-business hours, making the architecture more cost effective. Typically the Web Application Server software is shut down to allow batch processing to use the full resources of that machine, while still allowing users (usually a small subset) to process online transactions. The only drawbacks with this solution are a potentially higher cost than a multi-tier solution and the potential impact of database unavailability. Customers that use this architecture overcome the potential unavailability of the database by either using a secondary site to act as the failover or using one of the Web Application Servers in a failover database server role. The latter is less common, as most customers find it more complex to configure, but it is possible with this architecture. The figure below illustrates the Multi-Application Server Architecture:


Figure 9 Example Multi-Application Server Architecture (Browser Client connecting through a Load Balancer to two Web Application Servers, which share a Business Application Server and Database Server)

Machines in this architecture can be the same size or different sizes depending on the cost/benefits of the various variations. Typically customers use a smaller machine for the Web Application Server as compared with the database server.
High Availability Architecture

The most hardware-intensive solution is one where all the tiers in the architecture have multiple machines for high availability and distribution of traffic. The solution can vary (number of machines etc.) but has the following common attributes:

- There is no single point of failure. There is redundancy at all levels of the architecture. This excludes redundancy in the network itself, though this is typically out of scope for most implementations.
- The number of servers will depend on segmentation of the traffic between call centers, non-call centers, interfaces and batch processing. It is possible to reuse existing servers or set up dedicated servers for different types of traffic.
- Availability can be managed with hardware-based solutions, software-based solutions or a valid combination of both.
- The number of users will dictate the number of machines to some extent. Experience has shown that a large number of users tend to be better served, from a performance and


availability point of view, by multiple machines. Refer to the "What is the number of Web Application instances do I need?" section for a discussion on this topic.
- The Web Application Servers are either clustered or managed. Refer to the discussion in the Clustering or Managed? section of this document for advice.
- Database clustering is typically handled by the clustering or grid support supplied with the database management system.

This solution represents the highest cost from both a hardware and a maintenance perspective. Historically, customers with large volumes of data or specific high availability requirements have used this solution successfully. The figure below illustrates the High Availability Architecture:

Figure 10 Example High Availability Server Architecture (Browser Client connecting through a Load Balancer to two Web Application Servers, then through a second Load Balancer to two Business Application Servers, backed by a Database Server/Cluster)

Failover Best Practices


Failover occurs when a server in your architecture becomes unavailable due to hardware or software failure. Immediately after the failure, the active components of the architecture route the transactions around the unavailable component to an alternative or secondary component (on another site) to maintain a level


of availability. This routing can be done automatically through the use of high availability software/hardware, or manually by operators. The Oracle Utilities product architecture supports failover at all tiers of the architecture, using either hardware- or software-based solutions. Failover solutions can vary, but a few principles have been adopted successfully by existing customers:

- Failover solutions that are automated are preferable to manual intervention. Depending on the hardware architecture used, the failover capability can be automated.
- Availability goals play a big part in the extent of a failover solution. Sites with high availability targets tend to favor more expensive, comprehensive hardware and software solutions. Sites with lower availability goals (or no goals) tend to use manual processes to handle failures.
- Failover is built into the software used by the products (though it may entail an additional license from the relevant vendor). For example, Web Application Server vendors have inbuilt failover capabilities, including load balancing, which is popular with customers.
- Hardware vendors have failover capabilities at the hardware or operating system level. In some cases, it is an option offered as part of the hardware. Sites use the hardware solution in combination with a software-based solution to offer protection at the hardware level. In this case, the hardware solution detects the failure of the hardware and works in conjunction with the software solution to route the traffic around the unavailable component.
- Failover is made easier to implement for the product because the Web Application is stateless. Users only need a connection to the server while they are actively sending or receiving data from it. While they are inputting data or talking on the phone, they are not consuming resources on the machine. For each transaction, the infrastructure routes the calls across the active components of the architecture.
- At the database level, the common failover facility used is that provided by the database vendor. For example, Oracle database customers typically implement RAC. Failover configuration at the database is the least used by existing sites, as the cost of the additional hardware is usually prohibitive (or at least not cost justifiable). Sites wanting both failover and disaster recovery, but unable to afford both, consider a solution that combines the two: the disaster recovery configuration is used as a failover for non-disasters.

For any failover solution to be effective, the site typically analyses all the potential areas of failure in its architecture and configures the hardware and software to cover each eventuality. In some cases, sites have chosen NOT to cover eventualities of extremely low probability. Using hardware Mean Time Between Failures (MTBF) values from hardware vendors can assist in this decision.

When designing a failover solution, the following considerations are important:
- Determine the availability goals for your site.


- Determine the inbuilt failover capabilities of the hardware and software that your site is using. This may reduce the cost of implementing a failover solution if one is already in place.
- List all the components that need to be covered by a failover solution. Review the list to ensure all aspects of "what can fail?" are covered.
- Design your failover solution with all the above information in mind, automating (within reason) where possible for your site. Ensure the solution is simple and reuses already available infrastructure to save costs.

Commonly, sites use the following failover techniques in the architecture:
TABLE 8 COMMONLY USED FAILOVER TECHNIQUES

Network - Load Balancer (hardware based for large numbers of users; software based for others). Consider redundant load balancers for "no single point of failure" requirements.

Web Application Server / Business Application Server - Use inbuilt clustering/failover facilities unless the load balancer is already providing this. Consider hardware solutions for batch or interface servers.

Database Server - Use the inbuilt failover facilities of the database unless a hardware solution is more cost effective.

Online and Batch tracing and Support Utilities


The Oracle Utilities Application Framework provides a set of utilities to aid in capturing information for support. Refer to My Oracle Support Doc ID 1206793.1 (Master Note for Oracle Utilities Framework Products - Online and Batch tracing and Support Utilities) for details and training on using these utilities to provide critical information to help expedite support requests.

General Troubleshooting Techniques


Whilst the troubleshooting features of the product are documented in detail in the online help, Performance Troubleshooting Guides and other manuals, there are a number of techniques and guidelines that can be used to help identify problems:

Check the logs in the right order - The log files are usually the best place to look for errors, as any error is automatically logged to them by the product. The most efficient method is to examine the logs from the bottom of the architecture up: if the error appears in the lower tiers of the architecture, that is more likely where the error occurred. Typically you look for records of type ERROR in the following logs (located in $SPLEBASE/logs/system):
TABLE 9 COMMON LOG FILES

(The theory is that the first place the error occurs is the most likely candidate tier.)

spl_service.log - Business Application Server log. In some versions of the Oracle Utilities Application Framework this log does not exist, as it is included in spl_web.log. Errors here can be service or database related.

spl_xai.log - Web Services Integration, also known as XML Application Integration (XAI), log. This log file is used exclusively by the XAI servlet. More detail can exist in the xai.trc file if tracing is enabled.

spl_web.log - Web Application Server log. This is typically where errors from the browser interface are logged. If errors are repeated from spl_service.log then the issue is not in the Web Application Server software but in the Business Application Server or below.

Note: There are other logs related to the J2EE Web Application Server used; these exist in this directory or under the location specified in the J2EE Web Application Server configuration.

First error message is usually the right one - When an error occurs in the product, it can cause other errors. The first occurrence of any error is usually the root cause. This is more apparent when a low level error occurs that ripples across other processes. For example, if the database credentials are incorrect, then the first error will be that the product cannot connect to the database, but other errors will appear as meta-data cannot be loaded into various components. In this case, fixing the database error will correct the other errors as well.

Not all errors are in fact errors - The product will issue errors if components are missing but it is able to overcome the issue. For example, if meta-data is missing the system may resort to using default values. In most cases this means the product can operate without incident, but the cause should be resolved to ensure correct behavior. Note: In some versions, such errors are reported as a WARNING rather than an ERROR.

Tracing can help find the issue - The product includes trace facilities that can be enabled to help resolve the error. This information is logged to the logs above (and other server logs) and can be used for diagnosis as well as for support calls. Refer to Online and Batch tracing and Support Utilities for more information about these tools.

There are usually a common set of candidates - When an error occurs, there are a number of typical candidates for causing issues:

Running out of resources - The product uses the resources allocated to it on the machine. If some capacity is reached, either physical (memory and disk space are typical constraints) or logical via configuration (such as JVM memory allocations), then the product will report a resource issue. In some cases the product will report the problem directly in the logs, but in other cases it will be indirect. For example, if disk space is exhausted then a log may not be written, which can cause further issues.

Incorrect configuration - If the product configuration files or internal configuration are incorrect for any reason, they can cause errors. A common example of this is passwords which are either wrong or have expired. File paths are also typical settings to check.

Missing meta-data - The product is meta-data driven. If the meta-data is incorrect or missing, then the behavior of the product may not be as expected. This can be hard to detect using the usual methods and typically requires functional testing rather than technical detective work.

Out of date software - All the software used in the solution, whether part of the product or infrastructure, has updates, patches and upgrades to contend with. Upgrading to the latest patch level can typically address most issues.

Refer to the Performance Troubleshooting Guides for more techniques and additional advice.
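As a practical illustration of the "check the logs in the right order" technique above, the bottom-up scan can be scripted. The following sketch is an illustration only, not a supplied utility; it assumes the standard log names and the $SPLEBASE/logs/system location described earlier.

```python
import os

# Logs in bottom-up tier order: the first log containing an ERROR
# points at the most likely candidate tier for the fault.
LOGS_BOTTOM_UP = ["spl_service.log", "spl_xai.log", "spl_web.log"]

def first_error(log_dir):
    """Return (log_name, line) for the first ERROR record found,
    scanning the tiers bottom-up, or None if no ERROR is present."""
    for name in LOGS_BOTTOM_UP:
        path = os.path.join(log_dir, name)
        if not os.path.exists(path):
            continue
        with open(path, errors="replace") as handle:
            for line in handle:
                if "ERROR" in line:
                    return name, line.rstrip()
    return None

# Usage: first_error(os.path.expandvars("$SPLEBASE/logs/system"))
```

If both spl_service.log and spl_web.log contain the same ERROR, the scan reports the lower tier first, matching the guidance that repeated errors usually originate below the Web Application Server.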

Data Management Best Practices


Once a product has been put into production, one of the issues that needs to be managed is the quantity of data that accumulates over time. While storage is relatively cheap compared to the past, maintaining an optimal amount of storage is both cost effective and maintains a stable level of performance. The data management techniques used with the products vary according to the types of data stored within the product.

Respecting Data Diversity


One of the most important considerations for a site is to respect the diversity of the data contained in the product being managed. Different types of data require different types of management. Requirements for managing data are typically driven by business practices, industry practices or even government legislation (typically driven by tax requirements). The product data is typically divided into a number of data types, and each of these data types needs to be managed in the database for a varying length of time, as the product has different uses for each. In most products the data types can be categorized as follows:
TABLE 10 DATA DIVERSITY TYPES

Configuration Data (a.k.a. Administration Data) - Data driving the configuration of the product (e.g. menus, rates, security, reference data). Maintained by a subset of individuals. Kept indefinitely and represents only a small part of any database.

Master Data - Data pertaining to customers/taxpayers such as personal records, addresses, account information, contracts, etc. Maintained by end users. Kept indefinitely, but retention can be driven by government legislation such as privacy laws or industry rules.

Transactional Data - Day to day data relating to any interaction or activity against the Master Data (e.g. bills, cases, payments, contacts). Data that is still active is retained for operational reasons. Historical data is deleted or archived according to business rules or government legislation.

The table above illustrates the various differences between the types of data and their usual data retention rules. During an implementation and post implementation, you must be aware of the data types and then plan the data retention rules accordingly.

Archiving
Note: The Archive Engine is only available for selected Oracle Utilities Application Framework based products. Refer to your product documentation to verify its availability.

One of the most used techniques of managing data is archiving. The idea is that you only keep data in your database that is actively needed; any additional data is either archived to another place or simply deleted. Processing is therefore optimized against the active data, without having to ignore records no longer needed for processing. Archiving is usually associated with transactional data, as it typically has a limited life. Data is kept according to business practices or government regulations (especially around taxation records retention). Most customers keep a number of years of active transactional data and archive any data past the activity date. The key to archiving is to know what to archive and to ensure that archiving that data does not violate a business rule or compromise the integrity of the overall system. Therefore most of the activity in archive planning is identifying the data to archive (transactional data), the criteria by which the data becomes valid for archive, and what form the archive is going to take (another database, file or microfiche). Determining the data to archive is an important first step. Typically transactional data is a candidate, but there may be circumstances where master data is also archived. For example, you might archive customer records if the customer becomes deceased. The following types of tables are ideal candidates for archiving:

Transaction tables with large amounts of records - Archiving such tables can have double gains. You are removing records that have to be ignored by the processing, and you may be freeing up valuable space as you reduce the sizes of the tables.

Transient data - Data that is included in interfaces may be loaded into tables prior to loading into the main transaction tables. This is known as staging.
It is a common technique, as validation can be executed against the staging area and only validated data passed to the transaction tables. This separates invalid data from valid data. Invalid records are kept in the staging area until they are resolved. The only issue then is that records that are valid must be removed from the staging area on a regular basis. A common principle in records retention is that if you can get a record from someplace else, then you can remove one of the copies. For example, if you print off an email you still have a record of it, therefore you can delete the electronic copy or destroy the physical copy; you do not need both. This principle applies to the staging area, where the valid records are already in the transaction tables, so they can be safely removed from the staging area. The only exception to this principle is where a business process or regulation requires you to keep both.

Living data - Data pertaining to living customers needs to be retained for processing, but if you work in a deregulated market where you must surrender details of customers as part of the process of transferring them to a competitor (a.k.a. losing a customer), then they may become candidates for archiving. The validity of this case may vary according to business practices or regulations. The same principle can be applied to customers who become deceased. What data, if any, do you retain when a customer dies? Does the data become a candidate for archival?

Once the data to be archived is determined, the next step is to identify the criteria that will be used to decide when the data is valid to archive. Usually archive criteria are time based (e.g. older than x months) but they can be quite sophisticated. The criteria will be set by business processes or government legislation, but there are a few additional criteria that also need to be considered:

Active Data - If a record satisfies the time criteria but is somehow still active, then it is not eligible for archival. For example, if a payment is older than business rules recommend but is in dispute for some reason, then it cannot be archived.

Integrity - When archiving data, no integrity rule (referential or otherwise) should be broken. You must guarantee that archiving a record will not adversely affect other records in the system or even prevent the system from operating. For each record deletion, any related tables should be examined to see if any condition prevents the deletion (or the related records should be covered in the archive as well; this is known as a cascade archive).

De-archive - One of the major misconceptions about archiving, from a data management aspect, is the ability to return data from an archive (a.k.a. de-archive). Not all archiving facilities do this, as the space saving benefits of archiving are somewhat diluted by this ability, and the overhead of re-integrating the archived data into active data can be quite difficult and messy. The best advice is to avoid this situation altogether and ensure the criteria used cover data that is not going to be de-archived.
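The criteria above (time based, active data, integrity) can be combined into a single eligibility test. The sketch below is purely illustrative: the record layout, the is_active flag, the dependents list and the three-year retention window are hypothetical stand-ins for the product-specific checks, not part of the archive facility itself.

```python
from datetime import date, timedelta

RETENTION = timedelta(days=3 * 365)  # assumed example: keep three years

def eligible_for_archive(record, today):
    """Apply the three archive criteria discussed above."""
    if today - record["last_activity"] < RETENTION:
        return False  # time criterion not yet met
    if record["is_active"]:
        return False  # e.g. a disputed payment stays in the active data
    if record["dependents"]:
        return False  # integrity: related records would be orphaned
    return True
```

A cascade archive would replace the dependents check with logic that marks the related records for archive as well, rather than skipping the parent.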

Note: The current version of the archive facility inbuilt in the Oracle Utilities Application Framework does not support de-archival.

Once the data to be archived has been identified and the criteria agreed and implemented, then the format of the archive needs to be taken into account. There are a number of options that can be considered:

Using the Archive facility - If there is a requirement for the data to be made available online, but not active in the system, then consider using the inbuilt archiving facility provided in the Oracle Utilities Application Framework.

File based - If there is NO requirement to view the data but it must be made available to offline viewers (such as data loaders or even microfiche), then archiving data to a sequential file for reference is an alternative. This data can then be archived to a tape or to a location from which it can be retrieved and viewed at a later stage. The format of the file is site based but can be as simple as a database export or as complex as formatted fixed-format multi-record data files. The archive facility provided with the Oracle Utilities Application Framework provides a facility to archive the data to a file.

Purge only - One of the most common archiving techniques is to simply delete the data from the database. This is applicable to all techniques eventually (you cannot keep the data forever). In this technique the records identified to be archived (passing the criteria) are simply deleted from the database to release space.

Archiving data on a regular basis removes inactive data from your database, which may improve performance and save disk space. Generally, customers run archiving processes at least once a week.

Data Retention Guidelines


One of the most common requirements that must be considered during an implementation, and post implementation, is how long to retain data in the active production database. Even though disk space is becoming cheaper over time, there is always a cost based limit to how much should be stored. Typically it is the customer's business practices that dictate the amount of historical data stored in the database at any time. There are a number of key factors that govern data retention:

Government legislation - Most countries have a legal requirement to have information available in a computer system. Typically this requirement separates how much should be active and how much should be retained in a passive medium (e.g. archive or available in a backup format).

Business requirements - There is usually a business requirement to work on historical data. For example, the business may need to be able to process financial data over a number of years. This requirement typically dictates the amount of historical data kept.

Physical capacity of the hardware - At the end of the day, any machine used for any software has a physical limit. This limit is usually based upon business requirements and cost to the business.

Table identifiers - All tables in Oracle Utilities Application Framework based products have identifiers (some have multiple). The physical key size can be an indicator of the limit of the records that can be kept. It should be noted that most of the Oracle Utilities Application Framework based products have designed their key sizes to cover the majority of expected data cases in the field.

Audit requirements - Typically, each site will have some sort of auditing function, within the company or through an independent auditing firm. This auditing capability will expect a certain amount of historical data, directly or indirectly in the product, to adequately operate an audit. This requirement is usually forgotten by most sites until they need it. During an implementation, or soon after, the audit requirements should be clarified and factored into any data retention policy.

It should be noted that the products themselves do not impose any particular data retention policy. Data retention tends to apply to specific data types only:

Transactional data is subject to data retention rules, as it is the data that grows over time.

Master data tends to remain in the database for the life of the system, even in a deregulated market, for fraud prevention purposes.

Meta-data is not covered by data retention policy, as it needs to be there to make the product operate, so it is rarely archived or removed.

Configuration data will vary, as it is wide ranging, but it is also generally rarely archived or removed.

Customers should monitor data growth on their platform to reach a decision about archiving, if they wish to do so, or simply removing the data. Typically, once the status of a record in the staging tables used for interfaces becomes Complete, it becomes redundant data. The data is reflected in the main product tables and is not required in the staging tables anymore. Removal of completed records, on a regular basis, can have storage benefits as well as performance benefits.

Removal of Staging Records


The product uses a staging concept for most of the major interfaces. This involves a process, known as Process X, to load the staging tables and then a base product background process is run to validate and copy the valid staging data into the relevant main tables. When records are loaded initially, the status of the records is set to Pending indicating they are ready to process. Once the relevant base product background process processes them, then the status is changed to either Completed (for valid records) or Error (for invalid records). Invalid records can be corrected using the relevant staging online query to manually resolve the error. This is summarized in the figure below:

[Figure 11 Staging Process Overview. For input, Process X loads records from the source system into the input staging tables; a product background process validates them, copies valid records into the main tables and updates the staging status (Pending, Errored or Complete). For output, a product process extracts data from the main tables into the output staging tables, and Process X extracts them (by run number) to the target system.]

It is assumed that completed staging records are no longer required after a period of time, as the data they contain has been reflected in the main tables. There is no business reason to keep staging records for long periods after they have been completed. Regular cleanups of the staging tables to remove completed records will have great performance benefits for interfaces. Successful sites run the provided purge jobs to improve performance and reduce disk space usage. To decide when to run these purge jobs, and what parameters to pass to them, the following is recommended:

Work out with the business at the site how long they wish to retain completed records. You can stress to them that NO important data is lost in purging completed records, as their data is reflected in the main tables. This value is used for the NO-OF-DAYS batch parameter passed to the job. The value is a number of days, not a number of business days (e.g. a value of 14 for NO-OF-DAYS means 2 weeks).

For the To Do Purge job, there are additional parameters to decide the specific To Do type to purge, or ALL (DEL-TD-TYPE-CD and DEL-ALL-TD-SW). Work with the business to decide if this job is to be run once (for all To Do types) or multiple times, once for each To Do type. Successful customers run it to delete all To Do types to reduce the number of jobs to run.


Decide the frequency based upon the data growth of each table. Ideally these purge processes should be run each business day at the end of the nightly batch schedule to keep the tables at their optimum size, but they should be run at least once a week.
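The NO-OF-DAYS semantics described above can be illustrated with a small sketch. This is not the product purge job; the record layout is hypothetical and only demonstrates how the calendar-day cutoff selects completed records for removal.

```python
from datetime import date, timedelta

def purgeable(records, today, no_of_days):
    """Select staging records that are Complete and were completed more
    than NO-OF-DAYS calendar days (not business days) before today."""
    cutoff = today - timedelta(days=no_of_days)
    return [r for r in records
            if r["status"] == "Complete" and r["completed_on"] < cutoff]

# With NO-OF-DAYS=14, a record completed a month ago is selected for purge;
# one completed two days ago, or one still in Error status, is retained.
```

Records in Error status are never selected, matching the product behavior of keeping invalid records in the staging area until they are resolved.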

Partitioning
One of the most popular data management techniques is the use of partitioning on tables. Partitioning enables tables and indexes to be split into smaller, more manageable components. It allows a table, index or index-organized table to be subdivided into smaller pieces; each piece of the database object is called a partition. Each partition has its own name, and may optionally have its own storage characteristics, such as having table compression enabled or being stored in a different tablespace. From the perspective of a database administrator, a partitioned object has multiple pieces which can be managed either collectively or individually. This gives the administrator considerable flexibility in managing partitioned objects. However, from the perspective of the product, a partitioned table is identical to a non-partitioned table; no modifications are necessary when accessing a partitioned table using SQL. Partitioning has known benefits:

Divide and conquer - With partitioning, maintenance operations can be focused on particular portions of tables. For example, a database administrator could back up a single partition of a table, rather than backing up the entire table. For maintenance operations across an entire database object, it is possible to perform these operations on a per-partition basis, thus dividing the maintenance process into more manageable chunks.

Parallel execution of SQL - Most databases will sense that the table is partitioned and run SQL statements (including SELECT and INSERT statements) in multiple threads. Each of the partitions can be thought of as an individual table, and the database takes advantage of this.

Pruning - Queries operating on one partition can run substantially faster due to the reduced size of the data to search.

Partition availability - Partitioned database objects provide partition independence. This characteristic of partition independence can be an important part of a high-availability strategy.
For example, if one partition of a partitioned table is unavailable, all of the other partitions of the table remain online and available; the product can continue to execute queries and transactions against this partitioned table, and these database operations will run successfully if they do not need to access the unavailable partition.

When using partitioning, you should ensure that major processes accessing the table do not cross partition boundaries. Crossing from one partition to another can cause slight delays, as physically the table has been separated into individual files per partition. This situation can be avoided when designing the partitioning regime for the table.


The key to success with partitioning is recognizing which tables are candidates for partitioning and what partitioning scheme to use. Partitioning must be planned and designed into a database to ensure that the partitioning regime is optimal for your products. The ideal candidates for partitioning are large tables with a small number of indexes. The benefits of partitioning are optimal for large tables, rather than applying the principle across all tables; a minimal number of indexes reduces the likelihood of SQL crossing partition boundaries.

Once the tables to be partitioned are chosen, the next step is to decide the number of partitions to implement. The rule of thumb is to choose the number of partitions so that any SQL that accesses the table using the indexes minimizes crossing partition boundaries. If your product is multi-threaded, then each thread of a process needs to remain within a partition. In this case the number of partitions should be equal to the number of threads (or a divisor). For example, if a major process runs in 10 threads then the number of partitions could be 10, 5 or 2. Each of these numbers ensures that each thread stays within a partition.

Once the number of partitions is chosen, the next step is to decide which partitioning scheme to use. Database vendors have implemented numerous ways of dividing a table into partitions. Each of these schemes (and sometimes combinations) tells the database how to split the data into the various partitions as well as how to access the partitions. The most common partitioning scheme is known as range partitioning, where a range of values (index based) is used to designate the partition a record is placed within. Refer to the partitioning documentation provided by your database vendor for details of all the different schemes that can be used to partition your table data.
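The thread-to-partition rule above reduces to simple arithmetic: each batch thread stays within one partition only if the partition count divides the thread count evenly. A small illustrative helper (not a product utility):

```python
def valid_partition_counts(threads):
    """Partition counts that keep each of `threads` batch threads within
    a single partition: exactly the divisors of the thread count."""
    return [n for n in range(1, threads + 1) if threads % n == 0]

# For the 10-thread example above: 1, 2, 5 or 10 partitions all keep
# each thread partition-local; 3 or 4 partitions would force crossings.
```

Choosing the largest divisor that the storage layout can accommodate gives the finest-grained maintenance units without breaking the thread-locality rule.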
Table partitioning represents the easiest method of data management and is usually the first data management technique used before other techniques are considered.

Compression
Note: Database level compression varies from one database vendor to another. In some cases it is included as an optional component of the database, and in other cases it is a separate piece of software that must be obtained from the database vendor (or an approved third party).

A technique that is emerging from the database vendors is compression of data. This can be done at a database level (global) or a table level and typically requires no changes to a product to implement. As the data is stored and retrieved, it is compressed and decompressed before being passed back to the product. As far as the product is concerned, it is unaware whether the data is compressed or not. This appeals to database administrators, as they can experiment with compression without the need to involve the product developers.

Historically, database systems have not heavily utilized compression techniques on data stored in tables. One reason is that the trade-off between time and space for compression is not always attractive for databases. A typical compression technique may offer space savings, but only at a cost of much increased query time against the data. Furthermore, many of the standard techniques do not even guarantee that data size does not increase after compression. Over time, database vendors have addressed this trade-off by implementing unique compression techniques. It has come to the stage where there is virtually no negative impact on the performance of queries against compressed data; in fact, there may be a significant positive impact on queries accessing large amounts of data, as well as on data management operations like backup and recovery. Each database vendor will supply guidelines on the effective use of compression to minimize any overhead for all SQL statements (including INSERTs, UPDATEs etc.) and on which tables are the best candidates for compression.

Note: Not all tables in Oracle Utilities Application Framework based products will benefit from compression, as the database vendors have imposed efficiency rules that may preclude specific tables.

Database Clustering
One of the more advanced features that has emerged as a valid data management technique is the ability for databases to be clustered. This is a relatively new technique for data management, as most people associate clustering with availability rather than management of data volumes. Database clustering provides the ability for a database to be spread across more than one machine while appearing to the product as a single database. The database management system manages all the synchronization and load balancing of transactions automatically. It was designed to support the availability of the database in case of a hardware failure in one of the nodes of the cluster. Experience within the industry has shown that using the clustering capabilities can also improve performance when large amounts of data are involved. Logically, clustering enables the database to access more power by spreading the workload across machines. This technique is applicable where the volume of the data is impacting database performance. One of the major symptoms is that CPU usage on the database server is consistently high, no matter what tuning is performed at the database and product level. This implies that the database is CPU bound; while there may be an option to add more CPUs to the server, clustering the data becomes a viable alternative. While implementing clustering has been made progressively easier with each release of the database management systems, it must be planned using the guidelines outlined by the database vendor. Refer to the documentation provided on clustering by your database vendor.

Backup and Recovery


One of the most critical components of the implementation and ongoing support of the product at a site is the ability to back up the data and software to ensure business continuity. Equally important is the ability to easily restore that data if the need arises.


Typically a site will have a preferred regime and set of tools used to achieve backup and recovery of all systems at the site. When implementing the product, this regime and set of tools is typically reused to cater for the product and business needs. When considering a backup regime for the product, the following should be considered:

There is nothing within the product technically that warrants a particular approach to backup and recovery. Most customers continue to use their existing approaches.

There is nothing within the product technically that warrants a particular backup and recovery tool. Most customers use the native tools provided with their platforms, for cost savings, but some customers have purchased additional infrastructure to take advantage of faster backups/recoveries or additional features provided by such tools.

If your site does not have a backup regime already, the following can be considered default industry practice:

Use hot incremental backups on production during the business week to reduce outage times.

Do a FULL backup (hot or cold) at least once a week to reduce recovery times.

Verify backups after they are taken to reduce the risk of delayed recoveries.

On non-production, consider either the same regime as production or regular FULL backups at peak periods in an implementation.

Writing Files Greater than 4GB


Note: This advice applies to products that use the COBOL support contained within the Oracle Utilities Application Framework. 64-bit Java based code automatically supports files greater than 4GB.

Note: This change should not be attempted if the interface using the file is 32 bit, as this only applies to 64-bit COBOL on a 64-bit operating system.

By default, any 64-bit COBOL based extract process will create a file up to a 4GB limit. In the unlikely event that the extract process needs to create a file bigger than 4GB, there is a way of instructing the COBOL runtime to support larger files. You must create a text based extension configuration file (say cmextfh.cfg) with the following contents:

[XFH-DEFAULT]
FILEMAXSIZE=8
IDXFORMAT=8

You then place this configuration file in a location that can be referred to by the runtime. You can either deposit the file in $SPLEBASE/scripts (or %SPLEBASE%\scripts) or in a site-specific central location. To enable support for larger files, you initialize the EXTFH environment variable with the location of the configuration file. For example:

set EXTFH=D:\oracle\TUGBU\scripts\cmextfh.cfg (for Windows)

export EXTFH=/oracle/TUGBU/scripts/cmextfh.cfg (for Linux/UNIX)

This can be done in your .profile (for Linux/UNIX) or using the facilities outlined in Custom Environment Variables or JAR files. For additional details and additional parameters refer to My Oracle Support Doc Id: 817617.1.
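The two steps above (creating the configuration file and initializing EXTFH) can be combined in a single setup script. This is a sketch for Linux/UNIX; for the demonstration it falls back to /tmp when SPLEBASE is not set, which a real environment would not do.

```shell
#!/bin/sh
# Sketch: create the COBOL extension configuration file described above and
# point EXTFH at it. SPLEBASE defaults to /tmp here for illustration only.

CFGDIR="${SPLEBASE:-/tmp}/scripts"
CFG="$CFGDIR/cmextfh.cfg"

mkdir -p "$CFGDIR"
cat > "$CFG" <<'EOF'
[XFH-DEFAULT]
FILEMAXSIZE=8
IDXFORMAT=8
EOF

export EXTFH="$CFG"   # the COBOL runtime reads this variable at startup
echo "EXTFH set to $EXTFH"
```

Placing the export in .profile, as the text suggests, makes the setting permanent for the user running the batch processes.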

Client Computer Best Practices


Even though the product is browser based, there are some practices on the client machine that affect performance. This section outlines the client machine practices that have proved beneficial.

Make sure the machine meets at least the minimum specification


As part of the installation documentation for each installation of the product, the minimum and recommended hardware for the client is specified. Typically SPL takes the following into consideration when specifying this information:

• The minimum and recommended hardware as specified by Microsoft for the operating system used for the client.
• A typical set of other applications running on the machine, typically Office-style applications.

While all care is taken to specify the hardware with cost in mind, experience has shown that customers need to review the specification in light of their internal standards.

Internet Explorer Caching Settings


The Internet Explorer settings used must match the recommended settings as outlined in the product "Installation Guide", which includes:

• The Internet Explorer cache setting should be set to "Automatically", NOT "Every visit to the page". Certain elements of the browser user interface pages are cached on the client for performance reasons. Incorrect cache settings in Internet Explorer will increase bandwidth usage significantly and degrade performance, as screen elements will be retrieved on each visit rather than from the cache. The correct setting is shown below:

Figure 12 Example Cache Setting

• JavaScript must be enabled. The product framework uses JavaScript to implement the browser user interface.


• HTTP 1.1 support must be enabled. If you use a proxy to get to the server, then also check "Use HTTP 1.1 through proxy connections".

Figure 13 HTTP 1.1 Settings

Clearing Internet Explorer Cache


Between upgrades, it is advisable to manually clear the Internet Explorer cache to remove any elements that may still be in the cache but are not applicable to the new version. This is a rare situation, but clearing the cache ensures that stale or inappropriate elements "left over" from a previous version are not incorrectly displayed.

Optimal Network Card Settings


Typically the manufacturers of NIC devices provide a number of configuration settings to allow further optimization of network transmit and receive settings. The defaults provided with the card are usually sufficient for the needs of the network traffic transmitted and received by the machine. It may be worth investigating whether changing the settings (particularly the number of network buffers used) can improve performance at your site. Altering the settings may improve performance, but may also adversely affect it (due to higher CPU usage). The majority of customers use the default settings provided by the manufacturer.

Network Best Practices


The product ships data across a network between the clients and the various components of the architecture. This section outlines some practices to optimize the network elements of a configuration.

Network bandwidth
One of the most common questions asked about the product is the network footprint of the Oracle Utilities Application Framework based product. This question is difficult to answer precisely for a number of reasons:

• The amount of data sent up and down the network depends on how much change is done by an individual user at the front end of the product. Only the elements changed by the end user are transmitted back to the server; the more the user changes, the more data is transmitted. Given the numerous possible permutations and combinations of data changes at any given time, this can be hard to estimate.
• The Oracle Utilities Application Framework supports partial object faulting. This means the framework only sends data to the client that is being displayed. In screens with more than one tab, the framework only sends the data for the tabs that are accessed by the end user, so only part of the overall object required by the screen is transmitted. Most users tend to operate on a small number of tabs, but this can vary from transaction to transaction.
• All transmissions between the client and server are compressed using the compression natively supported by HTTP 1.1. This can reduce the actual size of the data transmission considerably, depending on the content of the changes.
• Screen data is cached on the client machine so that it can be reused. The product takes advantage of the caching facilities in the HTTP 1.1 protocol and the browser caching functionality. For example, screen definitions and graphics are stored on the client machine to reduce the network footprint. Upon every transmission of a screen element, the data in the cache is tagged with an expiry date to indicate the life of the element in the cache. Use of client side caching can reduce network traffic considerably, with some customers reporting up to a 90% reduction in network traffic when this caching is enabled.

To provide an estimate of the network footprint, a range of 10-200 KB per transaction, on average, is quoted to adequately cover all the aspects outlined above. This value is based upon experiences with customers. It is possible to track network bandwidth using a log analyzer against the W3C standard access.log produced by your Web Application Server. Refer to the Performance Troubleshooting Guides for more information about this log.
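Tracking the average bytes per request from the access.log can be done without a full log analyzer. The sketch below assumes W3C common log format, where the last field of each line is the response size in bytes; the sample log content and path are invented for the demonstration.

```shell
#!/bin/sh
# Sketch: estimate average bandwidth per request from a common-format
# access.log, where the last field is the response size in bytes.

bytes_per_request() {
  # $1 = path to access.log; prints the average response size in bytes
  awk '{ if ($NF ~ /^[0-9]+$/) { total += $NF; n++ } }
       END { if (n > 0) printf "%d\n", total / n }' "$1"
}

# Demo against a small hypothetical sample log:
cat > /tmp/sample_access_bw.log <<'EOF'
10.0.0.1 - user1 [01/Jun/2011:10:00:00 +0000] "GET /app/page HTTP/1.1" 200 15000
10.0.0.2 - user2 [01/Jun/2011:10:00:01 +0000] "POST /app/save HTTP/1.1" 200 5000
EOF
bytes_per_request /tmp/sample_access_bw.log   # prints 10000
```

Comparing this figure against the 10-200 KB per transaction estimate quoted above gives a quick sanity check of actual client bandwidth usage.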

Ensure legitimate Network Traffic


One of the major factors in performance is the amount of legitimate traffic on the network. The traffic to and from the product shares bandwidth with all other traffic on the network. If there is any network congestion, then all transactions from all network-based applications will be adversely affected. Some customer sites have found that traffic that is not legitimate can adversely affect network performance. Traffic that is considered not legitimate includes:

• Traffic generated from viruses and Trojans - There are plenty of viruses and Trojans on the general Internet that can cause bandwidth issues. Most sites, though not all, have regular virus protection to minimize the impact to the network. While the product does not require such protection, the industry in general recognizes the need for it.
• Unauthorized large transfers - Large transfers of data can adversely affect performance, as they can soak up bandwidth if not configured correctly. There have been instances of large FTP transfers slowing down traffic on lower bandwidth networks.

Ensuring that only legitimate traffic is on a network can provide greater bandwidth for all applications (including the product) and improve consistency.

Regularly check network latency


In a network, latency, a synonym for delay, is an expression of how much time it takes for a packet of data to get from one designated point to another. In some usages, latency is measured by sending a packet that is returned to the sender; the round-trip time is considered the latency. The greatest impact on performance is inconsistent latency. The assumption seems to be that data should be transmitted instantly between one point and another (that is, with no delay at all). The contributors to network latency include:

• Propagation - Simply the time it takes for a packet to travel between one place and another at the speed of light.
• Transmission - The medium itself (whether optical fiber, wireless, or some other) introduces some delay. The size of the packet introduces delay in a round trip, since a larger packet will take longer to receive and return than a short one.
• Router and other processing - Each gateway node takes time to examine and possibly change the header in a packet (for example, changing the hop count in the time-to-live field). This is a common cause of network latency.
• Other computer and storage delays - Within networks at each end of the journey, a packet may be subject to storage and hard disk access delays at intermediate devices such as switches and bridges.

Minimizing latency, or at least ensuring consistent latency, is the goal of most product sites. A discussion of latency and how to measure it is contained in the whitepaper Performance Troubleshooting Guide.
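Since consistency matters more than the absolute value, a simple way to check latency is to look at the spread of a set of round-trip times. The sketch below uses hard-coded sample values in milliseconds; in practice you would feed it the RTTs reported by ping against your server.

```shell
#!/bin/sh
# Sketch: compute min/max/spread of round-trip time samples (ms) to judge
# latency consistency. Sample values are hard-coded for illustration.

latency_spread() {
  # reads one RTT (ms) per line on stdin; prints "min max spread"
  awk 'NR == 1 { min = $1; max = $1 }
       { if ($1 < min) min = $1; if ($1 > max) max = $1 }
       END { print min, max, max - min }'
}

printf '12\n15\n11\n40\n13\n' | latency_spread   # prints: 11 40 29
```

A large spread (here 29 ms against a typical 11-15 ms) is the inconsistency the text warns about, even though the minimum latency looks healthy.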

Web Application Server Best Practices


The Web Application Server is used by the product to serve pages to the client and contains a control data cache. There are a number of practices that sites find useful for maintaining the health of the Web Application Server.

Make sure that the access.log is being created


The access.log contains useful information that can be used for tracking bandwidth and usage patterns in order to make changes to the configuration. One of the key log files for traffic analysis is the access.log. This log records every hit on the system from the end users. Every element of the screen is logged, asynchronously, including the time and userid. This log must be configured/enabled in the configuration; refer to the Web Application Server's documentation on how to enable it. The log is generated in W3C common log format and can be analyzed by third party log analyzers for further analysis. A full description of the log, its usefulness, and the log analyzers that can read it is documented in the whitepaper Performance Troubleshooting Guide. Customers use the log for various purposes:


• It is possible to track errors and trends from the log using the log analyzers.
• It is possible to parse the log at a low level and determine the number of concurrent users and the users that have used the system (and, interestingly, conversely who has NOT used the system).
• It is possible to track the flows of individual sessions, known as click streaming, to track the screens and the data used in them.
• It is possible to determine the criteria used by users for searches. This is useful for detecting wildcard searching.

This log is useful, but it is large, so it needs to be managed as suggested in Backup of Logs.
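The second use above (determining who has and has not used the system) can be sketched directly against the common log format, where the third field of each line is the userid. The sample log and path below are invented for the demonstration.

```shell
#!/bin/sh
# Sketch: list the distinct userids seen in an access.log (third field in
# common log format), one way to see who has - and has not - used the system.

distinct_users() {
  # $1 = path to access.log; prints each authenticated userid once
  awk '$3 != "-" { print $3 }' "$1" | sort -u
}

cat > /tmp/sample_access_users.log <<'EOF'
10.0.0.1 - alice [01/Jun/2011:10:00:00 +0000] "GET /app/page HTTP/1.1" 200 1500
10.0.0.2 - bob [01/Jun/2011:10:00:01 +0000] "GET /app/page HTTP/1.1" 200 1500
10.0.0.1 - alice [01/Jun/2011:10:00:05 +0000] "GET /app/list HTTP/1.1" 200 9000
EOF
distinct_users /tmp/sample_access_users.log   # prints alice and bob, once each
```

Comparing this list against the full list of defined users reveals the accounts that have never logged in.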

Examine Memory Footprint


One of the common experiences for ALL the J2EE Web Application Servers that the product runs upon is that there is a Java Virtual Machine (JVM) limit on how many concurrent online users a server will support. Typically the experience has been that between 150-200 concurrent users are served by each instance of a JVM. There are a number of techniques available to maximize this:

• Increasing the Java memory parameters used for the JVM - This can be a configuration setting change (WebSphere, Oracle AS) or a script change (WebLogic, Tomcat). Typically customers change the default settings to either 512MB or 1GB per JVM.

Note: In Oracle Utilities Application Framework V4 and above the JVM options can be configured using parameters. Refer to the Server Administration Guide provided with your product for more details.

• Creating additional servers within the instance to cater for the load.

Customers implement the latter suggestion in the following ways:

• Oracle WebLogic - A server entry for each new server is set up in the same WebLogic instance. The port number can be the same (if the server is housed on a separate machine, known as clustering) or a different port number (i.e. managed servers). A proxy is required to provide a common connection point and to implement load balancing. The memory footprint will be the same size for each server.
• IBM WebSphere - A new server is created within the WebSphere instance. The port number can be the same (if the server is housed on a separate machine, known as clustering) or a different port number (i.e. managed servers). A proxy is required to provide a common connection point and to implement load balancing. The memory footprint can be different for each server, as it is held against the server entry within WebSphere.

Refer to Production Environment Configuration Guidelines for more guidance on JVM memory settings for production systems.
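For the script-based servers (WebLogic, Tomcat), the memory increase described above amounts to setting the standard JVM heap flags in the startup script. The variable name and values below are illustrative; consult the Server Administration Guide for the exact variable your product version uses.

```shell
#!/bin/sh
# Sketch: raising the JVM heap for a script-configured Web Application Server.
# -Xms sets the initial heap, -Xmx the maximum; 512MB-1GB is the range the
# text says customers typically use. JAVA_OPTS is an assumed variable name.

JAVA_HEAP_MIN="512m"
JAVA_HEAP_MAX="1024m"
JAVA_OPTS="-Xms${JAVA_HEAP_MIN} -Xmx${JAVA_HEAP_MAX}"
export JAVA_OPTS

echo "$JAVA_OPTS"   # -Xms512m -Xmx1024m
```

Setting -Xms equal to -Xmx is a common variant that avoids heap resizing pauses, at the cost of committing the full memory at startup.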


Optimize Garbage Collection


Tests with customers and benchmarks have revealed that while the default settings for Java garbage collection are usually sufficient, in some circumstances tuning the garbage collection process can yield performance benefits. Automatic garbage collection is a feature of Java whereby, at a certain memory tolerance, the JVM examines objects in memory and clears any unused objects to prevent out-of-memory conditions. The unused objects are considered garbage and hence are collected and removed from memory. The issue with garbage collection is that at the time of the actual collection the JVM freezes to perform the task. This can cause performance issues if garbage collection is performed more frequently than expected. Optimizing garbage collection is a balance between the frequency of the collection and the amount of traffic on the server at the time of the collection; the amount of memory to collect will depend on the traffic at the time. Ideally, Sun recommends that garbage collection should occur every few minutes at most; the more frequent the collection, the greater the impact on performance. Customers aim for garbage collection intervals of at least 5 minutes during normal operation. A study of garbage collection tolerances and their effect is documented at http://java.sun.com/docs/hotspot/gc/. Refer to Production Environment Configuration Guidelines for more guidance on garbage collection settings for production systems.
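The 5-minute goal above can be checked from a GC log. The sketch below assumes a log format like HotSpot's "-verbose:gc -XX:+PrintGCTimeStamps" output (seconds since JVM start at the beginning of each GC line); the sample log content is invented, and the field parsing may need adjusting for your JVM's actual format.

```shell
#!/bin/sh
# Sketch: find the smallest gap between consecutive GC events in a log whose
# lines begin "<seconds>: [GC ...]". A gap below ~300s suggests GC is running
# more often than the 5-minute goal discussed above.

min_gc_interval() {
  # $1 = path to GC log; prints the smallest interval (seconds) between GCs
  awk -F: '/\[GC/ { t = $1
                    if (prev != "") { gap = t - prev
                                      if (min == "" || gap < min) min = gap }
                    prev = t }
           END { print min }' "$1"
}

cat > /tmp/sample_gc.log <<'EOF'
100.0: [GC 512K->128K(1024K), 0.01 secs]
400.0: [GC 600K->130K(1024K), 0.01 secs]
640.0: [GC 580K->129K(1024K), 0.01 secs]
EOF
min_gc_interval /tmp/sample_gc.log   # prints 240 - under the 5-minute goal
```

If the minimum interval is consistently short, the heap settings discussed in the previous section are the usual first lever to adjust.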

Turn off Debug


One of the development features of the product is the ability to output useful debugging information as part of running the application. While this information is useful for development environments, it is not useful for production or other performance sensitive environments. Most customers set the debug setting to false to disable global debug information. It is still possible to debug individual transactions using the interactive debug facility. Note: This requires the Application Descriptors for all applications to be updated.

Load balancers
Oracle Utilities product customers who have more than one Application Server (physical or logical) must use a load balancer to route traffic evenly across the available servers. This load balancer can be either software based (such as a web server with the appropriate plugin from the Application Server vendor) or hardware based (such as BigIp or other Layer 7 switches). Experience has shown that customers with a large number of users (typically greater than 1500) tend to use hardware load balancers, and smaller customers use software based load balancers. Using load balancers with the product may not guarantee that load is evenly distributed, as the transactions do not have a consistent resource load factor. The resource load factor for any product depends on the transaction type and the data used in that transaction. For example, Search transactions are different from maintenance transactions, and the resource usage of any search depends on the criteria used. Two executions of the same search will have different response and resource usage profiles. Factored on top of that is the fact that the load on a server is the summation of all the transactions sent to it, and that transactions vary from second to second, minute to minute, hour to hour, and so on. When installing a load balancer there are a number of algorithms offered for load balancing:
TABLE 11 EXAMPLE LOAD BALANCING ALGORITHMS

• Round Robin - Traffic is routed to each server on a rotating basis. This is the most common algorithm used by product customers.
• Random - Traffic is routed randomly to the servers. Not commonly used, but may be used if traffic is random enough.
• Weighted Round-Robin Allocation - A variation on Round Robin that allows support for clusters where the servers are not all the same size. Not generally used by product customers.
• IP Address - Traffic is routed using the client IP address as the identifier, with servers assigned IP address ranges. Has been used by customers, but has limitations if used with virtual servers such as Terminal Services or Citrix.
• Load - Load factors of transactions are measured and used to determine which server is best suited. Not used with the product, as most load factors are inconsistent across transaction invocations.

Typically most customers use Round Robin, as it is simple and, given that load is unpredictable, can yield the best results. Most customers understand that in some periods the load will not be balanced, but on average it is relatively balanced. Remember that each transaction's duration is a function of how much data is changed. If using load balancing, the following additional advice is applicable:

• Ensure that the load balancer does not interfere with Internet Explorer caching. Interference may result in a low cache hit rate and increased bandwidth usage.
• Ensure that the load balancer supports HTTP 1.1 headers to support compression.
• Ensure that the load balancer supports passive and active cookie persistence for session cookies. The Web Application Server uses session cookies for passing security credentials between the client and the server. The load balancer must not compromise this facility.
• Ensure that the load balancer supports SSL persistence, if SSL is used, to ensure that encryption and decryption are not compromised.
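The Round Robin assignment pattern from the table is simple enough to illustrate directly. Real load balancers implement this internally; this sketch only shows why the distribution is even by request count but not necessarily by actual load.

```shell
#!/bin/sh
# Sketch: Round Robin assignment - each request goes to the next server in
# rotation, regardless of how expensive the individual request turns out to be.

next_server() {
  # $1 = request number (counting from 0), $2 = number of servers
  echo $(( $1 % $2 ))
}

for req in 0 1 2 3 4 5; do
  printf 'request %s -> server %s\n' "$req" "$(next_server "$req" 3)"
done
# assigns servers 0,1,2,0,1,2 - even by count, not by resource usage
```

This is exactly the caveat in the text: two consecutive requests may land on different servers yet have wildly different resource profiles, so counts balance while load may not.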

Preload or Not?
One of the startup features of the product (V1.5 and above) is the preloading of pages to save time. This preloading process dynamically rebuilds the screen definitions from the XML metadata on startup. While this setting (by default) enables the startup to pre-build them (instead of on first invocation), the startup of the Web Application Server is delayed while the preload process is executing. The startup of the server is delayed until the last of the screens is preloaded. While the preloading of individual screens is very quick (measured in milliseconds), building all screens (1000+) can cause significant delays to initial availability AFTER a restart. It is possible to influence the amount of preloading using two parameters in the Web Application Descriptor:

• preloadAllPages - This parameter affects how much preloading takes place, if preloading is enabled. A value of true preloads every screen in the product. A value of false preloads only the screens off the Main menu (the screens the end users will be using).
• disablePreload - This parameter controls whether preload is performed at all. This parameter effectively overrides the preloadAllPages parameter.

The effect of changing the parameters is outlined in the following table:


TABLE 12 PRELOADING PARAMETER COMBINATIONS

• preloadAllPages=true, disablePreload=true - Pages are not preloaded at all. The first invocation of a screen by the first user loads the screen for all users. Can cause a slight delay in initial screen load for a single user, but application startup is quicker.
• preloadAllPages=true, disablePreload=false - All pages are preloaded, including the administration and utilities menus. This setting is not recommended for production, as it delays Web Application Server startup unnecessarily.
• preloadAllPages=false, disablePreload=true - Pages are not preloaded at all. The first invocation of a screen by the first user loads the screen for all users. Can cause a slight delay in initial screen load for a single initial user, but application startup is quicker.
• preloadAllPages=false, disablePreload=false - Default. Pages on the Main menu are preloaded. This delays the startup of each managed server but ensures screens load quicker for ALL users.

Changing these parameters affects availability rather than performance, but should be considered if availability is critical or you are not using all the screens in the product. If you do not use the entire product, or you want startup to be quicker, the following settings are recommended:

preloadAllPages = false
disablePreload = true

Note: This requires the Application Descriptors for all applications to be updated.
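Before editing the Application Descriptors, it is worth confirming the current values of the two parameters. The sketch below assumes a standard servlet web.xml layout with <param-name>/<param-value> pairs; verify the actual structure of your deployed descriptor before relying on it, as the layout may differ between product versions.

```shell
#!/bin/sh
# Sketch: read a named parameter from a servlet-style descriptor, assuming
# <param-name> and <param-value> appear on consecutive lines.

show_param() {
  # $1 = parameter name, $2 = descriptor file; prints the parameter's value
  grep -A1 "<param-name>$1</param-name>" "$2" |
    sed -n 's/.*<param-value>\(.*\)<\/param-value>.*/\1/p'
}

# Demo against a hypothetical descriptor fragment:
cat > /tmp/sample_web.xml <<'EOF'
<context-param>
  <param-name>preloadAllPages</param-name>
  <param-value>false</param-value>
</context-param>
<context-param>
  <param-name>disablePreload</param-name>
  <param-value>true</param-value>
</context-param>
EOF
show_param disablePreload /tmp/sample_web.xml   # prints true
```

Checking values this way before and after an edit confirms the recommended combination (preloadAllPages=false, disablePreload=true) actually took effect in the deployed descriptor.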

Native or Product provided utilities?


The Oracle Utilities Application Framework provides a set of basic utilities to manage (i.e. start and stop) the product. While they are operational, they are not mandatory, and some sites prefer to use the native utilities provided by the Web Application Server vendors to start the product.


The reason sites use the native utilities is that operations staff are more familiar with them, they offer more options, and they typically have a number of interfaces (not just the command line). The Oracle Utilities Application Framework provided utilities utilize the native utilities but use only a subset of options. If the native utilities are used, then the spl[.sh] utility should only be used to start and stop non-Web Application Server components.

Hardware or software proxy


You will need to proxy connections if you use clustering or a number of managed servers. The choice of a software or hardware proxy is site specific; large customers prefer hardware proxies and smaller ones software proxies. If the implementation uses multiple servers, then a proxy is needed to group the servers into a cluster or managed configuration for load balancing purposes. There are two alternatives for such a proxy:

• Software - Each of the Web Application Servers supported by the product provides a plugin to use an HTTP server such as Apache, Oracle HTTP Server, IBM HTTP Server, Netscape or IIS as a proxy. Typically the plugin is installed within the HTTP server and configured to define the server addresses and the scheme of load balancing.
• Hardware - Increasingly, network router manufacturers are making hardware products that act as network proxies or load balancers (known as Layer 7 load balancers). Hardware such as BigIp, WebSwitch, NetScaler etc. is increasingly performing load balancing within intelligent hardware. In this case, you simply configure the servers and ports to a virtual address in the hardware and the load balancing scheme to use.

Customers with multiple servers use either a hardware or software proxy, with the larger scale customers favoring hardware based solutions. The only thing to remember with a proxy is to make sure the following are taken into account:

• The proxy server must support the IE caching scheme and not disable it or adversely affect its operation. This will increase network throughput.
• The proxy server must support session cookies. It must be configured to support the passing and processing of session cookies, as they are used for security tokens in the product. Failure on this point will result in the security dialog being displayed before EVERY screen.

What is the number of Web Application instances do I need?


One of the most common questions for an implementation is "how many Web Application Servers do I need to support the number of users planned to be attached to the product in production?" The answer depends on the JVM you are using and its limitations. Tests and experience have shown that the Java Virtual Machine has an internal limitation on the number of threads that can be safely supported for transactions. This is not a hard server limit, but represents the number of active transactions (i.e. users) that can be supported on a Web Application Server at any time.


Tests have shown that this number varies between 300-500 users on a single Web Application Server JVM instance. The number varies according to the JVM version used and the vendor that supplies the JVM. This number represents the maximum number of simultaneously active users hitting the Web Application Server at peak time. The easiest method for determining the number of instances is to divide the number of users expected on the system, at worst case, by 300 and then round up to the next integer. For example, to support 750 users you would specify 3 instances; to support 500 users, 2 instances, and so on. This method assumes the worst case. Regular monitoring of the actual number of connections will reveal whether this needs to be altered.
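The sizing rule above is a ceiling division, which can be sketched as a one-line calculation:

```shell
#!/bin/sh
# Sketch of the sizing rule above: divide worst-case concurrent users by 300
# (the conservative per-JVM figure quoted) and round up to the next integer.

instances_needed() {
  # $1 = expected peak concurrent users; prints the number of JVM instances
  echo $(( ($1 + 299) / 300 ))
}

instances_needed 750   # 3, matching the example in the text
instances_needed 500   # 2
```

Using the lower bound of the 300-500 range builds in the worst-case assumption the text recommends; monitoring actual connection counts then tells you whether fewer instances would suffice.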

Configuring the Client Thread Pool Size


One of the first settings that needs to be configured for the product is the client pool size on the Web Application Server. The thread pool manages the number of active connections to the Web Server (see diagram below). A pool is used because it saves resources by allowing connection threads to be reused instead of constantly created and destroyed.

Figure 14 Client Connection Pooling (diagram: the client connection pool sits in front of the Web Application Server)

Each Web Server calls it a different name:

• Oracle WebLogic Server - Default Execute Queue/Threads
• IBM WebSphere Server - Thread Pool

Note: For newer versions of Oracle WebLogic the thread pool is automatically managed by the Web Application Server itself so the settings explained in this section may not apply. If you choose to manually manage the connections in Oracle WebLogic then the advice does apply.


For the purposes of this document we will call it the thread pool. The number of connections allocated in the pool is not the same as the number of users logged on. As the product is a stateless application, the thread pool represents the number of users actually hitting the web server, not idle users. Idle users in a stateless application consume little or no resources (actually, the only resource an inactive user holds is an open socket to the web server). Therefore the size of the thread pool at any time is the number of ACTIVE users using the product. For the product, the number of users for the Web Server is dictated by this formula:

Number of Active "Users" = Number of Active Users in the product + Number of Active Threads in XAI + Number of Threads in MPL

Note: Not all Oracle Utilities Application Framework based products use the MPL. XAI and MPL threads should be treated as users as well, because they typically share the same thread pool.
Figure 15 Shared connection pooling (diagram: browser clients and XML based applications connect over HTTP/S to the shared connection pool of the Web Server; the OUAF based product, XAIApp and MPL/Fusion then connect onward to JMS, files, the database and the email server)

Thread pools are not static in size; they can grow and shrink depending on the traffic volumes experienced. For the product, thread pools have three attributes that need to be considered for sizing:

• Minimum Size - The size of the thread pool at Web Application Server startup time, and the absolute minimum if the pool is shrunk due to inactivity. For the product, this typically represents the "typical load" on the Web Application Server; in other words, the typical number of active users on the system at any time. Most customers use either the typical load for the day period or the typical load for after business hours. The latter is used where sites want to minimize resource usage, as the pool size is directly related to the amount of memory used by the Web Application Server. The higher the minimum, the higher the memory usage for the server (even at rest).
• Maximum Size - The maximum size the thread pool can grow to within the Web Application Server in response to peak traffic. For the product, this typically represents the largest amount of traffic expected at any point in time. If the maximum is set too low for the load, then end users will experience delays even getting a connection to the Web Application Server. Again, this value is tied to memory usage: the higher the value, the higher the memory footprint at peak.
• Inactivity Tolerance - This value (usually in seconds) is the amount of time that a thread is not allocated to a user before it is destroyed. It is used to shrink the pool, when it has grown above the minimum, upon detecting a drop in traffic. Each Web Application Server has its own default and even its own name for this setting. Typically customers leave the default, but it is worth noting in case it needs changing in the future.

How do you work out the pool sizes? The product does not have a specific recommendation, as it varies according to the volume of transactions, but the following has been observed at customer sites:

• For the minimum pool size, set the value to the minimum number of active users for your site. This may be deduced from testing, but be aware that each transaction has a different duration depending on the transaction type (Maintenance, List and Search) and the actual data used in the transaction. Experience has shown that dividing the number of defined users by three (3) can be a good rule of thumb; several product customers have noticed that only about a third of their users are active at any time. This rule of thumb may not apply to your site, but at least it may be used as a guide.
• As for the maximum, the only advice that is applicable is that the value should NOT equal the number of users you have defined to the system. The value will vary according to the expected peak traffic experienced at the site. Customers have used between 33-70% of the number of defined users as the setting for the maximum pool size. To determine the optimum value for your site, it may be necessary to use trial and error.

Note: Setting the minimum and maximum to higher than necessary values may waste memory resources on the Web Application Server and may cause performance degradation. Once you have applied the settings in your configuration, monitor them to see whether the minimums and maximums need adjusting. Customers have determined their own rules of thumb and reach the sweet spot after a few weeks or months of testing or production.
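The two rules of thumb above (minimum of about one third of defined users, maximum somewhere between 33% and 70%) can be sketched as a starting-point calculation. The 50% figure used for the maximum is an arbitrary midpoint of the quoted range, not a product recommendation; monitoring, as the text notes, is what refines it.

```shell
#!/bin/sh
# Sketch: first-cut thread pool sizes from the number of defined users.
# min = users / 3 (the "third are active" observation); max = users / 2
# (an arbitrary point in the 33-70% range the text quotes).

pool_sizes() {
  # $1 = number of defined users; prints "min max"
  min=$(( $1 / 3 ))
  max=$(( $1 / 2 ))
  echo "$min $max"
}

pool_sizes 900   # 300 450
```

These numbers are only a starting point for trial and error; the memory-cost warning above applies directly, since both values feed the Web Application Server's footprint.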

Defining external LDAP to the Web Application Server


Note: A detailed discussion of LDAP integration is available in the Oracle Utilities Application Framework LDAP Integration whitepaper.

Lightweight Directory Access Protocol (LDAP) is promoted as a means to leverage an organizational directory as a principal registry for product user authentication. Therefore, as part of the security setup of the product, you may need to integrate with an onsite LDAP security repository. This is supported directly by the Web Application Server software; the product does not require additional configuration. Each of the Web Application Server vendors has specific instructions for integrating LDAP, but the same process is followed:

• Determine the LDAP query - The LDAP query to find the users must be determined. Even though LDAP is a standard protocol defined by the IETF, the repository structure itself will vary from vendor to vendor, and even the same vendor's repository structure will vary from customer to customer, as it can be altered to suit the business model. This is the hardest part of the process, as the query needs to be correct or it will not return the right records (it is akin to submitting the wrong SQL statement). There are tools, like ADFIND (for Microsoft ADS, for example), to help you with this process.
• Define the LDAP settings to the Web Application Server - Input the query and the credentials to access the LDAP repository. This will vary between Web Application Servers, but basically you need to define the following:
  - The location (host) of the LDAP server(s)
  - The port numbers for the LDAP server(s) (usually 389)
  - The credentials used to read the LDAP server(s) (userid/password)
  - The LDAP query to get the users (and, for some Web Application Servers, the groups)
  - (Optional) Cache settings to save data retrieved from the LDAP server for performance reasons
Note: Ensure that the LDAP repository you have specified contains a definition of the administration account you use to start/stop/administer the product; otherwise, if you have made a mistake, it may not be possible to restart the Web Application Server. To reduce the risk of this happening, some sites define two repositories as a precaution: one pointing to the LDAP server and one to the default security repository provided by the Web Application Server vendor. The latter is used to house the administration accounts you do not want to store in the company LDAP.

Restart to reflect changes - Restart the Web Application Server.

For more information see the following sites for your Web Application Server:


- Oracle WebLogic: http://download.oracle.com/docs/cd/E12840_01/wls/docs103/secmanage/atn.html#wp1198953
- IBM WebSphere: http://websphere.sys-con.com/read/43210.htm or http://publib.boulder.ibm.com/infocenter/wasinfo/v6r1/topic/com.ibm.websphere.base.doc/info/aes/ae/tsec_ldap.html

Synchronizing LDAP for security


After integrating the LDAP to the authentication components of the Web Application Server, some customers then want the ability to import the LDAP information into the security model. The LDAP import process uses a special XAI service (LDAP Import) that reads the information from the LDAP store and creates the appropriate security entries by calling standard XAI services to maintain users and groups. The entire import process may be more appropriately called synchronization, because groups, users, and the connections between them are synchronized between the LDAP store and the security model used by the product.

Customers use the LDAP import when they want to synchronize large numbers of user changes from the LDAP to the product, or want a one-step process. Without the LDAP import, every time a user is added to the LDAP server, that user must be manually entered into the authorization model to define what access they have within the product. If you have a lot of users this can be perceived as an unacceptable overhead. With the import function, you add the user once in the LDAP and import them into the authorization model on a regular basis.

The process for configuring the LDAP import can seem complicated but is logical. The following process is used to configure the LDAP Import function:

LDAP Query - The LDAP query to return the users, groups and group membership is determined using tools the LDAP vendor or third parties provide. As pointed out previously, each vendor and each site has a different structure implemented. The LDAP vendor sometimes provides utilities to assist in this; there are also third-party tools such as adfind and LDAP Explorer that can be of assistance.

Mapping - A mapping must be provided between the LDAP repository attributes and the User and User Group objects in the product. The product security authorization model stores additional attributes particular to the product that are usually not stored in the LDAP repository.
The following process can be used to assist in the mapping process:
- Generate an XAI schema for the User and User Group objects using the XAI Schema Editor. This will give you the attribute/tag names to use in the mapping.
- Use a tool to query the LDAP repository to determine what values and fields have been stored in the repository. This will vary from implementation to implementation.


Look at both inputs and find the fields from the LDAP that you can map to the product schema. The mapping uses the cdxName attribute for product fields and ldapAttr for the LDAP field name. For example:

<LDAPCDXAttrMapping cdxName="Firstname" ldapAttr="cn">

The above entry maps the product field "First Name" to the "cn" field in the LDAP, so when the import is performed it will use the "cn" value for "First Name".

Any fields required by the product but not present in the LDAP can use the "default" attribute instead of the ldapAttr attribute. Repeat the above tasks for the "User Group" object and for group membership.
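Building on the example above, a mapping file fragment might look like the following. The LastName/sn mapping and the default value are illustrative assumptions; the authoritative file format is documented in the Importing Users And Groups online documentation:

```xml
<!-- Illustrative fragment only; attribute values are assumptions for this example. -->
<LDAPCDXAttrMapping cdxName="Firstname" ldapAttr="cn"/>   <!-- taken from the LDAP "cn" value -->
<LDAPCDXAttrMapping cdxName="LastName" ldapAttr="sn"/>    <!-- assumed LDAP attribute -->
<LDAPCDXAttrMapping cdxName="Language" default="ENG"/>    <!-- not in LDAP: fixed default value -->
```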

- Store the mapping file - Create a mapping file in a location readable by the product administration account. The format of the file is documented in the Importing Users And Groups online documentation.
- Include the mapping file - Include the mapping file in the XMLParameterInfo.xml configuration file as documented in the Importing Users And Groups documentation.
- Define the LDAP server location - Configure the LDAP server location and port number in the XAI JNDI Server dialog.
- Run - Initiate the import as documented in the Importing Users And Groups online documentation.

The LDAP import service calls the LDAP Import Adapter, which performs the following:
- It reads the configuration information provided as XAI parameters to the request. Parameters include the Java Naming and Directory Interface (JNDI) server, user and password for the LDAP server, and the transaction type (i.e., import).
- It connects to the LDAP store using a JNDI specification.
- For each element (user or group) in the request, the LDAP is searched by applying the search filter (specified for the element) and the searchParm (specified in the request).
- The adapter goes through each entry found in the search and verifies whether or not there is already an entry in the system and whether a user belongs to a user group. From this information, it automatically determines the action to be taken: Add, Update, Link user to group, or Unlink user from group (by setting the expiration date).


If the entry is a group, the adapter also imports all the users in LDAP that are linked to the group. If the entry is a user, the adapter imports the groups to which the user belongs in LDAP.

For each imported entity, the adapter creates an appropriate XML request and adds it to the XAI upload staging table. For example, if the action is to add a user, it creates an XML request corresponding to the CDxXAIUserMaintenance service; if the action is to add a group, it creates an XML request corresponding to the CDxXAIUserGroupMaintenance service. The XML upload staging receiver processes the upload records in sequential order (based on the upload staging ID). The MPL is used to complete the processing.

Note: If a user is imported because it belongs to an imported group, the adapter does not import all the other groups to which the user belongs. Likewise, if a group is imported because an imported user belongs to it, the adapter does not import all the other users that belong to the group.

Note: Users and groups whose names exceed the length limit in the system are not synchronized.

Appropriate use of AppViewer


The AppViewer is a component of the product that displays meta-data in a more usable format. In past versions of the product, it was preloaded with every product environment. Typically the information is used for design and development work only. To make the best use of the AppViewer the following advice is offered:

The AppViewer is provided blank intentionally. It must be primed using a predefined set of batch jobs, which take the meta-data (including ANY customizations) and generate the AppViewer content from it. You will need to run the jobs regularly if you update the meta-data regularly and want the information reflected in the Application Viewer.
TABLE 13 - APPVIEWER BATCH CONTROLS

BATCH CONTROL | USAGE
F1-AVALG | Generate AppViewer XML file(s) for Algorithm data (includes javadocs). This is code generation as well.
F1-AVBT | Generate AppViewer XML file(s) for Batch Control. This is useful for run book information.
F1-AVMO | Generate AppViewer XML file(s) for Maintenance Object data.
F1-AVTBL | Generate AppViewer XML file(s) for Table/Field data.
F1-AVTD | Generate AppViewer XML file(s) for To Do Type.

The introduction of the batch jobs means you can decide which information is important for your site to display in the AppViewer. For example, if you do not wish to have To Do Types documented, you can omit that information by not running that job. If you wish to populate ALL the information, you can use the genappvieweritems command (or genappvieweritems.sh for UNIX).
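As a sketch, a full regeneration might be driven as below. The utility names come from the product; the environment name DEV01 is an assumption, and the commands are only printed here (via a print-only runner) rather than executed:

```shell
# Print-only runner so the sketch is safe to run anywhere; on a real
# environment, call the utilities directly instead of echoing them.
run() { echo "+ $*"; }

run splenviron.sh -e DEV01      # attach to the target environment (name is an example)
run genappvieweritems.sh        # regenerate ALL AppViewer information
```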


Consider only populating the information in any design and development environments to save disk space. The AppViewer can extend to a number of gigabytes if fully loaded.

Fine Grained JVM Options


Note: This facility is available in Oracle Utilities Application Framework V4 and above only.

The utilities provided with the Oracle Utilities Application Framework invoke a Java command line for the Web Application Server, Business Application Server and batch components of the architecture. Whilst the memory arguments and java options are standardized in the utilities, some sites have found that changing the defaults allows for improvements in performance and stability. In releases of the Oracle Utilities Application Framework prior to V4, this meant manually changing the utility scripts provided with the product, and those changes could be overwritten by upgrades. In Oracle Utilities Application Framework V4 and above it is possible to set the memory requirements and additional JVM options from configuration parameters. The following table lists the settings that can be altered using the configureEnv utility:
TABLE 14 - JVM MEMORY AND OPTIONS CONFIGURATION SETTINGS

CONFIGURATION SETTING | COMPONENT | USAGE
ANT_ADDITIONAL_OPT | ANT | Additional java options for the ANT make tool.
ANT_OPT_MAX | ANT | Maximum memory size for the ANT make tool.
ANT_OPT_MIN | ANT | Minimum memory size for the ANT make tool.
BATCH_MEMORY_ADDITIONAL_OPT | Batch | Additional java options for Batch Threadpool workers.
BATCH_MEMORY_OPT_MAX | Batch | Maximum memory for Batch Threadpool workers.
BATCH_MEMORY_OPT_MAXPERMSIZE | Batch | Maximum permanent generation size for Batch Threadpool workers.
BATCH_MEMORY_OPT_MIN | Batch | Minimum memory for Batch Threadpool workers.
WEB_ADDITIONAL_OPT | Web/Business | Additional java options for the J2EE Web Application Server.
WEB_MEMORY_OPT_MAX | Web/Business | Maximum memory for the J2EE Web Application Server.
WEB_MEMORY_OPT_MAXPERMSIZE | Web/Business | Maximum permanent generation size for the J2EE Web Application Server.
WEB_MEMORY_OPT_MIN | Web/Business | Minimum memory for the J2EE Web Application Server.

The values for these settings will vary according to your site needs and the JVM vendor used at your site. The following guidelines should be considered when changing these values:
- The additional java options supported by each JVM vendor are slightly different, to take advantage of specific platform requirements. Refer to the JVM options documentation provided with your JVM. For Oracle/Sun based JVMs refer to http://java.sun.com/javase/technologies/hotspot/vmoptions.jsp
- Ensure any options specified are within the constraints and restrictions of the JVM. Setting invalid values may result in failure or unexpected behavior.


- Do not specify the -Xms, -Xmx or -XX:PermSize parameters as additional options, as these already have dedicated settings.

The following common settings have been used by customers:

TABLE 15 - COMMON JAVA ADDITIONAL OPTIONS

CONFIGURATION SETTING | USAGE
-XX:+UseParallelGC | Use parallel garbage collection.
-XX:+MaxFDLimit | Bump the number of file descriptors to the maximum (Solaris only).
-XX:+UseGCOverheadLimit | Use a policy that limits the proportion of the VM's time that is spent in garbage collection before an OutOfMemory error is thrown.
-XX:+UseLargePages | Use large page memory. See Large Memory Pages for more details.
-XX:+HeapDumpOnOutOfMemoryError | Dump the heap to a file when java.lang.OutOfMemoryError is thrown. Commonly used by Oracle Support if necessary.
-XX:HeapDumpPath=<path and name> | Location and name of the dump file. Commonly used by Oracle Support if necessary.
-XX:+PrintGC | Print a message when garbage collection occurs.

Note: The Production Environment Configuration Guidelines whitepaper contains advice for settings for all versions of the Oracle Utilities Application Framework based products.

Customizing the server context


Note: This facility is available in Oracle Utilities Application Framework V4 and above only.

In past versions of the Oracle Utilities Application Framework the URL used by the product was fixed on certain platforms. The URL included the context spl or splapp, depending on the platform and the J2EE Web Application Server used. In Oracle Utilities Application Framework V4 and above, it is possible to specify a custom context as part of the installation process. This allows the following URL format to be used as the default:

http://<host>:<port>/<server>/cis.jsp

with the following settings:

<host> - The hostname for the Web Application Server.
<port> - The port number allocated to WL_PORT at installation time. To avoid specifying the port number in the URL, a value of 80 may be used; this value can only be specified once per Web Application Server machine.
<server> - The server context, set using WEB_CONTEXT_ROOT at installation time. This value must be valid for the J2EE Web Application Server and is restricted to a single text value without any embedded blanks or special characters (such as the directory character).
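For illustration, with assumed values for each component, the resulting default URL would be built as follows (host and context are examples only):

```shell
# Assumed example values -- not defaults shipped with the product.
WEB_HOST=ouafprod.example.com
WEB_PORT=80
WEB_CONTEXT_ROOT=myapp

# Port 80 may be omitted from the URL given to users.
URL="http://${WEB_HOST}/${WEB_CONTEXT_ROOT}/cis.jsp"
echo "$URL"
```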

Clustering or Managed?
One of the decisions that must be made when dealing with multiple Web Application Servers is whether the servers will be clustered or managed. The attributes of each style are outlined below:

Clustered - A cluster is a group of servers running a Web Application Server simultaneously, appearing to the users as if it were a single server (usually managed by a separate administration server). The advantages of using a cluster are that you can manage the servers as a group, and the servers communicate with each other to monitor availability; clusters can load balance within themselves as they are in constant communication with each other. The disadvantages are that there is an overhead in communication (usually each server uses multicast to communicate with the other servers in the cluster) and each server must use a different IP address and port number. The figure below summarizes a cluster:

[Figure 16: Example clustered server architecture - a load balancer in front of a Web Application Server cluster, with a separate administration server.]

Managed - Managed servers are a set of Web Application Servers that are independent of each other. They can be housed on a single machine or multiple machines, and on machines of differing size. The advantages of managed servers are that each server can be targeted for specific user groups and can be managed independently; there is no additional communication between the servers. A separate administration server can manage the servers, but that role can be taken by one of the managed servers if desired. The disadvantages are that the load balancing software/hardware sitting between the users and the managed servers must perform the load balancing, and that deployment must be performed individually per server. The figure below summarizes managed servers:


[Figure 17: Example managed server architecture - a load balancer in front of independently managed Web Application Servers, with a separate administration server.]

There are no clear winners between clustered and managed Web Application Servers, as the main factors in the decision are:
- Amount of hardware - Clustering requires a hardware server per server. Sites where a small number of servers are deployed cannot use clustering.
- Maintenance effort - Clustering can reduce maintenance overhead if there are a large number of servers involved. Managed servers require individual maintenance.
- Tolerance for multi-casting - Some sites ban multi-casting as it can be perceived as an unacceptable overhead on the network. Deploying a private network between the servers can minimize this, though it is more expensive.
- Flexibility - Many sites use managed servers due to the flexibility in routing particular traffic to particular servers; for example, setting up specific servers for non-call center traffic (e.g. XAI, interfaces, depots).

Whether your site uses clustering or managed servers does not factor into high availability solutions as customers have deployed high availability solutions using either technique.
Clustering and Environmental configuration settings

The configuration files used by the Oracle Utilities Application Framework specify a number of environment-focused settings (e.g. hostnames, ports, file paths etc). These are used by the runtime of the Oracle Utilities Application Framework to orient to the correct environment. Given these environment settings are embedded in the configuration files, there may be an impact on sites using clustering. To support clustering with embedded environmental settings the following guidelines are recommended:

- Apply Patches - For Oracle Utilities Application Framework V2.2 sites, it is recommended to install Single Fix 8218568 to externalize some of the configuration files outside of the product. At the time of writing, this patch was only available for Oracle Utilities Application Framework V2.2; check "My Oracle Support" for the latest status for other versions of the Oracle Utilities Application Framework. Sites using Framework V4 and above do not need to apply any additional patches at the time of writing.
- Host Name settings - In a clustered environment the hostname used for any configuration setting should be the cluster host or the load balancing proxy used for the cluster. To access a cluster, the users (or servers) need to access a single URL; the host component of that URL should be used for any hostname configuration settings.
- Custom Context - In Oracle Utilities Application Framework V4 and above, it is possible to specify a custom URL context at installation time. In a clustered environment, the context should be common and therefore the setting of this value should be the same across all nodes of a cluster.
- Port Numbers - As part of the URL used for the product, a port number can be explicitly used. At most sites, port 80 is used for production as it does not need to be specified on the URL by users. In a clustered environment this port should be common and therefore the setting of this value should be the same across all nodes of a cluster. Most J2EE Application Server vendors insist that all nodes of a cluster have the same port number (but different hostnames).
- File Locations - The product requires some knowledge of where environment-specific information is stored. This information is configured to inform the product where specific configuration files and important directories are located. Installing the software in a common location, or in the same location on each node, helps the file locations support clustering.

Note: There are environmental configuration settings in the J2EE Web Application Descriptor (web.xml) and XAI Options screen as well as configuration files covered by Single Fix 8218568.

Allocate port numbers appropriately


When installing a copy of the product you need to allocate a number of port numbers for each environment. It is recommended to allocate a previously unused range of ports per environment to avoid port conflicts. The following table outlines all the port numbers required by the product at installation time:
TABLE 16 - PORT ENVIRONMENT SETTINGS

PORT | P/I | COMMENTS
BATCH_RMI_PORT | I | Default JMX port for monitoring the Batch threadpool
BSN_JMX_RMI_PORT_PERFORMANCE | I | Default JMX port used for Business App Server monitoring
BSN_OASREQPORT | P | Oracle Application Server Request Port for Business App
BSN_OASRMIPORT | P | OC4J Instance RMI Port for Business App
BSN_OC4JORMIPORT | P | OC4J Standalone ORMI Port
BSN_RMIPORT | I | JVM child process starting port number (COBOL products only)
BSN_WASBOOTSTRAPPORT | P | Bootstrap port (WebSphere)
DBPORT | P | Database Connection Port
MPLADMINPORT | I | MPL Administration Port (if MPL available)
OSB_PORT_NUMBER | I | Port allocated to Oracle Service Bus interface (if available)
SOA_PORT_NUMBER | I | Port allocated to Oracle SOA (if available)
WEB_JMX_RMI_PORT_PERFORMANCE | I | Default JMX port used for Web App Server monitoring
WEB_OASREQPORT | P | Oracle Application Server Request Port for Web App
WEB_TCATSHUTPORT | I | Tomcat Shutdown Port
WEB_WLPORT | P | Web Server Port
WEB_WLSSLPORT | P | Web Server Port using SSL (WebLogic)

Legend: P - port allocated prior to installation of the product; I - port allocated during installation of the product.

Prior to installation of the product, the database and Web Application Server need to be installed, and the ports allocated to these components recorded and provided for the installation of the product (they are indicated with a "P" in the table). Each vendor will have the port definitions stored in different places; refer to the vendor documentation for more information. When allocating ports (indicated with an "I" in the table) during the installation, the following advice may be useful:
- Pick the same port numbering scheme per environment to save time allocating ports. Some sites find using the same last digits for each type of port is helpful; for example, always ending BSN_RMIPORT in 4 (6504, 7914, 9724, 22034 etc).
- BSN_RMIPORT denotes a starting port: the number indicates the start of the port range, and the JVMCOUNT determines how many ports are allocated. Ensure that there are free ports in the range starting from that port number. Note: BSN_RMIPORT and JVMCOUNT only apply to products using COBOL support.
- Document the ports used in your documentation or services file for future reference.
- Do not allocate used ports, as there will be port conflicts and potentially the applications will refuse to work.
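The numbering-scheme advice above can be sketched as a simple convention; the base, spacing, and offsets below are arbitrary illustrations, not product defaults:

```shell
# Arbitrary example scheme: each environment gets a 100-port band,
# and each port type a fixed last-digit offset within the band.
base_port() { echo $(( 6000 + $1 * 100 )); }   # $1 = environment number

ENV_NO=5
BASE=$(base_port $ENV_NO)        # 6500 for environment 5
WEB_WLPORT=$(( BASE + 0 ))       # web port always ends in ...00
BSN_RMIPORT=$(( BASE + 4 ))      # RMI starting port always ends in ...04
```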

Monitoring and Managing the Web Application Server using JMX


Note: This facility is available in Oracle Utilities Application Framework V4 and above only.

In Oracle Utilities Application Framework V4.0 it is possible to enable JMX performance statistics to allow collection, management and monitoring of JVM information for the Web Application Server. For backward compatibility, the JMX enabled facilities are disabled by default. To use this facility you must execute the configureEnv utility with the -a option (Advanced Menu) and specify the following settings:


TABLE 17 - JMX SETTINGS FOR WEB APPLICATION SERVER

SETTING | CONTENTS
JMX Enablement System Userid | Userid used for logging onto the JMX MBeans
JMX Enablement System Password | Password to be used for the JMX Enablement System Userid
RMI Port for JMX Web | Port number to allocate to JMX for the Web Application Server

This information is added to the spl.properties file in the etc/conf/root/WEB-INF/classes subdirectory for the environment, for the Web Application Server. An example of the applicable settings is shown below:

spl.runtime.management.rmi.port=..
spl.runtime.management.connector.url.default=service:jmx:rmi:///jndi/rmi://hostname:../oracle/ouaf/webAppConnector
jmx.remote.x.password.file=scripts/ouaf.jmx.password.file
jmx.remote.x.access.file=scripts/ouaf.jmx.access.file
ouaf.jmx.com.splwg.base.support.management.mbean.JVMInfo=enabled
ouaf.jmx.com.splwg.base.web.mbeans.FlushBean=enabled

The following settings are important to the JMX monitor:
- The spl.runtime.management.connector.url.default is the JMX URL to be used in the JMX console or JMX browser.
- The jmx.remote.x.password.file and jmx.remote.x.access.file settings are the default security setup for JMX. These are for a basic security setup. For more information about the files and alternative security setups refer to http://java.sun.com/javase/6/docs/technotes/guides/management/agent.html.
- The ouaf.jmx.* settings enable individual beans at startup time. These may also be enabled at runtime.
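For example, once configured, the service URL might be passed to any JSR 160 compliant console such as jconsole; the hostname and port below are illustrative assumptions, not values shipped with the product:

```shell
# Illustrative host/port; substitute the values configured via configureEnv.
JMX_URL="service:jmx:rmi:///jndi/rmi://ouafhost.example.com:6740/oracle/ouaf/webAppConnector"

# Print the invocation; the JMX Enablement System Userid/Password are supplied when prompted.
echo "jconsole $JMX_URL"
```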

Once the Web Application Server component is started, the JMX MBeans defined in this configuration are started, and a JSR 160 compliant JMX console or JMX browser can be used to connect to them using the remote URL and credentials configured above. Within the JMX console or JMX browser a number of specific facilities are available:
- It is possible to manage the data within the Web Application Server cache from JMX. In past releases of Oracle Utilities Application Framework this was possible using utility URLs, which required the IT group to log on to the product to issue commands. This is still possible but can be replaced with JMX console commands. This is controlled by the FlushBean MBean.
- It is possible to get environmental information about the Web Application Server Java Virtual Machine (JVM) for support purposes, again replacing the older utility URLs if desired. This is controlled by the JVMInfo MBean.


- It is possible to get internal JVM information about the Web Application Server using the JVMSystem MBean. This is an extension of the base Java MXBeans (http://java.sun.com/javase/6/docs/api/java/lang/management/package-summary.html). By default these are disabled; they can be enabled by executing the enableJVMSystemBeans operation from the BaseMasterBean. When enabled, the following additional areas can be monitored via JMX for the Web Application Server:
  - Class loading statistics
  - Memory statistics
  - Operating system statistics (statistics vary by platform)
  - JVM runtime information (additional to JVMInfo)
  - Thread statistics - statistics on individual java threads

Note: No confirmation (i.e. Are You Sure?) dialog is provided with most JMX consoles or JMX browser so care should be taken when issuing commands.

Enabling autodeployment for Oracle WebLogic console


Note: The technique shown below applies to Oracle Utilities Application Framework V4.1 and above. For other versions of the Oracle Utilities Application Framework, custom templates or manual changes from the Oracle WebLogic console are necessary. Refer to the Configuration And Operations Guides for those products for more information.

By default, the Oracle WebLogic console is deployed on demand, on first use, when using the default templates supplied by the product. This behavior can be altered to autodeploy the console at startup, avoiding the initial delay when first using the console. To autodeploy the console on startup, add the following to the %SPLEBASE%\templates\CM_config.xml.win.exit_3.include user exit file (for Windows) or the $SPLEBASE/templates/CM_config.xml.exit_3.include user exit file (for Linux/UNIX):
<internal-apps-deploy-on-demand-enabled>false</internal-apps-deploy-on-demand-enabled>

Run the initialSetup utility to reflect the change. This configuration will be added to the Oracle WebLogic configuration.
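The sequence might look as follows; the environment name is an assumption, and the commands are printed rather than executed in this sketch:

```shell
run() { echo "+ $*"; }   # print-only runner; call the utilities directly on a real system

run splenviron.sh -e DEV01   # attach to the environment (DEV01 is an example name)
run initialSetup.sh          # regenerate the WebLogic configuration from the templates
```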

Password Management solution for Oracle WebLogic


One of the common enhancement requests is the ability for users to change their application passwords from within the product. Typically password management is scoped outside the product's domain, as it is considered infrastructure. The product can still provide the interface to change a password, but it is the infrastructure's responsibility to provide the mechanism that changes the passwords in the security store.


The issue then becomes whether the infrastructure provides such an interface for the product to hook into. There are a number of patterns in this area:
- Customers implement an identity management solution to manage the passwords, expiry and rules. In this case the implementation needs to interface to the identity management solution by calling its appropriate password facilities. The J2EE Web Application Server used is then interfaced to the identity management solution, or the related security store, to provide the authentication mechanism.
- Customers link the security store for authentication directly to the security configuration of the J2EE Web Application Server. In this case, the J2EE Web Application Server provides the interface to the password change facility.

In the latter case, if you are a customer using Oracle WebLogic, there is an example JSP available on Oracle TechNet (registration required) under Code Samples (project S20) that allows an application to change passwords, irrespective of the security store used. This example can be altered to suit your site's standards and linked to the product as a custom JSP via a navigation key to link it to the appropriate menu.

Error configuring Oracle WebLogic credentials


When the product is installed with Oracle WebLogic, the security repository used by the environment is populated with an initial administration system userid (usually system) to be used to create other credentials post installation. To use this user within Oracle WebLogic, the userid and password must be encrypted before they can be used. The installer calls a java class within Oracle WebLogic to perform this encryption, but if the path to Oracle WebLogic specified in the WEB_SERVER_HOME (or WL_HOME) parameter is incorrect, the installer will return this error when attempting to encrypt the user:

<crit> Error occured while running java -Dweblogic.RootDirectory=/splapp weblogic.security.Encrypt : Output is
Exception in thread "main" java.lang.NoClassDefFoundError: weblogic/security/Encrypt
Caused by: java.lang.ClassNotFoundException: weblogic.security.Encrypt
Could not find the main class: weblogic.security.Encrypt. Program will exit.
To fix this issue, set the WEB_SERVER_HOME value using the configureEnv[.sh] utility (or set WL_HOME) so the installer can access the appropriate security encryption classes.

Note: WL_HOME is used by Oracle Utilities Application Framework V2.x. WEB_SERVER_HOME is used by Oracle Utilities Application Framework V4.x and above.


Corrupted SPLApp.war
By default, the product installer uses archive mode for the product deployment (this is true for Oracle WebLogic and IBM WebSphere, though in Oracle WebLogic expanded mode is also supported). When using archive mode the product utilities build the product into a set of J2EE WAR and EAR files prior to deployment. The WAR and EAR build is performed by the initialSetup[.sh] utility; refer to the Server Administration Guides or Configuration and Operations Guides for the product for a detailed description of the options and operations supported by this utility. If, for any reason, the WAR or EAR files are not built completely, and are therefore corrupted, the product startup may abort. This can manifest in a number of error messages depending on the nature of the corruption:

<info> ERROR: /splapp/applications/SPLApp.war war file does not exist. Problem with the environment. Exiting.
or

weblogic.management.DeploymentException: Unexpected end of ZLIB input stream at weblogic.application.internal.EarDeploymentFactory.findOrCreateComponentMBeans(EarDeploymentFactory.java:189)

To resolve this issue, rerun the initialSetup[.sh] utility to recompile the WAR and EAR files.

Web Application Server Logs
The Server Administration Guide or Operations and Configuration Guide for your product outlines the product-specific logs, including their formats and locations. Given the product runs within a J2EE Web Application Server, that server also produces a set of log files that can be used for diagnostic information. The table below outlines the default set of J2EE Web Application Server log files:
TABLE 18 WEB APPLICATION SERVER LOGS

ORACLE WEBLOGIC ($SPLEBASE/logs/system):
myserver.log, weblogic_current.log, access.log

IBM WEBSPHERE ($WAS_HOME/profiles/AppSvr01/logs/<server>):
SystemErr.log, SystemOut.log, startServer.log, exception.log, activity.log

Refer to the J2EE Web Application Server documentation for details of the logs and their format.


IBM WebSphere Specific Advice


The Oracle Utilities Application Framework supports both Oracle WebLogic and IBM WebSphere. Most of the J2EE Web Application Server specific advice in this document pertains to Oracle WebLogic; this section outlines advice specific to IBM WebSphere installations. Refer to http://publib.boulder.ibm.com/infocenter/pvcsensa/v7r0m0/index.jsp?topic=/com.ibm.wse.doc_7.0.0/ts_common.html for common IBM WebSphere tips and techniques. Note: If your site does not use IBM WebSphere then ignore this section.
Class Loading Issues

By default, IBM WebSphere loads its own classes ahead of any classes used by products running within it. If there is a conflict, or the product uses a different version of a class (such as a newer version of a class library), the IBM WebSphere version is used by default, which may cause conflicts. To avoid issues between the classes provided with IBM WebSphere and any Oracle Utilities Application Framework based product, it is highly recommended to set the class loading within IBM WebSphere to load parent (i.e. WebSphere) class libraries last. Note: The Oracle Utilities Application Framework does not include its own class loader; it uses the class loading options in the J2EE Web Application Server. To set this value, navigate to the Enterprise Applications > [Web Enterprise Application Name] > Manage Modules option within the IBM WebSphere console. Select Class Loader Order and then choose Classes loaded with local class loader first (parent last). If this setting is not set, startup or runtime errors may occur similar to the one below:

[12/28/10 23:14:31:854 PST] 00000000 FfdcProvider W com.ibm.ws.ffdc.impl.FfdcProvider logIncident FFDC1003I: FFDC Incident emitted on /opt/IBM/WebSphere7064/AppServer/profiles/AppSrv01/logs/ffdc/server8_35c035c_10.12.28_23.14.31.7522146543044581884850.txt com.ibm.ws.webcontainer.webapp.WebApp.notifyServletContextCreated 1341
[12/28/10 23:14:31:896 PST] 00000000 webapp E com.ibm.ws.webcontainer.webapp.WebApp notifyServletContextCreated SRVE0283E: Exception caught while initializing context: {0}
java.lang.NoSuchMethodError: com/ibm/icu/math/BigDecimal.<init>(Ljava/math/BigDecimal;)V
at com.splwg.base.support.sql.NumericSQLTypeHelper.getFromResultSet(NumericSQLTypeHelper.java:50)
JNDI Issues with EJB


Note: Oracle Utilities Application Framework V4.x and above uses Enterprise Java Beans (EJB) for the Business Application Server. This advice therefore only applies to those versions. By default, the product configuration settings within IBM WebSphere are set correctly during the deployment process. If there is an issue with the deployment, for any reason, the EJB definitions are the most likely to be set incorrectly. Typically an error similar to the one below is displayed:

[12/28/10 23:14:40:039 PST] 00000000 WASSessionCor I SessionContextRegistry getSessionContext SESN0176I: Will create a new session context for application key default_host/ouaf/help
[12/28/10 23:14:40:103 PST] 00000000 webcontainer I com.ibm.ws.wswebcontainer.VirtualHost addWebApplication SRVE0250I: Web Module null has been bound to default_host[*:9081,*:80,*:9444,*:5065,*:5064,*:443,*:9083].
[12/28/10 23:14:40:152 PST] 00000000 ApplicationMg A WSVR0221I: Application started: SPLWeb-server8
[12/28/10 23:14:40:176 PST] 00000000 CompositionUn A WSVR0191I: Composition unit WebSphere:cuname=SPLWeb-server8 in BLA WebSphere:blaname=SPLWeb-server8 started.
[12/28/10 23:14:40:200 PST] 00000000 ContainerHelp E WSVR0501E: Error creating component com.ibm.ws.runtime.component.CompositionUnitMgrImpl@67a067a com.ibm.ws.exception.RuntimeWarning: javax.naming.NameAlreadyBoundException: The com.splwg.ejb.service.Service interface of the SPLServiceBean bean in the spl-servicebean-4.1.0.jar module of the SPLService-server8 application cannot be bound to the ouaf/servicebean name location. The com.splwg.ejb.liteservice.api.ServiceRemote interface of the TUGBULiteServiceBean bean in the spl-servicebean-4.1.0.jar module of the SPLService-server8 application has already been bound to the ouaf/servicebean name location.
To correct this, either rebuild and redeploy the WAR/EAR files using the initialSetup[.sh] utility, or set the target JNDI name definition for the default EJB module TUGBULiteServiceBean correctly (to <Web Context Root>/TUGBULiteServiceBean, where <Web Context Root> is the context assigned for the environment URL, usually ouaf).
CORBA Transient Security Errors

In IBM WebSphere a number of users are set up by the installation process. These users are:

A user to administer the product on the IBM WebSphere console (by default wasadmin).

A user for the Web Application Server to securely connect to the Enterprise Java Beans on the Business Application Server (by default webjndi).


If these users are not set up correctly (directly or indirectly) then the product will experience an org.omg.CORBA.TRANSIENT error thrown by IBM WebSphere. To correct this, navigate to the Environment > Naming > CORBA Naming Service Users option and ensure both of the users above (in particular webjndi) have the following CORBA roles:

Cos Naming Read
Cos Naming Write
Cos Naming Create
Cos Naming Delete

User Profile Errors

The userid from the product is passed as part of the application context in each transaction between the browser client and the Web Application Server. If the security components are not configured correctly then an error stating No User profile found for user=' ' (though authenticated to web server as 'null') can occur. For example:

0000001a SystemOut O - 006177-10-1 2011-05-03 11:39:03,681 [WebContainer : 1] WARN (web.services.InitializeUserTag) No user profile found for user='' (though authenticated to web server as 'null') com.splwg.shared.common.ApplicationError: (Server Message) Category: 11001 Number: 902 Call Sequence: Program Name: InitializeUserService Text: User does not have Display Profile. Description: The current user does not have a valid Display Profile. Please refer to the Display Profile setting on the User record. Table: null Field: null
at com.splwg.base.domain.web.InitializeUserService.read(InitializeUserService.java:71)
at com.splwg.base.support.pagemaintenance.AbstractPageMaintenance.readItem(AbstractPageMaintenance.java:91)
To resolve this, it is important to ensure that the security configuration of IBM WebSphere is correct. At a minimum, the following should be enabled in the relevant security section of the IBM WebSphere console:

Enable administrative security
Enable application security
Enable LTPA


Business Application Server Best Practices


The Business Application Server is used by the product to process the business logic. Whilst most of the advice for the Web Application Server can be reused for the Business Application Server, there are a number of practices and some general advice that are specific to this tier of the architecture.

Distributed or local installation


In Oracle Utilities Application Framework V2.2 and above, it is possible to install the Business Application Server separately (known as a distributed installation) or have it co-located with the Web Application Server (known as a local installation). The product does not impose either option; it is the site's choice which architecture suits them. So how do you choose which option is appropriate for your site? The factors below should be considered to assist in making that decision:

Single Machine versus Multiple Machines - To use the distributed installation it is recommended to distribute the tiers across different machines. Conversely, a local installation can be used on a single machine. Both installation variations support clustering (or managed servers).

Network Overhead - The distributed installation has a very slight overhead in terms of performance as there is a network overhead in communication between tiers.

Administration Overhead - Existing customers have indicated that the distributed installation has some overhead in terms of administration. This is because you typically have more instances and machines to manage in a distributed installation.

Architectural Principles - Some sites prefer the distributed installation due to their corporate enterprise architecture principles, where each tier is separated, specific tier-level responsibilities are assigned to resources, and the hardware (virtual or not) is optimized for the specific tier.

The Oracle Utilities Application Framework Architecture Guidelines whitepaper discusses the various architectures available with their individual advantages and disadvantages.

Number of Child JVMS


Note: Not all Oracle Utilities Application Framework based products require Child Java Virtual Machines (JVMs). If the product requires COBOL based code to be executed then Child JVMs are needed to perform the Java to COBOL interface.


By default, two (2) COBOL based Child JVMs are spawned by the product for each of the online, XAIApp and background processing components. This is the minimum recommended for availability and performance of the product under normal conditions. It is worth considering more instances of the Child JVMs if either of the following situations occurs:

The site has a large number of users (>800) who use a large proportion of the product over the business day. In this case there are many potential calls to COBOL modules by different users, and to avoid out of memory conditions it is important to have more Child JVMs available. This situation can also be negated by the presence of more than one Web Application Server, as each Web Application Server has its own Child JVMs.

The product functionality used at the site spans a majority of the product. In this case the number of unique COBOL modules called may be more than expected, and extra Child JVMs may be required to avoid out of memory situations.

The default number of Child JVMs is sufficient for most non-production situations. Refer to the Production Environment Configuration Guidelines for production level settings.

COBOL Memory management


The Child Java Virtual Machines (JVMs) used to provide the Java to COBOL interface require a number of key memory management features unique to the Oracle Utilities Application Framework.

Typically COBOL (and other languages) runs natively on an operating system. In the case of the COBOL used by an Oracle Utilities Application Framework based product, it runs with a set of runtime libraries provided by Micro Focus COBOL. The Oracle Utilities Application Framework wraps this COBOL in a JVM to facilitate the Java to COBOL interface. Because COBOL assumes it is running natively, when running within a JVM control of the COBOL process falls to the Oracle Utilities Application Framework, which has limited control of the underlying processes.

The COBOL processes (expressed as shared libraries and executables on the operating system) are typically attached to the JVM when they are first executed and remain attached, for reuse, as long as the JVM is executing. This has an unfortunate consequence: the thread-bound memory used by those COBOL objects cannot be released until the parent process (in this case the JVM) stops executing. This thread-bound memory is primarily memory allocated by the Micro Focus runtime on the C heap. As threads return to the thread pool and are used again to process calls to different COBOL objects, the memory footprint may continue to grow as different COBOL objects are called. Over time each thread may allocate memory for the complete set of objects. If not managed correctly this situation can lead to out of memory conditions.
As the Child JVM has limited control over individual objects, a number of key elements have been added to the Oracle Utilities Application Framework (that require configuration) to optimize memory management of the Child JVMs:

Load is balanced across the available Child JVMs allocated to the product using a round robin technique to reduce the impact of memory increases.


Child JVMs reuse existing loaded modules as much as possible. An individual module that has been called is only attached once per Child JVM at any given time.

An installation parameter in the Environment Configuration called Release Cobol Thread Memory controls this behavior. This value should be set to true. It can be overridden for each mode of access (online, batch and XAI) by specifying the spl.runtime.cobol.remote.releaseThreadMemoryAfterEachCall parameter in the spl.properties file. Note: Refer to the Batch Best Practices for advice pertaining to the optimal setting of this parameter for background processes.

To reclaim memory of the COBOL objects, the Child JVM must be shunned (stopped and restarted) on a regular basis. This is known as brute force memory management. The Oracle Utilities Application Framework allows control of this in the relevant spl.properties file by setting the following parameters:
spl.runtime.cobol.remote.jvmMaxLifetimeSecs: Number of seconds between automated shunning of the Child JVM

spl.runtime.cobol.remote.jvmMaxRequests: Number of COBOL calls between automated shunning of the Child JVM

As soon as either tolerance is met, the Child JVM is shunned automatically. This does not necessarily occur straight away, as the Child JVM waits for any outstanding work to complete. Because the product uses more than one Child JVM, availability is not compromised: at least one Child JVM is active at any time. The default values for these parameters are sufficient for most sites. Refer to the Batch Operations and Configuration Guide/Batch Server Administration Guide and Operations and Configuration Guide/Server Administration Guide for your product for the default values and additional advice on this facility. With the above facilities, the COBOL memory within the Child JVM can be managed by the Oracle Utilities Application Framework to help avoid memory issues.
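As an illustration, both tolerances are set in the relevant spl.properties file. The values below are placeholders only, not recommendations; consult the Server Administration Guide for your product for the shipped defaults.

```properties
# Placeholder values for illustration only - see the Server Administration Guide
spl.runtime.cobol.remote.jvmMaxLifetimeSecs=3600
spl.runtime.cobol.remote.jvmMaxRequests=20000
```

With these hypothetical values, a Child JVM is shunned after an hour of life or 20,000 COBOL calls, whichever tolerance is reached first.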

Cache Management
One of the features of the Oracle Utilities Application Framework is the implementation of a level 2 cache within the architecture to provide performance benefits for commonly used configuration information. Generally the cache is managed by the Oracle Utilities Application Framework automatically, with little or no interaction from operators. By default, the cache is reloaded as needed or every eight (8) hours, whichever occurs first. Some elements of the cache, such as security information, are refreshed on a more frequent basis (every 30 minutes).


There are a number of cache management utilities to manually refresh all or parts of the cache. These utilities are documented in the Operations and Configuration Guide/Server Administration Guide for your product. While these utilities are rarely used in production, they can be used by appropriately authorized personnel to ensure the cache contains the correct information. Typically a manual refresh is required if configuration data has changed and needs to be reflected as soon as possible.

Monitoring and Managing the Business Application Server using JMX


Note: This facility is available in Oracle Utilities Application Framework V4 and above only. In Oracle Utilities Application Framework V4.0 it is possible to enable JMX performance statistics to allow collection, management and monitoring of JVM information for the Business Application Server. For backward compatibility, the JMX facilities are disabled by default. To use this facility you must execute the configureEnv utility with the a option (Advanced Menu) and specify the following settings:
TABLE 19 JMX SETTINGS FOR BUSINESS APPLICATION SERVER

JMX Enablement System Userid: Userid used for logging onto the JMX Mbeans
JMX Enablement System Password: Password to be used for the JMX Enablement System Userid
RMI Port for JMX Business: Port number to allocate to JMX for the Business Application Server

This information is added to the spl.properties file in the etc/conf/service subdirectory for the environment, for the Business Application Server. An example of the applicable settings is shown below:

spl.runtime.management.rmi.port=
spl.runtime.management.connector.url.default=service:jmx:rmi:///jndi/rmi://host:/oracle/ouaf/ejbAppConnector
ouaf.jmx.com.splwg.ejb.service.management.PerformanceStatistics=enabled
jmx.remote.x.password.file=scripts/ouaf.jmx.password.file
jmx.remote.x.access.file=scripts/ouaf.jmx.access.file


The following settings are important to the JMX monitor:

The spl.runtime.management.connector.url.default setting is the JMX URL to be used in the JMX console or JMX browser.

The jmx.remote.x.password.file and jmx.remote.x.access.file settings are the default security setup for JMX. These are for a basic security setup. For more information about these files and alternative security setups refer to http://java.sun.com/javase/6/docs/technotes/guides/management/agent.html.


The ouaf.jmx.* settings enable individual beans at startup time. These may be enabled at runtime.

Once the Business Application Server component is started, the JMX Mbeans defined in this configuration are started, and a JSR160 compliant JMX console or JMX browser can be used to connect to them using the remote URL and credentials configured above. The only Mbean available with the Business Application Server is the PerformanceStatistics Mbean, which collects object performance data for analysis. For customers familiar with the Oracle Tuxedo product, this facility is similar to the txrpt facility available for performance analysis. The statistics are collected by the Mbean from the time the Mbean is enabled until the statistics are reset. By default, the Mbean is enabled at startup time but may be disabled (or re-enabled) at any time using the disableMbean or enableMbean operations of the PerformanceMbeanController Mbean. When using this Mbean there are a few recommendations:

The completeExecutionDump operation returns a CSV of the performance statistics of individual application services to the JMX console or JMX browser. This represents the current state of the statistics at that time.

The reset operation resets the statistics within the Mbean to restart collection. This operation is handy to measure performance over a selected period.

There are other operations and attributes that return individual value information that may be of interest. Refer to the Server Administration Guide provided with your product for a detailed description of what statistics are available.

Note: No confirmation (i.e. Are You Sure?) dialog is provided with most JMX consoles or JMX browser so care should be taken when issuing commands.
Replicating the txrpt statistics

One of the features customers of past V1.x releases of Oracle Utilities Customer Care And Billing used to gather performance data was the txrpt facility within Oracle Tuxedo. The utility would take performance data gathered from every service call and produce summary statistics per hour. The statistics were the number of calls and the average response time for each defined service. The txrpt utility collected the statistics from log files that were enabled in the Oracle Tuxedo configuration. This information was useful in tracking the performance of individual services within the product against a site's SLA. The advent of Oracle Utilities Application Framework V2.x and the removal of Oracle Tuxedo from the architecture meant that this information was no longer as easy to collect. The implementation of the PerformanceStatistics Mbean allows for collection of performance information in a similar fashion to txrpt. To achieve the same results as txrpt the following should be performed:


On the hour boundary, execute the completeExecutionDump operation from your JMX console or JMX browser to extract and save the CSV information to a file. The file name should include the date and time of the collection for reference purposes.

After the statistics have been collected, execute the reset operation from your JMX console or JMX browser.

The information in the files can then be collated according to the analysis required by your site. The CSV can be loaded into a database for analysis, or into your site's preferred spreadsheet or analysis tool. Remember that the date and time of the collection is not recorded in the data, only the statistics themselves.

Note: While this process can be done manually using a JMX console such as jconsole, it is recommended that the JMX console or JMX browser automate the collection in the background. Refer to the documentation of your JMX console or JMX browser to configure this.

This facility is flexible for a number of reasons:

The time period for collection is not limited to hourly as txrpt was. The collection period can be increased or decreased according to your site standards. For example, you might want to collect the data every 10 minutes.

The statistics are live and can be queried regardless of the collection process.

The level of information is higher than the original txrpt. The following additional information is collected and summarized:

The data is now also summarized by the type of transaction performed. This allows the site to assess the performance of reads, updates, deletes, inserts etc. separately.

The last transaction recorded is detailed, including the user. This information is useful for checking against other statistics to assess performance at the present moment.

Statistics are already calculated by the utility prior to analysis. The txrpt utility only collected the average; this facility collects the average, minimum (best case) and maximum (worst case) performance statistics in the collection period.
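The collation step can be sketched with a small script. This is a hypothetical sketch only: the column names (service, calls) are assumptions, as the actual layout of the completeExecutionDump CSV is product-version specific and should be checked against your saved snapshots.

```python
import csv
import glob
from collections import defaultdict

def summarize_snapshots(pattern):
    """Collate saved completeExecutionDump CSV snapshots into one summary.

    Assumes each snapshot file name carries its collection timestamp
    (the dump itself does not record it) and that the CSV has 'service'
    and 'calls' columns -- adjust to the actual dump layout.
    """
    totals = defaultdict(lambda: {"calls": 0, "snapshots": 0})
    for path in sorted(glob.glob(pattern)):
        with open(path, newline="") as fh:
            for row in csv.DictReader(fh):
                entry = totals[row["service"]]
                entry["calls"] += int(row["calls"])
                entry["snapshots"] += 1
    return dict(totals)
```

Loading the per-period files into a database or spreadsheet, as suggested above, remains the more flexible option; this script only illustrates the file-per-collection-period approach.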


Database Connection Management


Hibernate, c3p0 and UCP are used to provide a pool of connections to the database for the various components of the product. A separate pool exists for the online, XAIApp and background processing components. The size of each pool can be set in the hibernate.properties file and can vary from component to component, with the following guidelines:

The minimum pool size should be set to the average number of connections needed for the mode of access. By default it is set to one (1), which is sufficient for non-production, but each new connection required by the traffic must be established prior to use. Establishing an individual database connection can delay the transaction using it, which negates the benefit of pooling connections. Track the number of connections used at normal traffic load and specify that as the minimum. This establishes the connections at startup time and avoids the overhead of creating connections on the fly; ideally you want to avoid creating connections on the fly unnecessarily.

The maximum pool size should be set to cover any peak load you may experience. Initially the value can be artificially inflated; after monitoring the number of connections open at peak times, the value can be optimized.

The total number of database connections from all pools connecting to an individual database should not exceed the number of configured users/connections for that database. Exceeding this number can cause database connection failures and delays in transactions. Typically customers have indicated that a good rule of thumb is that at any time one third of the defined users are active for normal traffic and two thirds are active at peak.

Note: This is a rule of thumb and may NOT apply to the traffic patterns at your site. It is recommended to start with an agreed value and then monitor to optimize the values as necessary. Refer to the Batch Operations and Configuration Guide/Batch Server Administration Guide and Operations and Configuration Guide/Server Administration Guide for your product for additional advice on this facility.
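The one-third/two-thirds rule of thumb above can be expressed as a quick calculation. The function below is our own sketch of that heuristic, not a product utility; validate its suggestions by monitoring actual connection counts at your site.

```python
import math

def pool_sizes(defined_users):
    """Rule-of-thumb pool sizing: roughly one third of defined users
    are active under normal traffic (suggested minimum pool size) and
    two thirds are active at peak (suggested maximum pool size)."""
    minimum = math.ceil(defined_users / 3)
    maximum = math.ceil(2 * defined_users / 3)
    return minimum, maximum
```

For example, with 300 defined users the heuristic suggests a minimum pool of 100 connections and a maximum of 200.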

XPath Memory Management


Note: This facility is available for Oracle Utilities Application Framework V2.2 and above after installing patch 11885007 (for V2.2) or patch 12357553 (for V4.1). With the popularity of the Configuration Tools facility for customer extensions, the increased use of XPath may cause memory issues under particular user transaction conditions (in particular, high volume patterns). As with most technology in the Oracle Utilities Application Framework, the XPath statements used by the Configuration Tools are cached for improved performance. Increased load on the cache may cause memory issues at higher volumes.


To minimize this, the Oracle Utilities Application Framework has introduced two new settings in the spl.properties file for the Business Application Server, where the dimensions of the XPath statement cache are defined. These settings allow the site to control the XPath cache so that commonly used XPath statements are cached while the cache size is kept optimal (to help prevent memory issues). The settings are shown in the table below:
TABLE 20 XPATH CACHE SETTINGS

com.oracle.XPath.LRUSize: Maximum number of XPath queries to hold in cache across all threads. A zero (0) value indicates no caching, a minus one (-1) value indicates unlimited, and other positive values indicate the number of queries stored in cache. The cache is managed on a Least Reused basis (see footnote 7). For memory requirements, assume approximately 7k per query. The default in the template is 2000 queries.

com.oracle.XPath.flushTimeout: The time, in seconds, after which the cache is automatically cleared. A zero (0) value indicates never auto-flush the cache; a positive value indicates the number of seconds. The default in the template is 86400 seconds (24 hours).

Note: The templates provided with the product have these settings commented out. To use the settings, uncomment the entries in the generated configuration files. In most cases the defaults are sufficient, but they can be altered using the following guidelines:

If there are memory issues (e.g. out of memory), decreasing the LRUSize or decreasing the flushTimeout may reduce them. LRUSize has a greater impact on memory than flushTimeout.

If decreasing the value of LRUSize causes performance issues, consider changing only the flushTimeout initially and ascertain whether that works for your site.

There are no strict guidelines on the value for both parameters as cache performance is subject to the user traffic profile and the amount and types of XPath queries executed. Experimentation will assist in determining the right mix of both settings for your site.
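Once uncommented, the entries in the Business Application Server spl.properties file look like the following, shown here with the template defaults quoted above:

```properties
# Template defaults: 2000 cached queries, flushed every 24 hours
com.oracle.XPath.LRUSize=2000
com.oracle.XPath.flushTimeout=86400
```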

Database Best Practices


The Database Server is responsible for the storage and management of data. There are a number of practices that sites find useful for maintaining the health of the Database Server.

7 In layman's terms, older cached entries that are not reused are removed from the cache automatically to make room for more frequently used entries or new entries.


Regularly Calculate Database Statistics


Database statistics are important for the performance of all SQL in the product. Keeping them up to date ensures the database has the most current information to make the appropriate access path decisions. When any table in the system grows (or shrinks) at a larger than normal rate, the access paths to that table may change, causing inefficiencies. For the database to make the correct decision, it uses a set of statistics to assess all available paths; this is an important factor in performance. It is therefore recommended that database statistics be recalculated, using dbms_stats, on a regular basis. The frequency will depend on the volume and size of your database. It is recommended that statistics on most tables be calculated at least once a week, unless their growth does not affect the path chosen by the DBMS. Note: CISADM is used as an example in the guidelines below. If your site uses another schema owner, substitute that owner in the examples. The following guidelines can be used to assist: It is possible to check whether the Last Analyzed Date on product tables is current by running the following SQL:

SELECT table_name, last_analyzed FROM dba_tables WHERE owner = 'CISADM';


It is possible to check whether the Last Analyzed Date on indexes is current by running the following SQL:

SELECT index_name, last_analyzed FROM dba_indexes WHERE owner = 'CISADM';


If the index statistics are a week or more old, consider gathering statistics on them. You can also use the SQL below, which reports the approximate number of INSERTs, UPDATEs and DELETEs for each table, as well as whether the table has been truncated, since statistics were last gathered.

SELECT * FROM USER_TAB_MODIFICATIONS;


Note: The MONITORING attribute must be set on individual objects to use this facility. It is recommended to gather statistics while no active purging activities are occurring on the database.


It is recommended to use the dbms_stats package for collecting statistics:

An estimate percentage of 10 percent is generally sufficient.

Set the degree parameter to a higher level to enable parallel collection of statistics.

It is suggested to set the block_sample parameter to false.

The method option when gathering statistics on tables should be set to FOR ALL COLUMNS SIZE AUTO. This ensures Oracle automatically determines which columns require histograms and the number of buckets (size) of each histogram.

Gathering statistics separately for indexes is generally faster than using the cascade=true option while gathering table statistics.

It is recommended not to collect statistics on all tables in a single batch run at a single point in time. Dividing the tables into multiple groups and executing statistics calculation for each group at different times minimizes any disruption due to statistics calculation.

Depending on the stability of query performance, the statistics collection frequency can be altered to maintain query performance.
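Putting these recommendations together, a schema-level collection call might look like the following sketch. The parameter values are illustrative only and should be tuned for your site; cascade is set to false so index statistics can be gathered separately, as suggested above.

```sql
BEGIN
  dbms_stats.gather_schema_stats(
    ownname          => 'CISADM',
    estimate_percent => 10,
    degree           => 4,                           -- parallel collection
    block_sample     => FALSE,
    method_opt       => 'FOR ALL COLUMNS SIZE AUTO',
    cascade          => FALSE                        -- gather index statistics separately
  );
END;
/
```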

Statistics calculation is also discussed in the Performance Troubleshooting Guide.

Ensure I/O is spread evenly across available devices


Ensuring I/O is spread across devices is becoming less of a problem, with progressive versions of the database handling this automatically and with the introduction of intelligent SAN technology. One of the practices key to the performance of a database is the elimination of hot spots in the disk architecture by ensuring that I/O is spread across all available devices. This is known as the database topology. For example, placing all the database physical files on a single disk is not optimal, as multiple concurrent requests queue to use the disk, resulting in higher than expected disk wait times. By spreading the load across disks, the opportunity for wait times is minimized and throughput increases. It is therefore recommended that the disk architecture for the physical database files be designed so that as much I/O as possible is spread across all disks. The database topology and its implications are discussed in the Performance Troubleshooting Guide.
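As a quick, illustrative health check, the I/O distribution per data file can be inspected from the dynamic performance views to identify potential hot spots:

```sql
-- Show physical reads and writes per data file; files that dominate
-- this list are candidates for relocation to less busy devices.
SELECT df.name, fs.phyrds, fs.phywrts
FROM   v$filestat fs,
       v$datafile df
WHERE  fs.file# = df.file#
ORDER  BY fs.phyrds + fs.phywrts DESC;
```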

Use the Correct NLS settings (Oracle)


One of the configuration settings that can affect the sorting and processing of data is the language used for the connection to the database. For Oracle customers, it is recommended that the NLS settings for your region are set correctly at installation time. Refer to the NLS documentation (Globalization Support Guide) for Oracle for details of the valid settings for your region.
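For illustration, the effective database-wide NLS settings can be reviewed with a simple query:

```sql
-- Review database-level NLS settings such as NLS_LANGUAGE,
-- NLS_TERRITORY, NLS_SORT and NLS_CHARACTERSET.
SELECT parameter, value
FROM   nls_database_parameters
ORDER  BY parameter;
```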


Note: Additionally, for UTF8 customers, ensure that the spl.runtime.cobol.encoding setting in the spl.properties file is set correctly to display the correct character set.
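For illustration, the spl.runtime.cobol.encoding setting is a simple key/value entry in the spl.properties file. The value shown below is an example only; use the encoding appropriate to your installation:

```properties
# Character encoding used by the COBOL runtime (example value only)
spl.runtime.cobol.encoding=UTF8
```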

Monitoring database connections


Note: This facility is generally available for customers using Oracle Utilities Application Framework V4.0 and above. Facets of this are also available for customers using Oracle Utilities Application Framework V2.2 via Patch 10215923.

By default, the product uses a common pool of managed database users for accessing the database. While in Oracle Utilities Application Framework V4.0 and above it is possible to use a different user per access method (online, batch, etc.), it is limited to a single user per access method. This can be limiting when trying to track individual sessions at the database level, as the connections can be difficult to distinguish. It is possible to track individual connections using two attributes of the v$session system view:

CLIENT_IDENTIFIER - In Oracle Utilities Application Framework V4.0 and above, the application user used for the duration of the transaction is placed in the CLIENT_IDENTIFIER column for the duration of the transaction using the connection. For compatibility purposes, the short userid is placed in this column (not the Login Id). If the connection is idle, the column is blank.

MODULE - In Oracle Utilities Application Framework V2.2 and above, the module that is executing and using the database connection is populated in the MODULE field of v$session. If the connection is idle, MODULE contains the text TUGBU Idle to denote it as an idle connection held by the product.

Note: To use the MODULE feature, hibernate.connection.release_mode must be set to on_close in the hibernate.properties file.
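For example (an illustrative query only), product connections, their application users and executing modules can be listed from v$session as follows:

```sql
-- List sessions showing the application user (CLIENT_IDENTIFIER)
-- and the executing module; idle product connections report the
-- module text 'TUGBU Idle'.
SELECT sid, username, client_identifier, module, status
FROM   v$session
WHERE  client_identifier IS NOT NULL
   OR  module IS NOT NULL
ORDER  BY sid;
```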

Consider changing Bit Map Tree parameter


Some sites have reported that in Oracle Database 10g and above, the default for the hidden parameter _b_tree_bitmap_plans changed from false to true. Setting the parameter to true enables bitmap plans to be generated for tables with only B-Tree indexes. The Cost Based Optimizer can choose to use bitmap access paths without the existence of bitmap indexes; in order to do so, it uses BITMAP CONVERSION FROM ROWIDS and BITMAP CONVERSION TO ROWIDS operations. These operations are CPU intensive. If a query in the product for which those operations are performed selects a small number of rows, there should not be much of an impact. However, if those queries select a large number of rows, there may be a negative impact on performance. If you are facing such issues, this parameter should be explicitly set to false at the database level.
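A sketch of how the parameter could be set at the database level is shown below. As with any hidden (underscore) parameter, this should only be changed on the advice of Oracle Support:

```sql
-- Disable bitmap conversion plans for tables with only B-Tree indexes.
-- Hidden parameters must be enclosed in double quotes.
ALTER SYSTEM SET "_b_tree_bitmap_plans" = FALSE SCOPE = BOTH;
```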


OraGenSec command line Parameters


Most sites use the OraGenSec utility in interactive mode, but there are command line options that can be used for silent installation. The command line is as follows:

OraGenSec -d <Owner,OwnerPswd,DbName> -u <Database Users> -r <ReadRole,UserRole> -l <logfile> -h

Where:

-d <Owner,OwnerPswd,DbName>  Database connect information for the target database. e.g. spladm,spladm,DB200ODB.
-u <Database Users>          A comma-separated list of database users where synonyms need to be created. e.g. spluser,splread
-r <ReadRole,UserRole>       Optional. Names of database roles with read and read-write privileges. Default roles are SPL_READ, SPL_USER. e.g. spl_read,spl_user
-l <logfile>                 Optional. Name of the log file.
-h                           Help

This command line can be used in site specific DBA scripts or as a standalone command line. Executing the utility without any options starts interactive mode.

SetEnvId command line Parameters


Each environment at a site can have a unique environment identifier (if desired). This identifier is used by numerous utilities for environmental management, such as ConfigLab or Archiving. While this value can be set as part of the installation process, it is also possible to set it after installation. Typically, DBAs run the SetEnvId utility provided with the installation media to set the environment identifier interactively. As with OraGenSec, this can also be executed on the command line with various options to support silent installation. The command line is as follows:

SetEnvID -d <Owner,OwnerPswd,DbName> -r -u -q -l <logfile> -h

Where:

-d <Owner,OwnerPswd,DbName>  Database connect information for the target database. e.g. spladm,spladm,DB200ODB.
-u                           Apply the generated SQL to the database. By default, the generated SQL is written to the log file only.
-r                           Reset the environment identifiers. This overwrites the existing environment identifiers.
-q                           Silent installation mode (no confirmations).
-l <logfile>                 Optional. Name of the log file.
-h                           Help

This command line can be used in site specific DBA scripts or as a standalone command line. Executing the utility without any options starts interactive mode.

Building the Data Model


One of the common questions regarding the product is the availability of the total data model in a particular tool (such as Oracle Data Modeler or similar). The product contains a large number of tables and it is generally impractical to display a full model legibly due to its size. There are a number of sources of information that can replace a full data model and present the data model information in bite sized chunks:

- The data model information is contained in the Data Dictionary component of the Application Viewer.
- The Conversion documentation, available in Microsoft Word as well as online help, contains a summary set of data models that outline the major entities in the product. Note: Not all Oracle Utilities Application Framework products include a conversion capability.
- Each of the Business Process manuals for the product outlines the functionality and contains data models specifically for that component.

Why is there no referential integrity built into the database?

Note: The scripts in this section have been designed for the Oracle database only. Sites using DB2 or SQL Server should use the equivalents of these scripts for those databases.

Typically, referential integrity in a database is managed by the database itself. In the product this is not so, as the Maintenance Objects contain ALL the business logic, including referential integrity. The reasons for this are varied:

- From a maintenance cost point of view, all the code is in one place. This reduces maintenance effort.
- Databases implement all-or-nothing referential integrity. This means that referential integrity is checked whether the data has changed or not. From a performance point of view, this potentially wastes time. The Maintenance Objects in the product decide when to enforce referential integrity rules.
- Most of the referential rules in the product are optional. If there is a value in the foreign key field it is checked; if there is no value (blanks, zero or nulls), then referential integrity is not checked unless it is a mandatory column. This is not possible with database imposed referential integrity.
- If the database controlled referential integrity, then the application would have no control over when it is imposed in the course of a transaction. Maintenance Object controlled referential integrity allows finer levels of control over when referential integrity is enforced in the transaction flow.
- Each database implements referential integrity in a slightly different way. To reduce maintenance costs, code differences are kept to a minimum. Maintenance Object enforced referential integrity is more efficient as far as the product is concerned and translates to superior performance across many database types.

Building the Data Model

All is not lost though. The Oracle Utilities Application Framework maintains its own data dictionary in the form of meta-data that is used by the Oracle Utilities Software Development Kit, ConfigLab and Archiving. If you want the data model in a tool (or adorning a large wall), the following process is recommended to generate the data model using the meta-data:

- Export the CISADM schema as a backup using the database export utility.
- Create constraints from the meta-data structure. The two Oracle PL/SQL scripts below can be used to achieve this. The names of the constraints are already documented in the meta-data.
- Run the utility and create the constraints in the database.

Function to join

create or replace function join
( p_cursor sys_refcursor,
  p_del    varchar2 := ',' ) return varchar2
is
  l_value  varchar2(32767);
  l_result varchar2(32767);
begin
  loop
    fetch p_cursor into l_value;
    exit when p_cursor%notfound;
    if l_result is not null then
      l_result := l_result || p_del;
    end if;
    l_result := l_result || l_value;
  end loop;
  return l_result;
end join;
/
show errors;


Script to Create Constraints

SET serverout ON size 1000000
SET echo OFF
SET feedback OFF
SET linesize 300
spool constraints.sql
DECLARE
  CURSOR c1 IS
    SELECT tbl_name, const_id, ref_const_id, table_name
    FROM   ci_md_const, user_indexes
    WHERE  const_type_flg = 'FK'
    AND    TRIM(index_name) = SUBSTR(ref_const_id,5,7)
    AND    TRIM(tbl_name) IN (SELECT TRIM(table_name) FROM user_tables)
    ORDER  BY tbl_name, const_id;
  stmt       VARCHAR2(400);
  field_list VARCHAR2(300);
BEGIN
  FOR r1 IN c1 LOOP
    stmt := 'alter table ' || trim(r1.tbl_name) ||
            ' add constraint ' || trim(r1.const_id);
    dbms_output.put_line(stmt);
    SELECT join(CURSOR(SELECT trim(fld_name)
                       FROM   ci_md_const_fld
                       WHERE  const_id = r1.const_id
                       ORDER  BY seq_num))
    INTO   field_list FROM dual;
    stmt := 'foreign key (' || field_list || ')';
    dbms_output.put_line(stmt);
    SELECT join(CURSOR(SELECT trim(fld_name)
                       FROM   ci_md_const_fld
                       WHERE  const_id = r1.ref_const_id
                       ORDER  BY seq_num))
    INTO   field_list FROM dual;
    stmt := 'references ' || trim(r1.table_name) || ' (' || field_list || ');';
    dbms_output.put_line(stmt);
  END LOOP;
END;
/
spool OFF;
EXIT;

Empty a copy of the database (truncate the tables). None of the relationships are expressed as constraints in the physical database, because ALL the referential integrity (RI) and validation is done in the code based Maintenance Objects. More importantly, most of the constraints are data conditional (if there is data in the column, then RI applies; no value, no RI), so a loaded database might actually break "database strict" RI rules. Remove the data to prevent constraint violations using a valid method for the database (for example, TRUNCATE TABLE <tablename> REUSE STORAGE for Oracle).
- Run the constraints.sql file created in the previous step to create the RI, using the CISADM user.
- Load the data model with the constraints into the desired tool. This should build the data model. Note: This may take a while for the WHOLE data model.
- Remove the newly created constraints to return the database to its original condition.

set serverout on size 1000000
set echo off
set feedback off
set linesize 300
spool drop_constraints.sql
select 'ALTER TABLE ' || tbl_name || ' drop constraint ' || const_id || ';'
from   ci_md_const
where  const_type_flg = 'FK'
order  by tbl_name, const_id;
spool off;
@drop_constraints.sql
exit;

Reload the database. You then have the data model in your tool and the database returned to its original state.


Technical Best Practices June 2011 Author: Anthony Shorten, Principal Product Manager Oracle Corporation World Headquarters 500 Oracle Parkway Redwood Shores, CA 94065 U.S.A. Worldwide Inquiries: Phone: +1.650.506.7000 Fax: +1.650.506.7200 oracle.com

Copyright 2007-2011, Oracle and/or its affiliates. All rights reserved. This document is provided for information purposes only and the contents hereof are subject to change without notice. This document is not warranted to be error-free, nor subject to any other warranties or conditions, whether expressed orally or implied in law, including implied warranties and conditions of merchantability or fitness for a particular purpose. We specifically disclaim any liability with respect to this document and no contractual obligations are formed either directly or indirectly by this document. This document may not be reproduced or transmitted in any form or by any means, electronic or mechanical, for any purpose, without our prior written permission. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. AMD, Opteron, the AMD logo, and the AMD Opteron logo are trademarks or registered trademarks of Advanced Micro Devices. Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. UNIX is a registered trademark licensed through X/Open Company, Ltd. 1010
