PeopleSoft Batch Performance Tips

Contains:
Database Tuning Tips
SQL Query Tuning Tips
Use of Database Features
Capturing Traces

Prepared by: Jayagopal Theranikal
Comments on this document can be submitted to redpaper@peoplesoft.com. We encourage you to provide feedback on this Red Paper and will ensure that it is updated based on the feedback received. When you send information to PeopleSoft, you grant PeopleSoft a non-exclusive right to use or distribute the information in any way it believes appropriate without incurring any obligation to you. This material has not been submitted to any formal PeopleSoft test and is published AS IS. It has not been the subject of rigorous review. PeopleSoft assumes no responsibility for its accuracy or completeness. The use of this information or the implementation of any of these techniques is a customer responsibility and depends upon the customer's ability to evaluate and integrate them into the customer's operational environment.
Table of Contents

Chapter 1 - Introduction
  Structure of this Red Paper
  Related Materials
Chapter 2 - PeopleSoft Batch Performance Tips
  Table and Index Statistics
    Gather Statistics
    Statistics at Runtime for Temporary Tables
  Dedicated Temporary Tables
    What Are Dedicated Temporary Tables?
    Sizing the Dedicated Temporary Tables
    Create Them as Oracle Global Temporary Tables (GTT) -- Not Advisable for Now
  Tablespace Selection
    Dictionary-Managed Tablespaces
    Locally Managed Tablespaces
    Temporary Tablespaces
  Index Validation
    Index Maintenance Tips
  Function-Based Indexes
  Key Compression
  Stored Outlines
    What Are Stored Outlines?
    When to Use Outlines
    Using Outlines to Swap Execution Plans
  Table/Index Partitioning
    What Is Partitioning?
    Partitioning Methods
    Partitioned Indexes
    Advantages of Partitioning
  Rollback Segments for Batch and Online
    Online
    Batch
  Parses vs. Executes
    Use of Bind Variables
  Histograms
    What Are Histograms?
    Use of Histograms for PeopleSoft Applications
    Creating Histograms
    Choosing the Number of Buckets for a Histogram
    Viewing Histograms
    Operational Guidelines for Maintaining Histograms in Oracle
    FAQ on Histograms
  Batch Server Selection
    Scenario 1: Process Scheduler and Application Server on one BOX
    Scenario 2: Process Scheduler and Database Server on one BOX
    What is the Recommended Scenario?
Chapter 3 - Capturing Traces
  Application Engine Trace
  Online Trace
  Oracle Trace
    Trace at Instance Level
    Trace at Session Level
    Trace for Different Sessions
  TKPROF
  STATSPACK
    Installing and Using STATSPACK
Chapter 4 - Database Tuning and INIT.ORA Parameters
  Recommendations
    Block Size
    Shared Pool Area
    Data Dictionary Hit Ratio
    Buffer Busy Waits
    LRU Latch
    Log Buffer
    Tablespace I/O
    Full Table Scans
    Checkpoints
    Dynamic Allocation of Extents
    PCTFREE/PCTUSED
    Rebuilding Indexes
    Sorting
Appendix A - Special Notices
Appendix B - Validation and Feedback
  Customer Validation
  Field Validation
Appendix C - References
Appendix D - Revision History
  Authors
  Reviewers
  Revision History
9/26/2002
Chapter 1 - Introduction
This Red Paper is a practical guide for technical users, database administrators, and programmers who implement, maintain, or develop applications for a PeopleSoft system. In it, we discuss guidelines for improving the performance of PeopleSoft 8 batch processes in the Oracle8i environment, with notes on the Oracle9i environment where relevant. Much of the information contained in this document originated within the PeopleSoft Benchmarks and Global Support Center and is therefore based on real-life problems encountered in the field. The issues that appear in this document are the problems that have proven to be the most common or troublesome.
RELATED MATERIALS
This paper is not a general introduction to environment tuning; we assume that our readers are experienced IT professionals with a good understanding of PeopleSoft's Internet architecture and the Oracle database. To take full advantage of the information covered in this document, we recommend a basic understanding of system administration, basic Internet architecture, relational database concepts/SQL, and how to use PeopleSoft applications. This document is not intended to replace the documentation delivered with the PeopleTools 8 or 8.4 PeopleBooks. We recommend that before you read this document, you read the PeopleSoft application-related information in the PeopleBooks to ensure that you have a well-rounded understanding of PeopleSoft batch process technology. Note: Much of the information in this document eventually gets incorporated into subsequent versions of the PeopleBooks. For the fundamental concepts of performance tuning, see the Oracle Tuning chapter of the PeopleSoft Installation Guide. Additionally, we recommend that you read the Oracle8i database administration guide.
Gather Statistics
Oracle8i introduced a new package, DBMS_STATS, to gather statistics. The DBMS_STATS package can generate statistics in parallel by specifying a degree of parallelism, which significantly reduces the time needed to refresh object statistics. Create SQL scripts to gather table-level or schema-level statistics and run them periodically.
SQL> EXECUTE DBMS_STATS.GATHER_SCHEMA_STATS (OWNNAME => 'SYSADM', ESTIMATE_PERCENT => 20, DEGREE => 5, CASCADE => TRUE);
SQL> EXECUTE DBMS_STATS.GATHER_DATABASE_STATS (ESTIMATE_PERCENT => 20, DEGREE => 5, CASCADE => TRUE);

With CASCADE => TRUE the associated indexes are analyzed as well; the default setting for CASCADE is FALSE. Prefer DBMS_STATS over the ANALYZE command for faster statistics gathering.

Note: Specifying DEGREE only lets table statistics (partitioned or non-partitioned) be gathered in parallel. Index statistics cannot make use of this flag and do not run in parallel.
Example
Command in a SQL Step of an Application Engine program:

%UpdateStats(INTFC_BI_HTMP)

This meta-SQL issues an "ANALYZE TABLE PS_INTFC_BI_HTMP ESTIMATE STATISTICS" command to the database at runtime.

Note: PeopleSoft stores the default syntax for the ANALYZE command in the table PSDDLMODEL. Use the supplied script (DDLORA.DMS) to change the default setting or to add a required SAMPLE ROWS/PERCENT for the ESTIMATE clause.

Make sure the temporary-table statistics have been handled as shown above. If you find any temporary table whose statistics were not updated at runtime, plan to update its statistics manually.
Note: If schema-level statistics are gathered using DBMS_STATS.GATHER_SCHEMA_STATS, the previously captured statistics will be erased. In such cases, you may wish to turn %UpdateStats back on, or import the statistics for those tables from previously saved statistics using the DBMS_STATS.IMPORT_TABLE_STATS procedure. Update Statistics can be turned off in two ways.
1. Program level: Identify the steps that issue %UpdateStats and inactivate them. These steps can be identified from the AE trace. This is a program-specific setting.
2. Installation level: Once the batch-process runs are stabilized and the temporary-table statistics have been captured for all batch processes, apply the installation-level setting to turn off %UpdateStats. Set the following parameter in the Process Scheduler configuration:
psprcs.cfg:

;-------------------------------------------------------------------------
; DbFlags Bitfield
;
; Bit  Flag
; ---  ----
;  1 - Ignore metaSQL to update database statistics (shared with COBOL)
DbFlags=1
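The save-and-restore of temporary-table statistics mentioned earlier (DBMS_STATS.IMPORT_TABLE_STATS) can be sketched as follows. This is only an illustration: the statistics-table name PS_SAVED_STATS is hypothetical, and PS_INTFC_BI_HTMP stands in for whichever temporary tables you need to protect from a schema-level gather.

```sql
-- Sketch: preserve a temporary table's statistics across a schema-level gather.
-- Create a holding table for statistics (one-time step).
EXECUTE DBMS_STATS.CREATE_STAT_TABLE (OWNNAME => 'SYSADM', STATTAB => 'PS_SAVED_STATS');

-- Export the current statistics for the temporary table.
EXECUTE DBMS_STATS.EXPORT_TABLE_STATS (OWNNAME => 'SYSADM', TABNAME => 'PS_INTFC_BI_HTMP', STATTAB => 'PS_SAVED_STATS');

-- ... GATHER_SCHEMA_STATS runs here and overwrites the table's statistics ...

-- Restore the previously saved statistics.
EXECUTE DBMS_STATS.IMPORT_TABLE_STATS (OWNNAME => 'SYSADM', TABNAME => 'PS_INTFC_BI_HTMP', STATTAB => 'PS_SAVED_STATS');
```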
The property window for the AE program Bill Finalization (BIIF0001) specifies the instance count, which limits the number of temporary-table instances that can be used when multiple instances of the program are run. If more instances of the program are run than the specified count (10 in this example), the additional processes are either abandoned or use the base temporary tables, depending on the Runtime radio button selection in that window.
Create Them as Oracle Global Temporary Tables (GTT) -- Not Advisable for Now

What Are Global Temporary Tables?
Oracle8i introduced global temporary tables, which can be used as temporary processing tables for any batch process. Instances of a global temporary table are created at runtime in the user's temporary tablespace. These tables are session-specific: the data disappears once the session is closed. At table creation time you can choose to preserve or delete the rows after a commit. Some advantages of using Oracle global temporary tables in place of dedicated temporary tables:

1. Reduction in redo.
2. Faster full scans -- the high-water mark is always reset at the start of the process.
3. Faster truncates -- space management occurs inside the temporary segment.
4. Easier table management -- no need to create all the temporary-table instances up front; the base table definition is stored once.
Some disadvantages of global temporary tables as of Oracle 8.1.7:

1. Statistics gathered on these tables have no effect; the optimizer treats them as having no statistics. This impacts access paths and execution times.
2. These tables are dynamically created in the user's temporary tablespace, so temporary-tablespace sizing must be done properly to avoid runtime errors due to lack of extents.
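For reference, a global temporary table of the kind described above could be created as sketched below. The table and column names are hypothetical, chosen only to illustrate the syntax; ON COMMIT PRESERVE ROWS keeps the rows across commits within the session, while ON COMMIT DELETE ROWS would clear them at each commit.

```sql
-- Sketch: a session-private working table for a batch process.
-- Rows are visible only to the creating session and are dropped
-- automatically when the session ends.
CREATE GLOBAL TEMPORARY TABLE PS_MY_BATCH_TMP
( PROCESS_INSTANCE  NUMBER,
  EMPLID            VARCHAR2(11),
  AMOUNT            NUMBER
) ON COMMIT PRESERVE ROWS;
```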
TABLESPACE SELECTION
As of Oracle8i there are various types of tablespaces to use. Tablespaces can be created in multiple ways, and each type is good for a specific purpose, so choosing the right type for each requirement can be confusing. Although there are multiple options available when creating a tablespace, only certain combinations of those options are valid. The following illustration and table give the recommended use of the various combinations.
[Illustration: tablespace types -- datafile-based tablespaces (temporary type: dictionary managed; permanent type: dictionary managed or locally managed with auto allocate) versus tempfile-based tablespaces (temporary type: locally managed).]
Tablespace Type                                              PeopleSoft Objects
-----------------------------------------------------------  ------------------------------------------------
Datafile based, regular tablespace, dictionary managed       SYSTEM tablespace in Oracle8i (Oracle 9.2 onwards
                                                             the SYSTEM tablespace can be created as locally
                                                             managed)
Datafile based, locally managed, auto allocate               NOT RECOMMENDED TO USE
TS_PERM_LOC_UNI (datafile based, locally managed, uniform)   All data tables and indexes; rollback tablespace;
                                                             temporary tables
TS_TEMP_LOC_UNI (tempfile based, locally managed, uniform)   Users' default temporary tablespace
Dictionary-Managed Tablespaces
These are the regular tablespaces and are datafile-based. Extent management is done at the dictionary level. User-defined extent management is allowed for each object created in such a tablespace. Sample syntax:
CREATE TABLESPACE TS_PERM_DICT
  DATAFILE '/perm/ora/ts_perm_dict.dbf' SIZE 100M
  DEFAULT STORAGE (INITIAL 250K NEXT 500K PCTINCREASE 0);
Space Management
- Free extents are recorded in a bitmap (so some part of the tablespace is set aside for the bitmap).
- Each bit corresponds to a block or group of blocks.
- The bit value indicates whether the blocks are free or used.
- Common views used are DBA_EXTENTS and DBA_FREE_SPACE.
CREATE TABLESPACE TS_PERM_LOC_AUTO
  DATAFILE '/perm/ora/ts_perm_loc_auto.dbf' SIZE 100M
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE;
With this option, extent-size allocation is done by the system. It is not possible to predict the extent size for each table, which makes capacity planning difficult. If you want predictable extent sizes, do not use AUTOALLOCATE.
CREATE TABLESPACE TS_PERM_LOC_UNI
  DATAFILE '/perm/ora/ts_perm_loc_uni.dbf' SIZE 100M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 500K;
Uniform extents give the best predictability and consistency. A consistent extent size eliminates wasted tablespace ("holes") and makes capacity planning easier for the DBA. Use this as the preferred method of extent management for all tablespaces. Do proper planning to determine the optimum extent size. Plan on creating different categories of tablespaces, such as small, medium, and large, with different uniform extent sizes, and place each table in the appropriate tablespace depending on its size.
Temporary Tablespaces
Every database user should be assigned a default temporary tablespace to handle data sorts. Although in Oracle8i it is possible to assign a regular tablespace as a temporary tablespace, it is advisable to use one of the following types for better management of temporary segments. Starting with Oracle9i, a regular tablespace cannot be assigned as the temporary tablespace; an error is raised when the tablespace assigned is not a true Oracle temporary tablespace.
Datafile-Based
These are regular tablespaces with an additional TEMPORARY keyword at the end of the command. Such tablespaces can only be used for temporary segments, which also ensures that permanent objects are not created in them by accident.
CREATE TABLESPACE TS_PERM_DICT_TEMP
  DATAFILE '/perm/ora/ts_perm_dict_temp.dbf' SIZE 100M
  DEFAULT STORAGE (INITIAL 250K NEXT 500K PCTINCREASE 0)
  TEMPORARY;

Tempfile-Based
Oracle introduced this new type, which uses a tempfile instead of a datafile. This should be the preferred method for any temporary tablespace, as it gives better extent and space management than the datafile-based ones. In this type of tablespace, only locally managed UNIFORM extent management is allowed.
CREATE TEMPORARY TABLESPACE TS_TEMP_LOC_UNI
  TEMPFILE '/temp/ora/ts_temp_loc_uni.dbf' SIZE 100M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 500K;
INDEX VALIDATION
PeopleSoft-supplied indexes are of a generic nature. Depending on the customer's business needs and data composition the need for indexes varies. The following tips will help the DBA manage indexes efficiently.
Caution: Sufficient research and testing by an experienced DBA is required prior to making any such changes in a production environment. A poor choice could be fatal to performance. As of Oracle9i, the new INDEX SKIP SCAN access path can make use of the INVOICE column even when it is the second column in the index order, so it may not be necessary to flip the index order in such cases.

3. Consider adding additional indexes depending on your processing needs.
4. Review the index-recommendation document supplied with the product to see if any of the suggestions apply to your installation.
5. Examine the available indexes and remove unused indexes to boost the performance of INSERT/UPDATE/DELETE operations. Sometimes an index unused by a batch process may still be useful for an online page, and deleting it may impact another program, so do a thorough analysis before deleting any index.
6. Indexes tend to fragment more frequently than tables. Rebuild indexes frequently to boost index performance.
Function-Based Indexes
A function-based index is an index on an expression, such as an arithmetic expression or an expression containing a package function.

Test case: table PS_CUSTOMER has an index PS0CUSTOMER with NAME1 as the leading column.

SQL> SELECT SETID, CUST_ID, NAME1 FROM PS_CUSTOMER WHERE NAME1 LIKE 'Adventure%';
-- Uses index PS0CUSTOMER and returns the result quickly.

SQL> SELECT SETID, CUST_ID, NAME1 FROM PS_CUSTOMER WHERE NAME1 LIKE 'ADVENTURE%';
no rows selected
-- Uses index PS0CUSTOMER and returns quickly, but finds no rows.
If data is stored in mixed case such as the above example, the only way to get the result is using the function UPPER.
SQL> SELECT SETID, CUST_ID, NAME1 FROM PS_CUSTOMER WHERE UPPER(NAME1) LIKE 'ADVENTURE%';
-- Does not use the PS0CUSTOMER index and takes longer to return.
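A function-based index on UPPER(NAME1) would allow the last query above to use an index. The index name below is hypothetical; note that in Oracle8i, creating and using function-based indexes requires the QUERY REWRITE privilege, the cost-based optimizer, and QUERY_REWRITE_ENABLED = TRUE.

```sql
-- Sketch: a function-based index supporting case-insensitive searches
-- on NAME1 (index name is illustrative).
CREATE INDEX PSXCUSTOMER_UNAME ON PS_CUSTOMER (UPPER(NAME1));

-- With the index in place, this predicate can use an index range scan:
SELECT SETID, CUST_ID, NAME1
  FROM PS_CUSTOMER
 WHERE UPPER(NAME1) LIKE 'ADVENTURE%';
```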
Key Compression
Beginning with Oracle8i, a new index COMPRESS option enables key compression, which eliminates repeated occurrences of key-column values and may substantially reduce storage. It is applicable to B-tree indexes and index-organized tables (IOTs).
Negative Impact
Key compression is performed block by block, and only at the leaf level. Index-scan performance can decrease because Oracle must reassemble each key from its corresponding <prefix, suffix> parts.
Key compression can be specified for non-unique indexes, and for unique indexes of at least two columns. You cannot specify COMPRESS for a bitmap index.
SQL> select count(*) from ps_customer;

  COUNT(*)
----------
    300165

Elapsed: 00:00:00.53

===========================
CREATE REGULAR UNIQUE INDEX
===========================

SQL> drop index ps_customer;
Index dropped.
Elapsed: 00:00:00.61

SQL> create unique index ps_customer on ps_customer (setid, cust_id) tablespace psindex;
Index created.
Elapsed: 00:00:14.82

SQL> select index_name, uniqueness, compression from user_indexes where index_name = 'PS_CUSTOMER';

INDEX_NAME      UNIQUENESS  COMPRESSION
--------------- ----------- -----------
PS_CUSTOMER     UNIQUE      DISABLED

Elapsed: 00:00:00.14

SQL> analyze index ps_customer validate structure;
Index analyzed.
Elapsed: 00:00:00.69

SQL> select name, used_space from index_stats where name like 'PS_CUSTOMER';

NAME            USED_SPACE
--------------- ----------
PS_CUSTOMER        9640794

Elapsed: 00:00:00.10

SQL> select cust_id, setid, name1 from ps_customer where setid = 'SHARE' and cust_id = 'GA_000000000002';

CUST_ID          SETID    NAME1
---------------- -------- ---------------
GA_000000000002  SHARE    GA Customer 1

Elapsed: 00:00:00.05

===================================
CREATE KEY COMPRESSION UNIQUE INDEX
===================================

SQL> drop index ps_customer;
Index dropped.
Elapsed: 00:00:00.56

SQL> create unique index ps_customer on ps_customer (setid, cust_id) compress 1 tablespace psindex;
Index created.
Elapsed: 00:00:12.81

SQL> select index_name, uniqueness, compression from user_indexes where index_name = 'PS_CUSTOMER';

INDEX_NAME      UNIQUENESS  COMPRESSION
--------------- ----------- -----------
PS_CUSTOMER     UNIQUE      ENABLED

Elapsed: 00:00:00.14

SQL> analyze index ps_customer validate structure;
Index analyzed.
Elapsed: 00:00:00.80

SQL> select name, used_space from index_stats where name like 'PS_CUSTOMER';

NAME            USED_SPACE
--------------- ----------
PS_CUSTOMER        7833191

Elapsed: 00:00:00.09

SQL> select cust_id, setid, name1 from ps_customer where setid = 'SHARE' and cust_id = 'GA_000000000002';

CUST_ID          SETID    NAME1
---------------- -------- ---------------
GA_000000000002  SHARE    GA Customer 1
STORED OUTLINES
What Are Stored Outlines?
Oracle introduced outlines in Oracle8i to allow you to have a pre-defined execution plan for a SQL statement. Consistency can then be provided without changing the actual SQL. An outline is nothing more than a stored execution plan that Oracle uses rather than computing a new plan based on current table statistics. Before you can use outlines, you must record some. You can record outlines for a single statement, for all statements issued by a single session, or for all statements issued to an instance.
DBA Tasks
alter system set use_stored_outlines = true;
grant create any outline to <tuner userid>;
grant alter any outline to <tuner userid>;
grant drop any outline to <tuner userid>;
grant alter system to <tuner userid>;
grant select, update, delete on outln.ol$ to <tuner userid>;
grant select, update, delete on outln.ol$hints to <tuner userid>;
Tuner Tasks
A) View existing outlines:

select ol_name from outln.ol$ order by timestamp;

1) Capture the outline of a SQL statement:

alter system set create_stored_outlines = true;
-- run the SQL statement (e.g. PS Query with a 2-tier connection)
alter system set create_stored_outlines = false;

Note: The SQL statement does not have to run to completion before turning off the creation of stored outlines. Only the parsing of the statement must complete to get the outline. It is recommended to kill the SQL statement after parsing is complete, to avoid taxing the database and creating an unmanageable number of outlines for other SQL statements running in the system at the time.

2) Isolate the outline of the SQL statement from the outlines of other running statements:

select ol_name, sql_text from outln.ol$ where ol_name like 'SYS%';
-- manually scan the rows of newly created outlines for the SQL statement
alter outline <system-generated outline name for the SQL statement> rename to <query name>_ORIG;
select 'drop outline ' || ol_name || ';' from outln.ol$ where ol_name like 'SYS%';
-- run the output of the statement above to drop the outlines of other SQL
-- statements that were running during the outline creation phase

3) Manually create an outline for the tuned SQL statement:

create outline <query name> on <tuned SQL statement>;
4) Swap the execution plan of the original outline with the tuned outline:

select ol_name, hintcount from outln.ol$
 where ol_name in ('<tuned outline name>', '<original outline name>');
update outln.ol$ set ol_name = 'TO_DEL', hintcount = <hintcount of original outline returned above>
 where ol_name = '<tuned outline name>';
update outln.ol$hints set ol_name = 'TO_DEL' where ol_name = '<original outline name>';
drop outline TO_DEL;
update outln.ol$ set ol_name = '<tuned outline name>', hintcount = <hintcount of tuned statement>
 where ol_name = '<original outline name>';
Copyright PeopleSoft Corporation 2001. All rights reserved.
5) Test the outline:

alter system flush shared_pool;
select hash_value from outln.ol$ where ol_name = '<outline name>';
-- run the SQL statement (e.g. PS Query in 2-, 3-, or 4-tier)

select a.outline_category, a.hash_value, a.first_load_time, a.loads, a.executions,
       a.optimizer_cost, b.username
  from v$sql a, all_users b
 where a.parsing_user_id = b.user_id
   and a.hash_value = '<hash value returned above>'
 order by first_load_time desc;
-- (a non-NULL outline_category returned above means the outline is being used)

6) Copy the outline to another environment:

set long 10000;
copy to <userid>/<password>@<dbname> insert outln.ol$ using select * from outln.ol$ where ol_name = '<outline name>';
copy to <userid>/<password>@<dbname> insert outln.ol$hints using select * from outln.ol$hints where ol_name = '<outline name>';
TABLE/INDEX PARTITIONING
What Is Partitioning?
Partitioning addresses the key problem of supporting very large tables and indexes by allowing you to decompose them into smaller and more manageable pieces called partitions. Once partitions are defined, SQL statements can access and manipulate the partitions rather than entire tables or indexes.
Partitioning Methods
There are three basic methods available:
Range Partitioning
Data can be divided based on ranges of column values. Examples: PS_LEDGER by FISCAL_YEAR; PS_GP_RSLT_ACUM by EMPLID.
CREATE TABLE PS_GP_RSLT_ACUM (EMPLID, CAL_RUN_ID, .......)
STORAGE (INITIAL 500M NEXT 500M)
PARTITION BY RANGE (EMPLID)
(PARTITION GPACUM1 VALUES LESS THAN ('GP0101') TABLESPACE PSTABLE,
 PARTITION GPACUM2 VALUES LESS THAN ('GP0201') TABLESPACE PSTABLE,
 ....
 PARTITION GPACUM8 VALUES LESS THAN ('GP0801') TABLESPACE PSTABLE)
Hash Partitioning
Data is distributed evenly by a hashing function. Hash partitioning is useful for tables that have no column with an appropriate range to partition on.
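As a sketch, a hash-partitioned table could be created as follows. The table name, column list, and tablespace are illustrative assumptions, not taken from this Red Paper; the syntax is the standard Oracle hash-partitioning clause.

```sql
-- Hypothetical example: spread rows evenly across 8 partitions by EMPLID
CREATE TABLE PS_EXAMPLE_HASH
( EMPLID     VARCHAR2(11) NOT NULL,
  CAL_RUN_ID VARCHAR2(18) NOT NULL )
PARTITION BY HASH (EMPLID)
PARTITIONS 8
STORE IN (PSTABLE);
```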
Composite Partitioning
Composite partitioning is a combination of range and hash partitioning. It uses range partitioning to distribute the data into ranges, and then divides the data within each range into sub-partitions using hash partitioning.
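A minimal sketch of a composite range-hash partitioned table; the table name, columns, and range bounds are illustrative assumptions:

```sql
-- Hypothetical example: range partitions by FISCAL_YEAR,
-- each subdivided into 4 hash sub-partitions by EMPLID
CREATE TABLE PS_EXAMPLE_COMP
( FISCAL_YEAR NUMBER       NOT NULL,
  EMPLID      VARCHAR2(11) NOT NULL )
PARTITION BY RANGE (FISCAL_YEAR)
SUBPARTITION BY HASH (EMPLID) SUBPARTITIONS 4
( PARTITION FY2001 VALUES LESS THAN (2002) TABLESPACE PSTABLE,
  PARTITION FY2002 VALUES LESS THAN (2003) TABLESPACE PSTABLE );
```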
Partitioned Indexes
In addition to table partitioning, indexes on partitioned tables can also be partitioned. Oracle supports two types of index partitioning.
LOCAL Index
A local index is equipartitioned with its underlying table. That is, the index has the same number of partitions and partitioning keys as the base table. Eg:

CREATE UNIQUE INDEX PS_GP_RSLT_ACUM ON PS_GP_RSLT_ACUM
(EMPLID, CAL_RUN_ID, ....)
STORAGE (INITIAL 500M NEXT 500M)
LOCAL
(PARTITION GPACUM1 TABLESPACE PSINDEX,
 PARTITION GPACUM2 TABLESPACE PSINDEX,
 .....)
GLOBAL Index
A global index may or may not be partitioned. If it is partitioned, its partitioning is defined independently of the base table's partitioning, so it is generally not equipartitioned with the base table.
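A sketch of a range-partitioned global index on the table used in the earlier examples; the index name and partition bounds are illustrative assumptions. Note that Oracle requires the highest partition of a global range-partitioned index to be bounded by MAXVALUE:

```sql
-- Hypothetical example: global index partitioned independently of the table
CREATE INDEX PS_EXAMPLE_GIDX ON PS_GP_RSLT_ACUM (CAL_RUN_ID)
GLOBAL PARTITION BY RANGE (CAL_RUN_ID)
( PARTITION GIDX1 VALUES LESS THAN ('M')      TABLESPACE PSINDEX,
  PARTITION GIDX2 VALUES LESS THAN (MAXVALUE) TABLESPACE PSINDEX );
```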
Advantages of Partitioning
Partitioning improves the availability and manageability of large tables and lets DBAs perform administrative tasks on one partition without affecting the other partitions. It also improves performance by letting SQL statements scan fewer rows. When running PeopleSoft batch processes in parallel, you can reduce I/O contention by isolating each job stream in its own partition on large, high-volume transaction tables and carefully managing the placement of the partitioned datafiles. You are also likely to see large performance gains on queries that perform full table scans: when the table is properly partitioned, the query only needs to perform a full scan on a single partition rather than the entire table.
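For example, partition-level maintenance lets a DBA operate on one job stream's data without touching the rest of the table. The partition and object names below are illustrative assumptions based on the earlier examples:

```sql
-- Hypothetical examples of partition-level administrative tasks
ALTER TABLE PS_GP_RSLT_ACUM TRUNCATE PARTITION GPACUM1;
ALTER INDEX PS_GP_RSLT_ACUM REBUILD PARTITION GPACUM1;
ANALYZE TABLE PS_GP_RSLT_ACUM PARTITION (GPACUM1) ESTIMATE STATISTICS;
```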
ROLLBACK SEGMENTS

Rule of thumb:

Online - have many small rollback segments
Batch  - have few large rollback segments
The preceding rule, while valid, may not be practical to implement in an environment where online and batch activity happen at the same time. A DBA can create many small rollback segments and a few large rollback segments in the database, and a batch process can be assigned a specific large rollback segment using "SET TRANSACTION USE ROLLBACK SEGMENT RBSLARGE". The practical problem is truly dedicating the large rollback segment to the batch process: other online transactions may also end up using the large segment. The only way to dedicate the large segments to the batch process is to run it when no online transactions are running. A DBA should therefore make a fair assessment of the need to run batch and online processes simultaneously and size the rollback segments accordingly. The following are a few generic guidelines:
Online
If the batch processes are not run when the online transactions are running, then the following setup may be useful. Example:
RB01 - Online
RB02 - Online
RB03 - Online
RB04 - Online
RB05 - Online
RB06 - Online
RBL1 - Offline
RBL2 - Offline
RB01 - RB06 are smaller rollback segments. RBL1 - RBL2 are larger rollback segments.

If the online transactions are run along with batch processes, then the following setup may be useful. Example:

RB01 - Online
RB02 - Online
RB03 - Online
RB04 - Online
RB05 - Online
RB06 - Online
RB07 - Online
RB08 - Online
RB01 - RB08 are medium-sized rollback segments to support both online and batch processes.
Batch
If the batch process can be run when no online transactions are running, then dedicating the large rollback segment to the process will help; but this may not be practical when multiple jobs of the same process are run. The better option in such cases is to bring the required large rollback segments online and take the other, small rollback segments offline before running the batch processes. The following examples give some guidelines for assigning the large rollback segment to the process.
SQR/COBOL
If the batch process is an SQR or COBOL program, the program can be changed to issue the following command at the beginning of the process: "SET TRANSACTION USE ROLLBACK SEGMENT RBLARGE;"
Example: The following code bit should be called at the beginning of an SQR program or after a transaction COMMIT or ROLLBACK.

! --------------------
! - BEGIN CODE BIT
! --------------------
begin-procedure get-large-rollback
begin-sql
SET TRANSACTION USE ROLLBACK SEGMENT RBS_LARGE
end-sql
end-procedure get-large-rollback
! --------------------
! - END CODE BIT
! --------------------
Application Engine
If the batch program is written in Application Engine, then a specific rollback segment can be allocated by adding a step at the beginning of the process with a PeopleCode action. Specify the following line of code to achieve that:

%SQLEXEC("SET TRANSACTION USE ROLLBACK SEGMENT RBLARGE;");
The number of hard parses can be identified in a PeopleSoft Application Engine trace (trace value 128). In Oracle trace output, such statements appear as individual statements, each of which parses once. It is somewhat difficult to identify the SQL statements that are hard parsed because they use literals instead of bind variables.
Most PeopleSoft programs written in Application Engine, SQR, and COBOL already take care of this issue. In some situations, however, certain steps in an AE process do not use bind variables; this happens when certain kinds of statements cannot handle bind variables on some platforms. Because Oracle deals with bind variables efficiently, such statements can typically be made to use bind variables. The following section gives some guidelines for using them.
AE Trace --

16.46.00 ......(PC_PRICING.BL6100.10000001)
(SQL)
UPDATE PS_PC_RATE_RUN_TAO SET RESOURCE_ID = 10000498
WHERE PROCESS_INSTANCE = 419
AND BUSINESS_UNIT = 'US004'
AND PROJECT_ID = 'PRICINGA1'
AND ACTIVITY_ID = 'ACTIVITYA1'
AND RESOURCE_ID = 'VUS004VA10114050'
AND LINE_NO = 1
/
-- Row(s) affected: 1
Oracle Trace Output

********************************************************************************

UPDATE PS_PC_RATE_RUN_TAO SET RESOURCE_ID = 10000561
WHERE PROCESS_INSTANCE = 419
AND BUSINESS_UNIT = 'US004'
AND PROJECT_ID = 'PRICINGA1021'
AND ACTIVITY_ID = 'ACTIVITYA2042'
AND RESOURCE_ID = 'VUS004VA10210124050'
AND LINE_NO = 1
call         cpu    elapsed    disk    query  current
-------  ------- ---------- ------- -------- --------
Parse       0.00       0.00       0        0        0
Execute     0.01       0.01       0        2        5
Fetch       0.00       0.00       0        0        0
-------  ------- ---------- ------- -------- --------
total       0.01       0.01       0        2        5
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 21 (PROJ84)

Rows     Row Source Operation
-------  ---------------------------------------------------
      1  UPDATE PS_PC_RATE_RUN_TAO
      2  INDEX RANGE SCAN (object id 16735)
Rows     Execution Plan
-------  ---------------------------------------------------
      0  UPDATE STATEMENT GOAL: CHOOSE
      1  UPDATE OF 'PS_PC_RATE_RUN_TAO'
      2  INDEX GOAL: ANALYZED (RANGE SCAN) OF 'PS_PC_RATE_RUN_TAO' (UNIQUE)

********************************************************************************

Statement with Re-Use flag:
AE Trace --

16.57.57 ......(PC_PRICING.BL6100.10000001)
(SQL)
UPDATE PS_PC_RATE_RUN_TAO SET RESOURCE_ID = :1
WHERE PROCESS_INSTANCE = 420
AND BUSINESS_UNIT = :2
AND PROJECT_ID = :3
AND ACTIVITY_ID = :4
AND RESOURCE_ID = :5
AND LINE_NO = :6
/
-- Bind variables:
--   1) 10000751
--   2) US004
--   3) PRICINGA1
--   4) ACTIVITYA1
--   5) VUS004VA10114050
--   6) 1
-- Row(s) affected: 1
Oracle Trace Output

********************************************************************************

UPDATE PS_PC_RATE_RUN_TAO SET RESOURCE_ID = :1
WHERE PROCESS_INSTANCE = 420
AND BUSINESS_UNIT = :2
AND PROJECT_ID = :3
AND ACTIVITY_ID = :4
AND RESOURCE_ID = :5
AND LINE_NO = :6
call      count      cpu    elapsed    disk    query  current
-------  ------  ------- ---------- ------- -------- --------
Parse         1     0.00       0.00       0        0        0
Execute     252     0.22       0.22       0      509     1284
Fetch         0     0.00       0.00       0        0        0
-------  ------  ------- ---------- ------- -------- --------
total       253     0.22       0.22       0      509     1284
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 21 (PROJ84)

Rows     Row Source Operation
-------  ---------------------------------------------------
    252  UPDATE PS_PC_RATE_RUN_TAO
    504  INDEX RANGE SCAN (object id 16735)
Execution Plan
---------------------------------------------------
UPDATE STATEMENT GOAL: CHOOSE
  UPDATE OF 'PS_PC_RATE_RUN_TAO'
    INDEX GOAL: ANALYZED (RANGE SCAN) OF 'PS_PC_RATE_RUN_TAO' (UNIQUE)
********************************************************************************
SQR/COBOL -- CURSOR_SHARING
Most of the SQR and COBOL programs are written to use bind variables. If you find programs that do not use bind variables and you are not able to modify the code, then the CURSOR_SHARING option is the way to go. Oracle introduced the CURSOR_SHARING parameter in Oracle8i. By default its value is EXACT, meaning the database looks for an exact match of the SQL statement text while parsing. The other value that can be set for this parameter is FORCE. With this setting the database looks for a similar statement, ignoring the literal values passed to it: it replaces the literals with system bind variables, treats the statements as a single statement, and parses it once.

How to Set the CURSOR_SHARING Value

The parameter can be set at the instance level or at the session level.

Instance level: set the following parameter in the init<dbname>.ora file and restart the database.

CURSOR_SHARING = FORCE

Session level: the following syntax sets the value at the session level.

ALTER SESSION SET CURSOR_SHARING = 'FORCE';

Setting the value at the instance level forces bind variables for every statement that runs in the database instance. It may give an improvement due to reduced parsing, but it is unnecessary if the application programs already handle bind variables, and it can hurt the performance of other programs because histograms are no longer useful once literals are replaced. Setting the value at the session level is more appropriate. If you identify a program (SQR/COBOL) that does not use bind variables and need to force binds at the database level, adding the ALTER SESSION command at the beginning of the program is the better option. If you are not willing to change the application program, implementing the session-level command through a trigger gives you more flexibility.
Session Level (using trigger): The following sample trigger code can be used to implement the session-level option.

CREATE OR REPLACE TRIGGER MYDB.SET_TRACE_INS6000
BEFORE UPDATE OF RUNSTATUS ON MYDB.PSPRCSRQST
FOR EACH ROW
WHEN (NEW.RUNSTATUS = 7
  AND OLD.RUNSTATUS != 7
  AND NEW.PRCSTYPE = 'SQR REPORT'
  AND NEW.PRCSNAME = 'INS6000')
BEGIN
  EXECUTE IMMEDIATE 'ALTER SESSION SET CURSOR_SHARING=FORCE';
END;
/

Note: Make sure to grant the ALTER SESSION privilege to MYDB to make this trigger work.

Example: SQL statement issued from an SQR/COBOL program:

SELECT ... FROM PS_PHYSICAL_INV PI, PS_STOR_LOC_INV SLI
WHERE ...
NOT EXISTS (SELECT 'X' FROM PS_PICKZON_INV_VW PZI
            WHERE PZI.BUSINESS_UNIT = 'US008'
              AND PZI.INV_ITEM_ID = 'PI000021' AND ..)
ORDER BY ..

The above statement uses literal values in the WHERE clause, causing a hard parse for each execution. Every hard parse carries some performance overhead, so minimizing hard parses boosts performance. This statement gets executed for every combination of BUSINESS_UNIT and INV_ITEM_ID. Per the data composition used in this benchmark, there were about 13,035 unique combinations of BUSINESS_UNIT and INV_ITEM_ID and about 19,580 total executes.

Oracle TKPROF Output with CURSOR_SHARING=FORCE

SELECT ... FROM PS_PHYSICAL_INV PI, PS_STOR_LOC_INV SLI
WHERE ..
NOT EXISTS (SELECT :SYS_B_09 FROM PS_PICKZON_INV_VW PZI
            WHERE PZI.BUSINESS_UNIT = :SYS_B_10
              AND PZI.INV_ITEM_ID = :SYS_B_11 AND ..)
ORDER BY ..

Pros and Cons of CURSOR_SHARING
By setting the above parameter at the database level, the overall processing time was reduced significantly. Overall statistics with no bind variables:
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call      count      cpu    elapsed    disk      query    current       rows
-------  ------  ------- ---------- ------- ---------- ----------  ---------
Parse     26389    98.27      99.54       0       1074          0          0
Execute  404647    51.09      50.11    1757     242929     371000      78376
Fetch    517618    47.85      47.43    3027    1455101     235446     189454
-------  ------  ------- ---------- ------- ---------- ----------  ---------
total    948654   197.21     197.08    4784    1699104     606446     267830
Misses in library cache during parse: 13190 Misses in library cache during execute: 1
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call      count      cpu    elapsed    disk     query   current       rows
-------  ------  ------- ---------- ------- --------- ---------  ---------
Parse     27118     5.35       5.06       0        49         1          0
Execute   33788     2.42       2.22       0      5577       235        229
Fetch     54988     2.44       2.57       1     97241         0      47621
-------  ------  ------- ---------- ------- --------- ---------  ---------
total    115894    10.21       9.85       1    102867       236      47850
Overall statistics with CURSOR_SHARING=FORCE:

OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call      count      cpu    elapsed    disk      query    current       rows
-------  ------  ------- ---------- ------- ---------- ----------  ---------
Parse     26389    15.44      15.69       0          0          0          0
Execute  404647    44.02      43.51     173     231362     333538      78376
Fetch    517618    45.47      43.02    2784    1439571     235104     189454
-------  ------  ------- ---------- ------- ---------- ----------  ---------
total    948654   104.93     102.22    2957    1670933     568642     267830
Misses in library cache during parse: 64 Misses in library cache during execute: 1
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call     count      cpu    elapsed    disk   query  current    rows
------- ------  ------- ---------- ------- ------- --------  ------
Parse      356     0.08       0.10       0       0        0       0
Execute    357     0.47       0.48       0    5568      228     228
Fetch      667     0.00       0.01       0    1333        0     552
------- ------  ------- ---------- ------- ------- --------  ------
total     1380     0.55       0.59       0    6901      228     780
From the above trace statistics, it can be seen that the number of library cache misses decreased with the use of bind variables.

Original timing:                  197 sec
Time with CURSOR_SHARING option:  102 sec
%Gain:                            48%
Parameter: SESSION_CACHED_CURSORS
For processes that use bind variables, but open/close a cursor or (soft) parse for each execution of a SQL statement, the Oracle parameter SESSION_CACHED_CURSORS can give some scalability improvement. This is mainly useful for repeated statements issued through PeopleCode using the SQLExec command. SESSION_CACHED_CURSORS is a numeric parameter that can be set at the instance level or at the session level using the command:

ALTER SESSION SET SESSION_CACHED_CURSORS = NN;

The value NN determines how many cached cursors there can be in your session. To get placed in the session cache, the same statement has to be parsed three times; a pointer to the shared cursor is then added to your session cache. If all session cache cursors are in use, the least recently used entry is discarded. Depending on the available memory, a value between 10 and 50 can show some performance gains.
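As a sketch, you can enable the cache and then check how often it is being hit. The statistic names below are the standard V$SYSSTAT names; the value 50 is an illustrative assumption:

```sql
-- Enable a 50-entry session cursor cache for the current session
ALTER SESSION SET SESSION_CACHED_CURSORS = 50;

-- Compare session cursor cache hits to overall parse counts
SELECT name, value
FROM v$sysstat
WHERE name IN ('session cursor cache hits',
               'session cursor cache count',
               'parse count (total)');
```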
HISTOGRAMS
What Are Histograms?
Cost-based optimization uses data-value histograms to get accurate estimates of the distribution of column data. A histogram partitions the values in the column into bands, so that all column values in a band fall within the same
range. Histograms provide improved selectivity estimates in the presence of data skew, resulting in optimal execution plans for non-uniform data distributions. Oracle uses height-balanced histograms (as opposed to width-balanced). Width-balanced histograms divide the data into a fixed number of equal-width ranges and then count the number of values falling into each range. Height-balanced histograms place approximately the same number of values into each range, so the endpoints of the ranges are determined by how many values fall into each range.
Columns such as PROCESS_INSTANCE and ORD_STATUS are likely candidates to benefit from histograms.
Sample: a query that used histogram statistics to boost performance
Problem Statement:
We observed that the trace files showed full table scans for most of the queries involving the tables PS_BI_HDR, PS_BI_LINE, and PS_BI_LINE_DST. A full table scan of a big table is almost always relatively costly. The following is a sample of a SQL statement that we found to be inefficient due to a full table scan on PS_BI_LINE, a large-volume key table.
********************************************************************************
UPDATE PS_BI_LINE SET CURRENCY_CD_XEU = 'EUR', ...
WHERE INVOICE IN
(SELECT DISTINCT INVOICE FROM PS_BI_CURRCONV_TMP
 WHERE PROCESS_INSTANCE = 3698
 AND INVOICE = PS_BI_LINE.INVOICE
 AND BUSINESS_UNIT = PS_BI_LINE.BUSINESS_UNIT
 AND PROCESS_FLG = 'S')
AND BUSINESS_UNIT = 'FCUSA'
AND PROCESS_INSTANCE = 3698
call     count      cpu    elapsed    disk      query    current       rows
------- ------  ------- ---------- ------- ---------- ----------  ---------
Parse        1     0.01       0.01       0          0          0          0
Execute      1   303.75     667.66  739444    1630166     340095     300000
Fetch        0     0.00       0.00       0          0          0          0
------- ------  ------- ---------- ------- ---------- ----------  ---------
total        2   303.76     667.67  739444    1630166     340095     300000
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 18 (FSTNAL)

Rows      Execution Plan
-------   ---------------------------------------------------
      0   UPDATE STATEMENT GOAL: CHOOSE
      0   UPDATE OF 'PS_BI_LINE'
 300000   FILTER
6000000   TABLE ACCESS GOAL: ANALYZED (FULL) OF 'PS_BI_LINE'
 300000   TABLE ACCESS GOAL: ANALYZED (BY INDEX ROWID) OF 'PS_BI_CURRCONV_TMP'
 300000   INDEX (RANGE SCAN) OF 'PSABI_CURRCONV_TMP' (UNIQUE)
********************************************************************************
Recommendation:
This particular SQL statement had to process 100,000 invoices, and 600,000 rows qualified to be updated in the PS_BI_LINE table. Using an index to access the table would definitely help the performance of the SQL statement. The existing index PSDBI_LINE is a good candidate: its columns are PROCESS_INSTANCE, BUSINESS_UNIT, INVOICE. Since the index has PROCESS_INSTANCE as its leading column, it is safe to assume the index was created for batch performance. Under Oracle rule-based optimization, the index would be favored for accessing the table. Unfortunately, that is not readily the case under cost-based optimization: the cost-based optimizer favors the full table scan, which, in this case, is not intended. A full table scan is still chosen by the optimizer even if the usual ANALYZE is run, because the ANALYZE command assumes that the distinct values in PROCESS_INSTANCE carry equal statistical weight. For example, if no BICURCNV process is executing, the value of PROCESS_INSTANCE in each row of the PS_BI_LINE table is zero. If a BICURCNV process is run, there are two distinct values in the PROCESS_INSTANCE column: zero, for the majority of the rows, and an assigned process instance number for the rows that will be processed by BICURCNV. If the usual ANALYZE command is then run, the database assumes that 50 percent of the rows in the table contain zero and the other 50 percent contain the assigned process instance number. This is a gross assumption for the database to make, and because of it the cost-based optimizer favors the full table scan over an index scan on PSDBI_LINE. In order to correct this discrepancy, we added the FOR COLUMNS option to the ANALYZE command. In effect, we built the data distribution information, or histogram, for the PROCESS_INSTANCE column.
As a result, the Cost-Based optimizer was able to make a more informed decision to use the PSDBI_LINE index.
In order to take advantage of these histograms, be sure to create them on the PROCESS_INSTANCE column of all high-volume tables. The following execution plan shows the improved access path and timings.
********************************************************************************
UPDATE PS_BI_LINE SET CURRENCY_CD_XEU = 'EUR', ...
WHERE INVOICE IN
(SELECT INVOICE FROM PS_BI_CURRCONV_TMP
 WHERE PROCESS_INSTANCE = 3694
 AND INVOICE = PS_BI_LINE.INVOICE
 AND BUSINESS_UNIT = PS_BI_LINE.BUSINESS_UNIT
 AND PROCESS_FLG = 'S')
AND BUSINESS_UNIT = 'FCUSA'
AND PROCESS_INSTANCE = 3694
call     count      cpu    elapsed    disk     query    current       rows
------- ------  ------- ---------- ------- --------- ----------  ---------
Parse        1     0.02       0.02       0         0          0          0
Execute      1   121.28     238.28   42701    203395     340093     300000
Fetch        0     0.00       0.00       0         0          0          0
------- ------  ------- ---------- ------- --------- ----------  ---------
total        2   121.30     238.30   42701    203395     340093     300000
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 18 (FSTNAL)

Rows      Execution Plan
-------   ---------------------------------------------------
      0   UPDATE STATEMENT GOAL: CHOOSE
      0   UPDATE OF 'PS_BI_LINE'
 300001   INDEX GOAL: ANALYZED (RANGE SCAN) OF 'PSDBI_LINE' (NON-UNIQUE)
 100000   INDEX (RANGE SCAN) OF 'PSABI_CURRCONV_TMP' (UNIQUE)
********************************************************************************
Please note that the access path shown above resulted from incorporating the literal value of PROCESS_INSTANCE. If Re-Use is checked, the value of %BIND(PROCESS_INSTANCE) is passed in the form of a bind variable. Having a bind variable for the PROCESS_INSTANCE column will not produce the execution plan that favors the PSDBI_LINE index. To make the AE program pass a resolved literal value for the PROCESS_INSTANCE column even when the Re-Use flag is checked, write the statement as follows:

WHERE PROCESS_INSTANCE = %ProcessInstance
  or
WHERE PROCESS_INSTANCE = %BIND(PROCESS_INSTANCE, STATIC)

The additional parameter STATIC resolves the literal value of PROCESS_INSTANCE before the query is sent to the database; %ProcessInstance achieves the same thing. For additional information on this parameter, please refer to the PeopleTools documentation on %BIND and %ProcessInstance.
Result:
By creating the histogram on PROCESS_INSTANCE for the PS_BI_LINE table, the SQL statement showed a good performance improvement. Taking the elapsed times from the TKPROF output above:

Without histogram:  667.67 seconds
With histogram:     238.30 seconds
%Gain:              approximately 64%
Creating Histograms
Create histograms on columns that are frequently used in WHERE clauses of queries and that have highly skewed data distributions. To do this, use the GATHER_TABLE_STATS procedure of the DBMS_STATS package. For example, to create a 10-bucket histogram on the SAL column of the EMP table, issue this statement:

EXECUTE DBMS_STATS.GATHER_TABLE_STATS ('scott', 'emp', METHOD_OPT => 'FOR COLUMNS SIZE 10 sal');

The SIZE keyword declares the maximum number of buckets for the histogram. You would create a histogram on the SAL column if there were an unusually high number of employees with the same salary and few employees with other salaries. You can also collect histograms for a single partition of a table.

Column statistics appear in the data dictionary views USER_TAB_COLUMNS, ALL_TAB_COLUMNS, and DBA_TAB_COLUMNS. Histograms appear in the data dictionary views USER_HISTOGRAMS, DBA_HISTOGRAMS, and ALL_HISTOGRAMS.
Viewing Histograms
You can find information about existing histograms in the database using these data dictionary views: USER_HISTOGRAMS ALL_HISTOGRAMS DBA_HISTOGRAMS
Find the number of buckets in each column's histogram in: USER_TAB_COLUMNS ALL_TAB_COLUMNS DBA_TAB_COLUMNS
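For example, a query along these lines lists the histogram buckets for the PROCESS_INSTANCE column of PS_BI_LINE (the table used earlier in this chapter); the view and column names are the documented data dictionary ones:

```sql
-- List histogram endpoints for the PROCESS_INSTANCE column of PS_BI_LINE
SELECT column_name, endpoint_number, endpoint_value
FROM user_histograms
WHERE table_name = 'PS_BI_LINE'
AND column_name = 'PROCESS_INSTANCE'
ORDER BY endpoint_number;

-- Number of histogram buckets per column of the table
SELECT column_name, num_buckets
FROM user_tab_columns
WHERE table_name = 'PS_BI_LINE';
```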
36
9/26/2002
FAQ on Histograms
1. What are the steps necessary in creating the histogram for the PROCESS_INSTANCE column of the PS_BI_LINE table?

Run the ANALYZE commands in the following order:

ANALYZE TABLE PS_BI_LINE ESTIMATE STATISTICS
ANALYZE TABLE PS_BI_LINE ESTIMATE STATISTICS FOR COLUMNS PROCESS_INSTANCE

2. How should I create histograms if the table statistics already exist?

Run the ANALYZE command as follows:

ANALYZE TABLE PS_BI_LINE ESTIMATE STATISTICS FOR COLUMNS PROCESS_INSTANCE

3. Can histograms exist without having table statistics?

Yes, but a histogram will not be effective without statistics on the underlying table.

4. How do I delete histograms and keep the table statistics in place?

Run the ANALYZE command as follows:

ANALYZE TABLE PS_BI_LINE ESTIMATE STATISTICS

5. How do I delete the statistics on an entire table, including histograms?

First of all, unless you have compelling reasons to delete the statistics, do not run the ANALYZE command below.

ANALYZE TABLE PS_BI_LINE DELETE STATISTICS

6. What happens if the table statistics are run after creating histograms?

Analyzing the table after creating histograms erases all the previously created histograms and creates just the table statistics.

7. How often should I run the histogram?
To maintain histogram information on a specific column like PROCESS_INSTANCE, the ANALYZE ... FOR COLUMNS command must be run as often as ANALYZE <table> is run. See FAQ #1 for details.

8. What is the overhead of running histograms?

The overhead incurred when creating a histogram is much the same as the overhead of running the typical ANALYZE command for a table. As a rule of thumb, any ANALYZE command should be run during the maintenance window.

9. What is a good source to learn more about Oracle histograms?

For more information on the subject, the Oracle Tuning Manual provides details on histograms.
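On releases where DBMS_STATS is available, the same operations can be sketched with DBMS_STATS instead of ANALYZE. The schema owner SYSADM below is an assumption (a common PeopleSoft owner, not stated in this Red Paper); the procedure and parameter names are the documented DBMS_STATS ones:

```sql
-- Gather table statistics plus a histogram on PROCESS_INSTANCE
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname    => 'SYSADM',          -- assumed schema owner
    tabname    => 'PS_BI_LINE',
    method_opt => 'FOR COLUMNS PROCESS_INSTANCE');
END;
/

-- Delete all statistics for the table, including histograms
BEGIN
  DBMS_STATS.DELETE_TABLE_STATS(ownname => 'SYSADM', tabname => 'PS_BI_LINE');
END;
/
```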
Scenario 1

[Diagram: the Process Scheduler runs on SERVER1 (the application server) and connects to the Oracle DB over TCP/IP.]
Running the Process Scheduler on the application server uses a TCP/IP connection to the database. Because a batch process may involve extensive SQL processing, this TCP/IP traffic can be a big overhead and can lengthen processing times. The impact is most evident in processes that do excessive row-by-row processing; for processes where the majority of SQL statements are set-based, the TCP/IP overhead may not be as significant. Have a dedicated network connection between the batch server and the database server to minimize the overhead.
Running the Process Scheduler on the database server will eliminate the TCP/IP overhead and improve the processing time. At the same time it does use the additional server memory.
Set the following value in the Process Scheduler configuration file "psprcs.cfg" to use the direct connection instead of TCP/IP:

UseLocalOracleDB=1

This kind of setup is useful for programs that do excessive row-by-row processing.
ONLINE TRACE
psappsrv.cfg

;-------------------------------------------------------------------------
; SQL Tracing Bitfield
;
; Bit    Type of tracing
; ---    ---------------
; 1      - SQL statements
; 2      - SQL statement variables
; 4      - SQL connect, disconnect, commit and rollback
; 8      - Row Fetch (indicates that it occurred, not data)
; 16     - All other API calls except ssb
; 32     - Set Select Buffers (identifies the attributes of columns
;          to be selected).
; 64     - Database API specific calls
; 128    - COBOL statement timings
; 256    - Sybase Bind information
; 512    - Sybase Fetch information
; 4096   - Manager information
; 8192   - Mapcore information
; Dynamic change allowed for TraceSql and TraceSqlMask
TraceSql=0
TraceSqlMask=12319
;-------------------------------------------------------------------------
; PeopleCode Tracing Bitfield
;
; Bit    Type of tracing
; ---    ---------------
; 1      - Trace entire program
; 2      - List the program
; 4      - Show assignments to variables
; 8      - Show fetched values
; 16     - Show stack
; 64     - Trace start of programs
; 128    - Trace external function calls
; 256    - Trace internal function calls
; 512    - Show parameter values
; 1024   - Show function return value
; 2048   - Trace each statement in program
; Dynamic change allowed for TracePC and TracePCMask
TracePC=0
TracePCMask=0
ORACLE TRACE
The following are the trace settings used to capture an Oracle trace.
Session Level (using trigger):

CREATE OR REPLACE TRIGGER MYDB.SET_TRACE_INS6000
BEFORE UPDATE OF RUNSTATUS ON MYDB.PSPRCSRQST
FOR EACH ROW
WHEN (NEW.RUNSTATUS = 7
  AND OLD.RUNSTATUS != 7
  AND NEW.PRCSTYPE = 'SQR REPORT'
  AND NEW.PRCSNAME = 'INS6000')
BEGIN
  EXECUTE IMMEDIATE 'ALTER SESSION SET SQL_TRACE=TRUE';
END;
/
TKPROF
Capture the Oracle trace and run TKPROF with the following sort options.
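The specific sort options did not survive in this copy of the paper; a commonly used invocation is sketched below. The trace and output file names are illustrative assumptions, while `sys=no` and the `sort` keys (prsela, exeela, fchela: elapsed time spent parsing, executing, and fetching) are standard TKPROF options.

```shell
# Sort statements by parse, execute, and fetch elapsed time (slowest first),
# and exclude recursive SYS statements from the report
tkprof ora_12345.trc ora_12345.prf sys=no sort=(prsela,exeela,fchela)
```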
STATSPACK
What Is STATSPACK?
Tuning a database is not easy; it can take multiple iterations to reach a stable environment. Oracle provides a tool called STATSPACK that gathers database information for a given period and reports on database health. Statspack is a useful tool for reactive tuning.
Statspack differs fundamentally from the well-known BSTAT/ESTAT tuning scripts in that it collects more information and stores the performance-statistics data permanently in Oracle tables, which can be used for later reporting and analysis. STATSPACK is a set of SQL scripts and PL/SQL stored procedures and packages for collecting performance statistics. It is available starting with Oracle 8.1.6. It provides more information than the UTLBSTAT/UTLESTAT utilities and automates some operations.
Collect Statistics

1. Run SQL*Plus and connect as perfstat (default password is perfstat):

connect perfstat/perfstat

2. Take a snapshot:

execute statspack.snap;

Each time the above command is issued, the database information is recorded along with the time. You must issue this command twice, once before the start of the process and once after its completion, in order to capture the information between the two snaps.
Generate Report
1. Run SQL*Plus and connect as perfstat (default password is perfstat):

connect perfstat/perfstat

2. To generate a report, run the following script:

On 8.1.6 on Unix:        @?/rdbms/admin/statsrep
On 8.1.6 on NT:          @%ORACLE_HOME%\rdbms\admin\statsrep
On 8.1.7 and 9i on Unix: @?/rdbms/admin/spreport
On 8.1.7 and 9i on NT:   @%ORACLE_HOME%\rdbms\admin\spreport
You need to specify the start and end snap IDs to get the report.
Uninstall
1. Run SQL*Plus and connect as SYSDBA:

connect / as sysdba

2. To uninstall STATSPACK, run the following script:

On 8.1.6 on Unix:        @?/rdbms/admin/statsdrp
On 8.1.6 on NT:          @%ORACLE_HOME%\rdbms\admin\statsdrp
On 8.1.7 and 9i on Unix: @?/rdbms/admin/spdrop
On 8.1.7 and 9i on NT:   @%ORACLE_HOME%\rdbms\admin\spdrop
RECOMMENDATIONS
Block Size
Thorough analysis should be done before choosing a block size at database creation time, because the choice can have a significant performance impact. When creating an Oracle database, you can later change just about any parameter EXCEPT DB_BLOCK_SIZE; the only way to change it is to delete everything and start over. Because of the importance of this parameter, choose the one that best suits your needs before you start.
Size Considerations
Small Block Size (2K to 8K)

Pros:
1) Reduces block contention.
2) Good for small numbers of rows.
3) Good for random access.

Cons:
1) Relatively large overhead.
2) Small number of rows per block.
3) Can cause more index blocks to be read.
Larger Block Size (16K)

Pros:
1) Less overhead.
2) Good for sequential access.
3) Good for very large rows.
4) Better performance for index reads.

Copyright PeopleSoft Corporation 2001. All rights reserved.
Cons:
1) Can increase block contention.
2) Wastes buffer cache space on random access to small rows.
Free buffer waits occur when a server process cannot find a free buffer or when the dirty queue is full. Keep in mind that these statistics and events could also indicate that the DBWn process needs tuning.
LRU Latch
Determine the hit percentage for the LRU latch:

select name, (gets - sleeps)/gets * 100 "LRU Hit%"
from v$latch
where name = 'cache buffers lru chain';

If the hit percentage for the LRU latch is less than 99%, increase the number of LRU latches by setting the parameter DB_BLOCK_LRU_LATCHES. Remember, the maximum number of latches is the lower of (number of CPUs x 2 x 3) and (number of buffers / 50).
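The latch check above is simple arithmetic over two V$LATCH columns. The following sketch is illustrative only; the function names and sample numbers are invented, not Oracle parameters or measurements:

```python
def lru_latch_hit_pct(gets, sleeps):
    """Hit percentage for the 'cache buffers lru chain' latch: (gets - sleeps) / gets."""
    return (gets - sleeps) * 100 / gets

def max_lru_latches(cpus, buffers):
    """Upper bound on DB_BLOCK_LRU_LATCHES: the lower of CPUs x 2 x 3 and buffers / 50."""
    return min(cpus * 2 * 3, buffers // 50)

# 1,000,000 gets with 20,000 sleeps gives a 98% hit rate -- below the 99% target,
# so more LRU latches may help, capped at min(48, 200) = 48 for 8 CPUs and 10,000 buffers.
print(lru_latch_hit_pct(1_000_000, 20_000))     # 98.0
print(max_lru_latches(cpus=8, buffers=10_000))  # 48
```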
Log Buffer
There should be no log buffer space waits.

select sid, event, seconds_in_wait, state
from v$session_wait
where event = 'log buffer space';

If time was spent waiting for space in the redo log buffer, consider increasing LOG_BUFFER or moving the log files to faster disks, such as striped disks. The redo buffer allocation retries value should be near 0; the number should be less than 1% of redo entries.

select name, value
from v$sysstat
where name in ('redo buffer allocation retries', 'redo entries');

If necessary, increase LOG_BUFFER (until the ratio is stable) or improve the checkpointing or archiving process. Keep in mind that a modest increase can significantly enhance throughput, and that the LOG_BUFFER size must be a multiple of the operating system block size.
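The 1% rule of thumb above is a ratio over two V$SYSSTAT values; a minimal sketch, with made-up sample counts:

```python
def redo_retry_ratio_pct(allocation_retries, redo_entries):
    """'redo buffer allocation retries' as a percentage of 'redo entries'."""
    return allocation_retries * 100 / redo_entries

# 500 retries against 200,000 redo entries is 0.25% -- within the 1% guideline;
# 5,000 retries would be 2.5% and would suggest increasing LOG_BUFFER.
print(redo_retry_ratio_pct(500, 200_000) < 1.0)    # True
print(redo_retry_ratio_pct(5_000, 200_000) < 1.0)  # False
```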
Tablespace I/O
Reserve the SYSTEM tablespace for data dictionary objects.
Create locally managed tablespaces to avoid space management issues.
Split tables and indexes into separate tablespaces.
Create separate rollback tablespaces.
Store very large database objects in their own tablespace.
Create one or more temporary tablespaces.
Specify DB_FILE_MULTIBLOCK_READ_COUNT (the default is 8). Monitor long-running full table scans with the v$session_longops view.

select sid, serial#, opname,
       to_char(start_time, 'HH24:MI:SS') as "START",
       (sofar/totalwork)*100 as percent_complete
from v$session_longops;

select name, value from v$sysstat where name like '%table scans%';
Checkpoints
Size the online redo log files to cut down the number of checkpoints.
Add online redo log groups to increase the time before LGWR starts to overwrite.
Regulate checkpoints with the initialization parameters: FAST_START_IO_TARGET, LOG_CHECKPOINT_INTERVAL, LOG_CHECKPOINT_TIMEOUT, DB_BLOCK_MAX_DIRTY_TARGET, LOG_CHECKPOINTS_TO_ALERT.
PCTFREE/PCTUSED
PCTFREE:
1) Default 10.
2) Zero if no update activity.
3) PCTFREE = 100 x upd/(upd + ins).

PCTUSED:
1) Default 40.
2) Set if rows are deleted.
3) PCTUSED = 100 - PCTFREE - 100 x rows x (ins + upd)/blocksize.

Note: upd is the average amount added by updates, in bytes; ins is the average initial row length at insert; rows is the number of rows to be deleted before free list maintenance occurs.

Watch out for migration and chaining!

analyze table sales.order_hist compute statistics;

select num_rows, chain_cnt
from dba_tables
where table_name = 'ORDER_HIST';

analyze table sales.order_hist list chained rows;
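The two sizing formulas above translate directly into arithmetic; a minimal sketch, with made-up row metrics (none of these numbers come from a real table):

```python
def pctfree(upd, ins):
    """PCTFREE = 100 * upd / (upd + ins), where upd is the average bytes
    added by updates and ins is the average initial row length at insert."""
    return 100 * upd / (upd + ins)

def pctused(pct_free, rows, ins, upd, blocksize):
    """PCTUSED = 100 - PCTFREE - 100 * rows * (ins + upd) / blocksize,
    where rows is the number of rows deleted before free list maintenance."""
    return 100 - pct_free - 100 * rows * (ins + upd) / blocksize

# Rows start at 80 bytes, grow by 20 bytes via updates; 2 deletes per 8K block.
pf = pctfree(upd=20, ins=80)
pu = pctused(pf, rows=2, ins=80, upd=20, blocksize=8192)
print(pf, round(pu, 2))  # 20.0 77.56
```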
49
select owner_name, table_name, head_rowid
from chained_rows
where table_name = 'ORDER_HIST';

(For Oracle 8i, use alter table move instead of the technique using the previous two commands.)
Rebuilding Indexes
analyze index acct_no_idx validate structure;

select (del_lf_rows_len/lf_rows_len) * 100 as index_usage
from index_stats;

(index_usage is the percentage of leaf-entry space occupied by deleted rows. If it is greater than 10%, consider rebuilding.)

alter index acct_no_idx rebuild;
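The rebuild criterion above is a threshold check on two INDEX_STATS columns; a hedged sketch with invented values:

```python
def deleted_leaf_pct(del_lf_rows_len, lf_rows_len):
    """Percentage of leaf-row length occupied by deleted rows (the index_usage value above)."""
    return del_lf_rows_len * 100 / lf_rows_len

def should_rebuild(del_lf_rows_len, lf_rows_len, threshold_pct=10.0):
    """Suggest a rebuild when deleted leaf-row length exceeds threshold_pct."""
    return deleted_leaf_pct(del_lf_rows_len, lf_rows_len) > threshold_pct

print(should_rebuild(1_500, 10_000))  # 15% deleted -> True, consider a rebuild
print(should_rebuild(400, 10_000))    # 4% deleted -> False
```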
Sorting
Set SORT_AREA_SIZE and SORT_MULTIBLOCK_READ_COUNT (which forces the sort to read a larger section of each run into memory during a merge pass) appropriately; 2 to 3 MB for SORT_AREA_SIZE is not unreasonable for a data warehouse. Avoid sort operations whenever possible. Reduce swapping and paging by ensuring that sorting is done in memory where possible. Reduce space allocation calls by allocating temporary space appropriately.

select disk.value "Disk", mem.value "Mem",
       (disk.value/mem.value) * 100 "Ratio"
from v$sysstat mem, v$sysstat disk
where mem.name = 'sorts (memory)'
  and disk.name = 'sorts (disk)';

The ratio of disk sorts to memory sorts should be less than 5%. Adjust SORT_AREA_SIZE if necessary.

select tablespace_name, current_users, total_extents, used_extents,
       extent_hits, max_used_blocks, max_sort_blocks
from v$sort_segment;
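The 5% disk-sort guideline above is likewise a simple ratio; a small illustrative sketch (the sample counts are invented):

```python
def disk_sort_ratio_pct(disk_sorts, memory_sorts):
    """'sorts (disk)' as a percentage of 'sorts (memory)'."""
    return disk_sorts * 100 / memory_sorts

# 300 disk sorts against 20,000 in-memory sorts is 1.5% -- under the 5% guideline,
# so SORT_AREA_SIZE is probably adequate.
ratio = disk_sort_ratio_pct(300, 20_000)
print(ratio, ratio < 5.0)  # 1.5 True
```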
CUSTOMER VALIDATION
PeopleSoft is working with PeopleSoft customers to get feedback and validation on this document. Lessons learned from these customer experiences will be posted here.
FIELD VALIDATION
PeopleSoft Consulting has provided feedback and validation on this document. Additional lessons learned from field experience will be posted here.
Appendix C - References
1. PeopleSoft Installation Guide - Oracle Tuning chapter
2. http://technet.Oracle.com
3. http://www.Oracle.com/oramag/
4. http://metalink.Oracle.com
5. http://www.ixora.com.au
6. http://www.dbasupport.com
7. http://www.dba-village.com
8. http://www.lazydba.com
9. http://www.orafaq.com
10. http://www.Oracletuning.com
Reviewers
The following people reviewed this Red Paper:
Jerry Zarate - PeopleTools
John Whitehead - Performance & Benchmarks
Vadali Subrahmanyeswar - Performance & Benchmarks
Revision History
1. 07/17/02: Created document.