
ORACLE DATABASE PERFORMANCE MONITORING & TUNING
Everest Project PT3 Database Performance Analysis and Recommendations

Author:        Mercy Wanjiru
Creation Date: February 21, 2002
Last Updated:  March 05, 2002
Version:       1.0

Contents

Introduction
Scope
Database Changes
Top 5 Wait Events
    Latch Free
        Cache Buffer Chains
        Cache Buffer Handles
        Library Cache
        Shared Pool
        Row Cache Objects
Wait Events
    Buffer Busy Waits
    Db File Sequential Reads
    Db File Scattered Read
    Enqueue
Other Performance Considerations
Summary


Introduction
The purpose of this document is to provide the Performance Team and Agilent DBAs with an analysis of how the database performed in PT3. It also identifies performance problem areas and recommends fixes for them. Monitoring and tuning of the database was performed using SQL scripts and the Oracle STATSPACK utility.

STATSPACK
STATSPACK provides statistical information and stores the performance statistics permanently in Oracle tables, enabling snapshot reporting. Reports were generated for the various scenario runs and are located in the directory /home/oracle/PT3/statspack on the Superdome (eprddb1.cos.agilent.com). The naming convention for the report files is run#_DDMMYYYY_A.rep, e.g., run1_18022002.rep, where:
#    - run of the day
DD   - day of run
MM   - month of run
YYYY - year
Thresholds for parse calls, disk_reads, and buffer_gets are specified in the /home/oracle/PT3/statspack/appstats.sql file.

SQL Scripts
Given that most of the statistical information is dynamic, SQL scripts were used to drill down to specific problem areas during a scenario run. For example, the following query was used to collect details of objects experiencing buffer busy waits.
select p1, p2 from v$session_wait where wait_time=0 and event='buffer busy waits';

A SQL job was created to run regularly during a scenario run and collect the P1 (file id) and P2 (block id) column values for waiting sessions (V$SESSION_WAIT).

NOTE: This document covers performance information gathering and problem investigation. A successful run of 5000 virtual user connections to the database was achieved during the last week of PT3. Tuning is time consuming, and more time is needed to implement and test changes that would improve performance.
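The file and block identifiers collected this way can be mapped back to the segment that owns the block. A minimal sketch (substitute the collected P1 and P2 values for the substitution variables; note that scanning dba_extents can itself be expensive on a busy system):

```sql
-- Map a file# (P1) and block# (P2) from V$SESSION_WAIT back to
-- the owning segment. &file_id and &block_id stand in for the
-- collected P1 and P2 values.
select owner, segment_name, segment_type
from   dba_extents
where  file_id = &file_id
and    &block_id between block_id and block_id + blocks - 1;
```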

Scope
The performance testing executed a mix of transactions collected from Agilent clients for 5000 virtual user connections to the database using Mercury Interactive. Additionally, there were concurrent requests executing in the background. Other variations added to the workload included Configurator scripts, Firedam tests, and/or Workstream concurrent processes. The first consistent successful run of 5000 virtual users was achieved in the week of Feb 18, 2002, and the database performance analysis and recommendations of this document are based on the scenario runs for that week. Although the required 5000 virtual user connections were reached, there was not enough time to implement and test the database performance recommendations suggested in this document.

Database Changes
At different times during the scenario runs, database changes were implemented based on the statistics observed. These changes are summarized in the following table.

Database Change                  Start Value   02/18/2002   02/19/2002   02/20/02
No. of rollback segments         150                        170          200
LOG_BUFFER                       2097152       2149376      2673664      2935808
DB_FILE_MULTIBLOCK_READ_COUNT    8             16           16           16
SHARED_POOL_SIZE                 1500000000    1500000000   1700000000   1800000000
SHARED_POOL_RESERVED_SIZE        300000000     300000000    400000000    500000000

Table 1. Database Changes Implemented

Other changes that may have contributed to performance improvement included:
1. Purging of Workflow tables.
2. Gathering schema statistics for application tables.
3. Pinning objects identified using the script $AD_TOP/sql/ADXCKPIN.sql.
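The current settings of the parameters listed in Table 1 can be confirmed directly against the instance. A minimal sketch:

```sql
-- Confirm the current values of the parameters changed during PT3
-- (values in Table 1 are from this environment, not Oracle defaults).
select name, value
from   v$parameter
where  name in ('log_buffer',
                'db_file_multiblock_read_count',
                'shared_pool_size',
                'shared_pool_reserved_size');
```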

Top 5 Wait Events


The following top wait events were consistent across all 4 runs:
- Latch Free
- Library Cache Pin
- Library Cache Load Lock
- Enqueue
- Buffer Busy Waits
- Db File Sequential Read
- Db File Scattered Read

Database performance changes were implemented as indicated in the section above, causing some noticeable differences between the 02/19/2002 and 02/21/2002 statistics. Also, during the 02/21/02 scenario run, Workstream concurrent processes were not running.

02/19/2002
Top 5 Wait Events
~~~~~~~~~~~~~~~~~
                                                                Wait  % Total
Event                                         Waits        Time (cs)  Wt Time
--------------------------------------------- ------------ ------------ -----
latch free                                      63,957,731  472,337,090 97.21
library cache pin                                  272,839    8,654,966  1.78
library cache load lock                             19,662    2,075,405   .43
enqueue                                              9,472    1,241,677   .26
buffer busy waits                                   40,476    1,217,326   .25
-------------------------------------------------------------

Latch Activity

                                                   Pct    Avg              Pct
                                   Get             Get   Slps     NoWait NoWait
Latch Name                         Requests       Miss  /Miss   Requests   Miss
----------------------------- -------------- ------ ------ ------------ ------
row cache objects                 95,439,142    1.2    0.2      594,185    6.4
cache buffer handles             238,729,937   11.3    0.1            0
cache buffers chains             439,934,567    1.7    2.9    2,333,016    2.5
cache buffers lru chain            3,558,850    0.0    0.6    1,889,691    0.0
latch wait list                    3,261,171    1.0    0.0    3,349,319    1.7
library cache                  3,801,193,803    0.2    0.5    5,365,606    4.1
library cache load lock            1,320,744    0.1    0.1            0
shared pool                       43,588,491    6.1    0.6            0

02/21/2002
Top 5 Wait Events
~~~~~~~~~~~~~~~~~
                                                                Wait  % Total
Event                                         Waits        Time (cs)  Wt Time
--------------------------------------------- ------------ ------------ -----
latch free                                      16,254,645  133,190,070 96.91
library cache pin                                  136,720    1,708,794  1.24
buffer busy waits                                   37,166    1,680,763  1.22
enqueue                                              3,180      323,541   .24
db file sequential read                          1,722,889      315,648   .23
-------------------------------------------------------------

Latch Activity

                                                   Pct    Avg              Pct
                                   Get             Get   Slps     NoWait NoWait
Latch Name                         Requests       Miss  /Miss   Requests   Miss
----------------------------- -------------- ------ ------ ------------ ------
row cache objects                  9,675,883    0.1    0.2       10,210    1.3
cache buffer handles             367,790,444   10.1    0.1            0
cache buffers chains             712,107,755    0.8    2.1    3,619,653    1.8
cache buffers lru chain            5,396,231    0.0    0.6    3,214,737    0.0
latch wait list                      518,275    0.9    0.2      525,743    1.4
library cache                    161,227,235    0.7    0.7      107,648    3.0
library cache load lock               24,115    0.2    1.0            0
shared pool                       12,238,655    1.1    0.4            0

The Redo Log statistics also show some improvement.

02/19/02


NAME                                  VALUE
--------------------------------- ----------
redo entries                        20416523
redo buffer allocation retries          1197
redo log space requests                  230

02/21/02

NAME                                  VALUE
--------------------------------- ----------
redo entries                        18483845
redo buffer allocation retries           120
redo log space requests                  126

Latch Free
Latches are low-level serialization mechanisms used to protect shared data structures in the SGA. Time spent waiting for a latch is time wasted. Latches that consistently showed waits included:
- Cache Buffer Chains
- Cache Buffer Handles
- Library Cache
- Shared Pool
- Row Cache Objects

Cache Buffer Chains
This latch is needed when user processes scan the SGA for database cache buffers. Contention for individual blocks can show up as cache buffers chains latch contention. Each cache buffers chains latch covers a list of buffers, and a single hash latch may cover many buffers. DBAs can use the touch count (TCH) column in the X$BH table to identify hot buffers. The following query was used to identify hot buffers.
select x.obj object, d.object_name, d.object_type, count(1) buffers,
       avg(x.tch) avg_touches, sysdate, d.owner
from   x$bh x, dba_objects d
where  x.obj = d.object_id
and    x.lru_flag = 8
group  by x.obj, d.object_name, d.object_type, d.owner
having avg(x.tch) > 5 and count(1) > 20
order  by avg_touches;

The following table lists tables that were consistently selected as being in the hot portion of the LRU list (i.e., lru_flag = 8, not in the cold portion of the LRU).

Table Owner   Table Name   # of Rows   # of Blocks

* ** * ** ** ** * * *

** * *

** *

AP        AP_INVOICES_ALL
BOM       BOM_INVENTORY_COMPONENTS
SYS       DEPENDENCY$
APPLSYS   FND_DESCR_FLEX_COLUMN_USAGES
APPLSYS   FND_DOC_SEQUENCE_ASSIGNMENTS
APPLSYS   FND_FLEX_VALUES_TL
APPLSYS   FND_FORM_FUNCTIONS
APPLSYS   FND_FORM_FUNCTIONS_TL
APPLSYS   FND_LOOKUP_VALUES
APPLSYS   FND_MENU_ENTRIES
APPLSYS   FND_MENU_ENTRIES_TL
APPLSYS   FND_PROFILE_OPTIONS
APPLSYS   FND_PROFILE_OPTION_VALUES
GL        GL_PERIOD_STATUSES
SYS       HIST_HEAD$
HR        HR_ALL_ORGANIZATION_UNITS_TL
HR        HR_LOCATIONS_ALL_TL
AR        HZ_CUST_SITE_USES_ALL
AR        HZ_LOCATIONS
SYS       IDL_SB4$
SYS       IDL_UB2$
INV       MTL_CATEGORIES_B
INV       MTL_CATEGORIES_TL
INV       MTL_MATERIAL_TRANSACTIONS
INV       MTL_MATERIAL_TRANSACTIONS_TEMP
INV       MTL_RESERVATIONS
SYS       OBJ$
SYS       OBJAUTH$
PO        PO_HEADERS_ALL
PO        PO_LINES_ALL
PO        PO_LINE_LOCATIONS_ALL
QP        QP_LIST_HEADERS_B
QP        QP_LIST_HEADERS_TL
QP        QP_LIST_LINES
QP        QP_PRICING_ATTRIBUTES
PO        RCV_SHIPMENT_HEADERS
PO        RCV_SHIPMENT_LINES
SYS       SYN$
SYS       TRIGGER$
SYS       VIEW$
APPLSYS   WF_ACTIVITIES
APPLSYS   WF_PROCESS_ACTIVITIES

19413 1735484.85 35408 3153.53535 1004176.77 9070.70707 180807.071 1278714.14 23874 477381 3284.84848 20260 55141.4141 11739.3939 7821.21212 263857.576 139253.535 10022.2222 200416.162 357319 2335.35354 9975 99793.9394 158243.434 75762.6263 6652.52525 133145.455 3208010.1 9501381 2936.36364 2556.56566

790 60430 1015 50 9250 125 2190 20110 180 5405 135 220 800 115 110 6450 5189 185 2430 12350 380 230 3930 4339 3035 120 1800 106305 225235 65 100

9347.47475 16161.6162

690 1075

Table 2. Tables with hot blocks
* means the table had high touch counts on more than one scenario run
** means the table had high touch counts on every scenario run

Recommendation:
Use multiple buffer pools:
- A "keep" cache for small, frequently accessed tables that require quick response time.
- A "recycle" cache for very large tables with random I/O.
If contention is for a few hot blocks, poorly tuned SQL may be the cause. Look at the SQL statements with high numbers of buffer gets in the STATSPACK report.

Note: Multiple buffer pools were implemented; however, no successful runs were executed to test or tune the suggested recommendation. Modifications made included:
Edits to the initOPR1.ora parameter file:
- BUFFER_POOL_KEEP=(buffers:40000, lru_latches:4)
- Increased DB_BLOCK_BUFFERS to 100000
Caching hot tables:
alter table applsys.FND_LOOKUP_VALUES storage(buffer_pool keep);
alter table applsys.FND_MENU_ENTRIES storage(buffer_pool keep);
alter table applsys.FND_PROFILE_OPTIONS storage(buffer_pool keep);
alter table applsys.FND_FORM_FUNCTIONS storage(buffer_pool keep);
alter table ar.HZ_LOCATIONS storage(buffer_pool keep);
alter table applsys.FND_PROFILE_OPTION_VALUES storage(buffer_pool keep);
alter table applsys.FND_USER storage(buffer_pool keep);
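Once tables have been moved to the keep pool, the assignment can be verified from the data dictionary. A minimal sketch:

```sql
-- List application tables currently assigned to the KEEP buffer pool.
select owner, table_name, buffer_pool
from   dba_tables
where  buffer_pool = 'KEEP'
order  by owner, table_name;
```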

Cache Buffer Handles This latch protects the State Objects that are needed by a process to make a change to a block/buffer.

Library Cache
This latch protects operations within the library cache itself. Contention for the library cache is usually caused by heavy use of literal SQL or a very poorly configured shared pool. Each STATSPACK report has a section of problem SQL with high buffer gets (CPU intensive) and high physical reads (I/O intensive). These SQL statements will need to be reviewed and modified to reduce library cache contention.
Recommendation: Refer to Note:62143.1, Understanding and Tuning the Shared Pool in Oracle7, Oracle8, and Oracle8i, on Metalink.
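The problem SQL flagged in the STATSPACK report can also be pulled directly from the shared pool. A minimal sketch (the ROWNUM cutoff of 20 is an arbitrary choice for illustration):

```sql
-- Top statements by buffer gets; high values flag CPU-intensive SQL.
select *
from   (select buffer_gets, disk_reads, executions, sql_text
        from   v$sqlarea
        order  by buffer_gets desc)
where  rownum <= 20;
```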

Shared Pool
This latch protects critical operations when allocating and freeing memory in the shared pool. Some of the statements requesting large memory allocations and causing shared pool memory to be aged out were identified using the following query:
select * from x$ksmlru where ksmlrsiz > 0;
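Shared pool free memory can be tracked alongside this. A minimal sketch:

```sql
-- Free memory remaining in each SGA pool; a steadily shrinking
-- shared pool figure suggests fragmentation or undersizing.
select pool, name, bytes
from   v$sgastat
where  name = 'free memory';
```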

The same SQL statements appeared in this list on almost every scenario run.
Recommendation:
- Refer to Note:62143.1, Understanding and Tuning the Shared Pool in Oracle7, Oracle8, and Oracle8i, on Oracle Metalink.
- Use multiple buffer pools (refer to the Cache Buffer Chains section).
- Increase the SHARED_POOL_SIZE.
Note: SHARED_POOL_RESERVED_SIZE
It was noted that the portion of the shared pool set aside for large memory allocations was not big enough. This was observed on a couple of runs using the query:
select request_failures, last_failure_size, request_misses, free_space
from v$shared_pool_reserved;

REQUEST_FAILURES LAST_FAILURE_SIZE REQUEST_MISSES FREE_SPACE
---------------- ----------------- -------------- ----------
              12              4256              0  276968440

SHARED_POOL_RESERVED_SIZE was increased (REQUEST_FAILURES > 0), as noted in the Database Changes section of this document. SHARED_POOL_SIZE was also increased, as noted in the same section.

Row Cache Objects
Recommendation: Increase SHARED_POOL_SIZE.
Note: SHARED_POOL_SIZE was increased as noted in the Database Changes section of this document.

Wait Events
Wait events included:
- buffer busy waits
- db file sequential read
- db file scattered read
- enqueue
- library cache pin

Buffer Busy Waits


A buffer busy wait occurs when a session wants to access a database block in the buffer cache but cannot because the buffer is "busy". The two main causes for this wait are:
- Another session is reading the block into the buffer.
- Another session holds the buffer in a mode incompatible with our request.
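The class of block under contention (data block, segment header, undo header, etc.) helps narrow down the cause before drilling into individual files. A minimal sketch:

```sql
-- Buffer busy waits broken down by block class since instance startup.
select class, count, time
from   v$waitstat
where  count > 0
order  by time desc;
```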
   Tablespace Name   File Name
   APPLSYSD          /var/opt/oracle/disk002/OPR1/applsysd_OPR1_XX.dbf
   APPLSYSX          /var/opt/oracle/disk004/OPR1/applsysx_OPR1_XX.dbf
   ARD               /var/opt/oracle/disk001/OPR1/ard_OPR1_XX.dbf
   ARX               /var/opt/oracle/disk005/OPR1/ARX_OPR1_XX.dbf
** INVD              /var/opt/oracle/disk001/OPR1/invd_OPR1_01.dbf
** INVX              /var/opt/oracle/disk005/OPR1/INVX_OPR1_06.dbf
   RBS               *dependent on each run
   POD               /var/opt/oracle/disk003/OPR1/POD_OPR1_03.dbf
   GLD               /var/opt/oracle/disk002/OPR1/GLD_OPR1_04.dbf
   XXATTIBCD

Table 3. Tablespaces and/or Files experiencing high buffer waits
** means the file had buffer waits on all scenario runs

Drill down to locate more information on what is causing the waits. This information can be derived from executing the following query while a scenario is running:
select p1, p2 from v$session_wait where event='buffer busy waits';

The following objects were identified as having buffer busy waits:

   OWNER     Table Name                     File Name
*  APPLSYS   FND_CONCURRENT_QUEUES          /var/opt/oracle/disk001/OPR1/applsysd_OPR1_01.dbf
*  APPLSYS   FND_USER                       /var/opt/oracle/disk001/OPR1/applsysd_OPR1_01.dbf
*  APPLSYS   WF_NOTIFICATIONS               /var/opt/oracle/disk001/OPR1/applsysd_OPR1_01.dbf
*  APPLSYS   WF_ITEM_ACTIVITY_STATUSES      /var/opt/oracle/disk001/OPR1/applsysd_OPR1_01.dbf
   APPLSYS   WF_ITEM_ATTRIBUTE_VALUES       /var/opt/oracle/disk001/OPR1/applsysd_OPR1_01.dbf
   AP        AP_INVOICES_ALL                /var/opt/oracle/disk001/OPR1/apd_OPR1_01.dbf
   AR        RA_REMIT_TOS_ALL               /var/opt/oracle/disk001/OPR1/ard_OPR1_01.dbf
   AR        RA_CUSTOMER_TRX_ALL            /var/opt/oracle/disk001/OPR1/ard_OPR1_01.dbf
   ASO       ASO_PRICE_ADJUSTMENTS          /var/opt/oracle/disk001/OPR1/asod_OPR1_01.dbf

Table 4. Tables experiencing buffer busy waits
** means these objects had buffer waits on all scenario runs

   OWNER     Index Name                     File Name
*  APPLSYS   WF_ITEMS_PK                    /var/opt/oracle/disk004/OPR1/applsysx_OPR1_01.dbf
*  APPLSYS   WF_ITEM_ACTIVITY_STATUSES_PK   /var/opt/oracle/disk004/OPR1/applsysx_OPR1_01.dbf
   APPLSYS   WF_ITEM_ACTIVITY_STATUSES_N1   /var/opt/oracle/disk004/OPR1/applsysx_OPR1_01.dbf
   APPLSYS   WF_ITEM_ACTIVITY_STATUSES_N3   /var/opt/oracle/disk004/OPR1/applsysx_OPR1_01.dbf
Table 5. Indexes experiencing buffer busy waits
** means these indexes had buffer waits on all scenario runs

Recommendation: Eliminating the cause of the contention is the best option. Buffer busy waits for data blocks are often due to several processes repeatedly reading the same blocks, e.g., many sessions scanning the same index. This can be addressed by:
- Eliminating hot blocks from the application.
- Checking for repeatedly scanned, unselective indexes.
- Rebuilding segments with a higher number of freelists or freelist groups.
- Changing PCTFREE and/or PCTUSED.
- Checking for 'right-hand' indexes (indexes that many processes insert into at the same point).
- Increasing INITRANS.
- Reducing the number of rows per block.

Note: Freelists were added to some of the tables; however, due to time limitations of the PT cycle, no successful runs of 5000 users were executed to test or tune the suggested recommendation. A run of 2500 users was completed. Modifications made included:
alter table applsys.WF_ITEM_ACTIVITY_STATUSES storage(freelists 10);
alter table applsys.FND_CONCURRENT_QUEUES storage(freelists 10);
alter table applsys.FND_USER storage(freelists 10);
alter table applsys.WF_NOTIFICATIONS storage(freelists 10);
alter table applsys.WF_ITEM_ATTRIBUTE_VALUES storage(freelists 10);
alter table ap.AP_INVOICES_ALL storage(freelists 10);
alter table ar.RA_REMIT_TOS_ALL storage(freelists 10);
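The new freelist settings can be confirmed from the data dictionary. A minimal sketch:

```sql
-- Verify the freelist count on the tables that were altered.
select owner, table_name, freelists
from   dba_tables
where  table_name in ('WF_ITEM_ACTIVITY_STATUSES', 'FND_CONCURRENT_QUEUES',
                      'FND_USER', 'WF_NOTIFICATIONS',
                      'WF_ITEM_ATTRIBUTE_VALUES', 'AP_INVOICES_ALL',
                      'RA_REMIT_TOS_ALL');
```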

Db File Sequential Reads


A db file sequential read wait happens when a session is waiting for an I/O read to complete; it usually indicates single-block reads. Because I/O is a normal activity, the real interest is in unnecessary or slow I/O. Looking at the File I/O and/or Tablespace I/O sections shows which tablespaces and files are servicing the most I/O requests and gives an indication of the speed of the I/O subsystem.
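The same file-level read statistics can be drilled into outside of STATSPACK. A minimal sketch (readtim is only populated when TIMED_STATISTICS is enabled):

```sql
-- Physical reads and read time per data file, busiest files first.
select d.name, f.phyrds, f.phyblkrd, f.readtim
from   v$filestat f, v$datafile d
where  f.file# = d.file#
order  by f.phyrds desc;
```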
   Tablespace Name   File Name
   APPLSYSD          /var/opt/oracle/disk002/OPR1/applsysd_OPR1_XX.dbf
   APPLSYSX          /var/opt/oracle/disk004/OPR1/applsysx_OPR1_XX.dbf
   ARD               /var/opt/oracle/disk001/OPR1/ard_OPR1_XX.dbf
   ARX               /var/opt/oracle/disk005/OPR1/ARX_OPR1_XX.dbf
   BOM               /var/opt/oracle/disk001/OPR1/bomd_OPR1_01.dbf
   BOMX              /var/opt/oracle/disk005/OPR1/bomx_OPR1_02.dbf
*  INVD              /var/opt/oracle/disk001/OPR1/invd_OPR1_01.dbf
*  INVX              /var/opt/oracle/disk005/OPR1/INVX_OPR1_06.dbf
   SYSTEM            /var/opt/oracle/disk006/OPR1/system_OPR1_XX.dbf

Table 6. Tablespaces and/or Files experiencing high I/O
XX means most of the tablespace's data files showed high I/O

Drill down to locate more information on what is causing the waits. This information can be derived from executing the following query while a scenario is running:
select p1, p2 from v$session_wait where event = 'db file sequential read';

The following objects were identified as having db file sequential read waits
   OWNER   Table Name                 File Name
*  SYS     IDL_UB1$                   /var/opt/oracle/disk006/OPR1/system_OPR1_01.dbf
*  SYS     HIST_HEAD$                 /var/opt/oracle/disk006/OPR1/system_OPR1_04.dbf
*  AR      HZ_LOCATIONS               /var/opt/oracle/disk001/OPR1/ard_OPR1_01.dbf
*  AR      HZ_ACCOUNTS                /var/opt/oracle/disk001/OPR1/ard_OPR1_01.dbf
*  AR      HZ_CUST_ACCT_SITES_ALL     /var/opt/oracle/disk001/OPR1/ard_OPR1_01.dbf
*  AR      HZ_CUST_SITE_USES_ALL      /var/opt/oracle/disk001/OPR1/ard_OPR1_01.dbf
   AR      HZ_CUSTOMER_PROFILES       /var/opt/oracle/disk001/OPR1/ard_OPR1_01.dbf
   AR      HZ_PARTIES                 /var/opt/oracle/disk001/OPR1/ard_OPR1_01.dbf
   AR      RA_CUSTOMER_TRX_ALL        /var/opt/oracle/disk001/OPR1/ard_OPR1_01.dbf
   BOM     BOM_INVENTORY_COMPONENTS   /var/opt/oracle/disk001/OPR1/bomd_OPR1_01.dbf
   CS      CS_CONTRACTS_ALL           /var/opt/oracle/disk001/OPR1/csd_OPR1_01.dbf
   SYS     JAVASNM$                   /var/opt/oracle/disk006/OPR1/system_OPR1_02.dbf
   SYS     OBJ$                       /var/opt/oracle/disk006/OPR1/system_OPR1_03.dbf

Table 7. Tables experiencing db file sequential read waits
The IDL_* tables store PL/SQL objects.
OWNER     Index Name                     File Name
APPLSYS   FND_LOOKUP_VALUES_U1           /var/opt/oracle/disk004/OPR1/applsysx_OPR1_01.dbf
INV       MTL_SYSTEM_ITEMS_TL_U1         /var/opt/oracle/disk004/OPR1/invx_OPR1_01.dbf
OE        SO_LINE_DETAILS_INTERFACE_N2   /var/opt/oracle/disk004/OPR1/oex_OPR1_01.dbf

Table 8. Indexes experiencing db file sequential read

Recommendations:
- Some block reads are unavoidable, but db file sequential read waits are best reduced by good application design and efficient execution plans.
- How well data is clustered physically can affect I/O rates; pre-sorting or re-organizing data can help resolve some of the waits.
- Increasing DB_BLOCK_BUFFERS can also reduce waits on this event.
Note: DB_BLOCK_BUFFERS was increased; however, no successful runs were executed to test or tune the suggested recommendation.

Db File Scattered Read


A db file scattered read wait occurs when a session is waiting for a multiblock I/O to complete. This wait usually happens during full table scans or index fast full scans.

Looking at the File I/O and/or Tablespace I/O sections shows which tablespaces and files the reads are being performed against. Tablespaces and files servicing multiblock reads can be identified using the formula BLKS_READ/READS > 1. Drill down to locate more information on what is causing the waits. This information can be derived from executing the following query while a scenario is running:
select p1, p2 from v$session_wait where wait_time = 0 and event = 'db file scattered read';
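The BLKS_READ/READS > 1 test mentioned above can be computed directly from v$filestat. A minimal sketch:

```sql
-- Files whose average blocks-per-read exceeds 1, i.e., files that
-- are servicing multiblock (scattered) reads.
select d.name, f.phyrds, f.phyblkrd,
       round(f.phyblkrd / f.phyrds, 2) blks_per_read
from   v$filestat f, v$datafile d
where  f.file# = d.file#
and    f.phyrds > 0
and    f.phyblkrd > f.phyrds;
```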

The following objects were identified as having db file scattered read waits
   OWNER   Cluster Name   File Name
*  SYS     C_OBJ#         /var/opt/oracle/disk006/OPR1/system_OPR1_02.dbf

Table 9. Clusters experiencing db file scattered read waits


OWNER     Table Name                  File Name
ONT       OE_ORDER_LINES_ALL          /var/opt/oracle/disk002/OPR1/ontd_OPR1_02.dbf
BOM       BOM_EXPLOSION_TEMP          /var/opt/oracle/disk002/OPR1/BOMD_OPR1_07.dbf
APPLSYS   WF_ITEM_ACTIVITY_STATUSES   /var/opt/oracle/disk002/OPR1/APPLSYSD_OPR1_06.dbf
PO        PO_HEADERS_ALL              /var/opt/oracle/disk002/OPR1/pod_OPR1_01.dbf

Table 10. Tables experiencing db file scattered read waits

Recommendations:
- Use more selective access paths rather than repeatedly performing full table scans in the online portions of an application when a faster, more selective way to get the data exists.
- For application tables where table scanning is needed, the following may help:
  - Set DB_FILE_MULTIBLOCK_READ_COUNT to a high value.
  - CACHE tables that are frequently scanned. Refer to the Cache Buffer Chains section on hot tables, and to the section on using multiple buffer pools.

Enqueue
TX locks are acquired when a transaction initiates its first change and are held until the transaction commits or rolls back. Tables involved in TX locks:

Table Owner   Table Name
APPLSYS       FND_CONCURRENT_PROCESSES
APPLSYS       FND_CONC_RELEASE_STATES
APPLSYS       FND_USER
APPLSYS       FND_FLEX_HASH
APPLSYS       FND_CONC_RELEASE_STATES
INV           MTL_ONHAND_QUANTITIES
WSH           WSH_DELIVERY_DETAILS
XXATORCL      XXAT_EXIM_QUEUE

Table 11. Tables having TX enqueues
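For enqueue waits, the lock type and requested mode are packed into P1 of V$SESSION_WAIT. The decoding below follows the widely circulated Metalink formula, so treat it as a sketch:

```sql
-- Decode the enqueue type (e.g., TX) and requested mode from P1;
-- P2 and P3 carry the lock-specific identifiers (id1, id2).
select sid,
       chr(bitand(p1, -16777216) / 16777215) ||
       chr(bitand(p1, 16711680) / 65535)   lock_type,
       bitand(p1, 65535)                   requested_mode,
       p2 id1,
       p3 id2
from   v$session_wait
where  event = 'enqueue';
```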

Other Performance Considerations


- Periodically delete Oracle Applications log files. (Reference Doc ID: Note:124621.1, Purge Concurrent Request FNDCPPUR; Doc ID: Note:132823.1, Managing Concurrent Processing Files and Tables.)
- Schedule Gather Schema Statistics to run regularly for the Oracle Applications schemas.
- Schedule WF_PURGE to purge the workflow tables.

Summary
To investigate and verify the performance bottlenecks, the same set of transactions and concurrent requests needs to be executed on each run. The workload varied in some ways during the different runs, so it is not clear whether the database changes implemented contributed to any changes in performance. Variations added to the workload included Configurator scripts, Firedam tests, and/or Workstream concurrent processes.
