
Enhanced SQL trace utility from Oracle

Oracle Tips by Burleson Consulting

Here is a great new script from Carlos Sierra, a brilliant developer at Oracle Corporation. This script is an enhancement to the Center of Excellence (COE) script to produce a super-detailed trace of SQL execution. This new script is remarkable, and I hope that you enjoy it as much as I do. The MOSC page is reproduced below with Mr. Sierra's permission. Carlos' script enhances standard execution plan analysis by providing:

1. Enhanced Explain Plan (including execution order, indexed columns, row counts and blocks for tables);

2. Schema Object Attributes for all Tables and Indexes accessed by the SQL statement being diagnosed (including object dependencies, tables, indexes, columns, partitions, sub-partitions, synonyms, policies, triggers and constraints);

3. CBO Statistics (for levels: table, index, partition, sub-partition and column);

4. Histograms (including table, partition and sub-partition levels);

5. Space utilization and administration (for tables, indexes, partitions and sub-partitions);

6. Objects, Segments, Extents, Tablespaces and Datafiles (including current performance of datafiles);

7. RDBMS Initialization Parameters INIT.ORA (required, recommended and default for an APPS 11i database, and all other parameters SET in the INIT.ORA file);

8. Source Code on which the SQL Statement and accessed Tables depend (Trigger descriptions and bodies, View columns and text, Package specs and bodies, Procedures and Functions).

SQLTXPLAIN.SQL - Enhanced Explain Plan and related diagnostic info for one SQL statement (8.1.5-9.2.0)

SQLTXPLAIN.SQL is a SQL*Plus script that, using a small staging repository and a PL/SQL package, creates a comprehensive report gathering relevant information on ONE SQL statement (sql.txt). COE_XPLAIN.SQL performs the same function, with some limitations and restrictions. SQLTXPLAIN.SQL differs from COE_XPLAIN.SQL in the following ways:

1. SQLTXPLAIN.SQL collects more data about the objects on which the SQL Statement <sql.txt> depends. It uses V$OBJECT_DEPENDENCY to find these dependencies.

2. SQLTXPLAIN.SQL can be used by multiple users concurrently. It keeps all staging data organized by unique STATEMENT_ID, so it can handle concurrency and historical data.

3. SQLTXPLAIN.SQL creates a better organized and documented report output. Report sections that are not needed for a specific SQL Statement <sql.txt> are simply skipped in the new report, without any headers or references.

4. SQLTXPLAIN.SQL makes it possible to keep multiple versions of CBO Stats in the same table SQLT$STATTAB. Therefore, similar sets of CBO Stats can be restored into the Data Dictionary multiple times during a SQL Tuning exercise (without losing the original Stats).

5. SQLTXPLAIN.SQL is subject to future improvements and additions, while COE_XPLAIN.SQL is not.

6. SQLTXPLAIN.SQL performs better than COE_XPLAIN.SQL for the same SQL Statement <sql.txt>.

7. SQLTXPLAIN.SQL reports sub-partition details.

8. SQLTXPLAIN.SQL reports actual LOW and HIGH 'boiled' values of all columns on the tables being accessed. It also reports Histograms in a more comprehensive format.

9. SQLTXPLAIN.SQL does not report some data shown in COE_XPLAIN.SQL that was actually not used during SQL Tuning exercises, making the new report easier to understand.

10. COE_XPLAIN.SQL evolved over two years, while SQLTXPLAIN.SQL was designed from scratch.

Create Oracle SQL Profile For Tuning


Looking for how to tune a SQL statement by creating a SQL Profile? The query optimizer can sometimes produce inaccurate estimates about an attribute of a statement due to lack of information, leading to poor execution plans. Automatic SQL Tuning deals with this problem with its SQL Profiling capability. The Automatic Tuning Optimizer creates a profile of the SQL statement, called a SQL Profile, consisting of auxiliary statistics specific to that statement. During SQL Profiling, the Automatic Tuning Optimizer also uses execution history information of the SQL statement to appropriately set optimizer parameter settings, such as changing the OPTIMIZER_MODE initialization parameter setting from ALL_ROWS to FIRST_ROWS for that SQL statement. The output of this type of analysis is a recommendation to accept the SQL Profile. A SQL Profile, once accepted, is stored persistently in the data dictionary. Note that the SQL Profile is specific to a particular query. If accepted, the optimizer under normal mode uses the information in the SQL Profile in conjunction with regular database statistics when generating an execution plan. The availability of the additional information makes it possible to produce well-tuned plans for the corresponding SQL statement without requiring any change to the application code.

It is important to note that a SQL Profile does not freeze the execution plan of a SQL statement, as stored outlines do. As tables grow or indexes are created or dropped, the execution plan can change with the same SQL Profile. The information stored in it continues to be relevant even as the data distribution or access path of the corresponding statement changes. However, over a long period of time, its content can become outdated and has to be regenerated. This can be done by running Automatic SQL Tuning again on the same statement to regenerate the SQL Profile. Here is the set of SQL statements you can use to trace the execution time of an ACTIVE running SQL query that you wish to tune.

-- Get the SQL_ID from the active session SQL
SELECT b.sid, b.serial#, a.spid, b.sql_id, b.program, b.osuser, b.machine, b.TYPE,
       b.event, b.state, b.action, b.p1text, b.p2text, b.p3text,
       c.sql_text, b.logon_time
FROM   v$process a, v$session b, v$sqltext c
WHERE  a.addr = b.paddr
AND    b.sql_hash_value = c.hash_value
AND    b.STATUS = 'ACTIVE'
AND    a.spid = '11696'
ORDER BY a.spid, c.piece;

-- Trace SQL query execution time using the SQL ID
SELECT sql_id,
       child_number,
       plan_hash_value plan_hash,
       executions execs,
       (elapsed_time/1000000)/decode(nvl(executions,0),0,1,executions) avg_etime,
       buffer_gets/decode(nvl(executions,0),0,1,executions) avg_lio,
       sql_text
FROM   v$sql s
WHERE  s.sql_id = '4n01r8z5hgfru';

-- Append the /*+ gather_plan_statistics */ hint to the SQL statement and execute it
SELECT /*+ gather_plan_statistics */ sysdate ... (SQL Statement)

Note: The hint must be written as /*+ with no space between the asterisk and the plus sign, otherwise Oracle treats it as an ordinary comment; the standard hint name is gather_plan_statistics. Executing the SQL statement with this hint subsequently provides the detailed plan execution time in the next query below.
-- Get the detailed execution plan using the SQL ID
SELECT plan_table_output
FROM   TABLE(dbms_xplan.display_cursor('dtdqt19kfv6yx', NULL, 'ALLSTATS LAST'));

-- Use longops to check the estimated runtime
SELECT sid,
       serial#,
       opname,
       target,
       sofar,
       totalwork,
       units,
       start_time,
       last_update_time,
       time_remaining "REMAIN SEC",
       round(time_remaining/60,2) "REMAIN MINS",
       elapsed_seconds "ELAPSED SEC",
       round(elapsed_seconds/60,2) "ELAPSED MINS",
       round((time_remaining+elapsed_seconds)/60,2) "TOTAL MINS",
       message "TIME"
FROM   v$session_longops
WHERE  sofar <> totalwork
AND    time_remaining <> 0;

Creating a SQL Profile Using DBMS_SQLTUNE


SQL Profiles (created through the SQL Tuning Advisor) were introduced in Oracle 10g. The feature tunes queries by gathering information about data distribution, relations between the columns and joined tables, and other useful optimizer information. It provides recommendations which, when implemented, are associated with the query and are used by the optimizer at parse time. The following will create a SQL Profile:

SQL> EXEC DBMS_SQLTUNE.IMPORT_SQL_PROFILE(sql_text => 'FULL QUERY TEXT', profile => sqlprof_attr('HINT SPECIFICATION WITH FULL OBJECT ALIASES'), name => 'PROFILE NAME', force_match => TRUE/FALSE);

FULL QUERY TEXT - The value can be obtained from the SQL_FULLTEXT column of the GV$SQLAREA view.

SQL> SELECT SQL_FULLTEXT FROM GV$SQLAREA WHERE sql_id = '4n01r8z5hgfru';

HINT SPECIFICATION WITH FULL OBJECT ALIASES - The hint specification can be obtained from the DBA_HIST_SQL_PLAN view. For example:

SQL> SELECT extractvalue(VALUE(d), '/hint') AS outline_hints
     FROM   xmltable('/*/outline_data/hint' passing (
              SELECT xmltype(other_xml) AS xmlval
              FROM   dba_hist_sql_plan
              WHERE  sql_id = '4n01r8z5hgfru'
              AND    plan_hash_value = '82930460'
              AND    other_xml IS NOT NULL)) d;

You can also generate the 10053 trace file and look for the hint specification between BEGIN_OUTLINE_DATA and END_OUTLINE_DATA. Download SQLTXPLAIN.sql from Oracle Metalink and run it to get the 10053 trace file.

/*+

BEGIN_OUTLINE_DATA
IGNORE_OPTIM_EMBEDDED_HINTS
OPTIMIZER_FEATURES_ENABLE('10.2.0.3')
OPT_PARAM('_b_tree_bitmap_plans' 'false')
OPT_PARAM('_fast_full_scan_enabled' 'false')
ALL_ROWS
OUTLINE_LEAF(@"SEL$335DD26A")
MERGE(@"SEL$3")
OUTLINE_LEAF(@"SEL$7286615E")
MERGE(@"SEL$5")
OUTLINE_LEAF(@"SEL$1")
......
END_OUTLINE_DATA
*/

FORCE_MATCH is really the main reason for using SQL Profiles: when set to TRUE it ignores literals in otherwise identical queries and applies the profile to them (just as cursor_sharing=force does for the entire database). For example, when force_match is set to TRUE, a.segment1 = 1234 is treated as a.segment1 = :b1.
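How force matching normalizes literals can be sketched with DBMS_SQLTUNE.SQLTEXT_TO_SIGNATURE, which returns the signature Oracle computes for a statement (the emp statements here are illustrative):

DECLARE
  sig1 NUMBER;
  sig2 NUMBER;
BEGIN
  -- TRUE as the 2nd parameter requests the force-matching signature
  sig1 := DBMS_SQLTUNE.SQLTEXT_TO_SIGNATURE('select * from emp where empno = 1234', TRUE);
  sig2 := DBMS_SQLTUNE.SQLTEXT_TO_SIGNATURE('select * from emp where empno = 5678', TRUE);
  DBMS_OUTPUT.PUT_LINE(sig1 || ' vs ' || sig2);
END;
/

Both calls should return the same signature, which is why a profile created with force_match => TRUE covers both statements.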

To create a SQL Profile a user must have the ADVISOR role, the CREATE ANY SQL PROFILE, ALTER ANY SQL PROFILE and DROP ANY SQL PROFILE privileges, and the EXECUTE privilege on DBMS_SQLTUNE:

SQL> GRANT EXECUTE ON SYS.DBMS_SQLTUNE TO <user>;
SQL> GRANT ADVISOR TO <user>;
SQL> GRANT CREATE ANY SQL PROFILE TO <user>;
SQL> GRANT ALTER ANY SQL PROFILE TO <user>;
SQL> GRANT DROP ANY SQL PROFILE TO <user>;
Examples of creating an Oracle SQL Profile:

SQL> BEGIN
       DBMS_SQLTUNE.IMPORT_SQL_PROFILE(
         sql_text    => 'select * from emp',
         profile     => sqlprof_attr('ALL_ROWS','IGNORE_OPTIM_EMBEDDED_HINTS', ......),
         category    => 'DEFAULT',
         name        => 'change_emp',
         force_match => TRUE);
     END;
     /

DECLARE
  cl_sql_text CLOB;
BEGIN
  SELECT sql_fulltext
  INTO   cl_sql_text
  FROM   gv$sqlarea
  WHERE  sql_id = '4n01r8z5hgfru';

  DBMS_SQLTUNE.IMPORT_SQL_PROFILE(
    sql_text    => cl_sql_text,
    profile     => sqlprof_attr('HINT SPECIFICATION WITH FULL OBJECT ALIASES'),
    name        => 'PROFILE NAME',
    force_match => TRUE);
END;
/

DECLARE

  cl_sql_text CLOB;
  hint_spec   sys.sqlprof_attr;
BEGIN
  SELECT sql_fulltext
  INTO   cl_sql_text
  FROM   gv$sqlarea
  WHERE  sql_id = 'gtwyx63711jp1';

  SELECT extractvalue(VALUE(d), '/hint') AS outline_hints
  BULK COLLECT INTO hint_spec
  FROM xmltable('/*/outline_data/hint' passing (
         SELECT xmltype(other_xml) AS xmlval
         FROM   dba_hist_sql_plan
         WHERE  sql_id = 'gtwyx63711jp1'
         AND    plan_hash_value = '82930460'
         AND    other_xml IS NOT NULL)) d;

  DBMS_SQLTUNE.IMPORT_SQL_PROFILE(
    sql_text    => cl_sql_text,
    profile     => hint_spec,
    name        => 'PROFILE NAME',
    force_match => TRUE);
END;
/

Note: You may use the v$sql_plan view if no outline hints are available in dba_hist_sql_plan. Once you have finished creating the Oracle SQL Profile, check the database for the new SQL Profile.

SQL> SELECT name, created FROM dba_sql_profiles ORDER BY created DESC;

SQL> SELECT sql_attr.attr_val outline_hints
     FROM   dba_sql_profiles sql_profiles, sys.SQLPROF$ATTR sql_attr
     WHERE  sql_profiles.signature = sql_attr.signature
     AND    sql_profiles.name = 'PROFILE NAME'
     ORDER BY sql_attr.attr# ASC;

Dropping a SQL Profile Using DBMS_SQLTUNE


You can drop a SQL Profile with the DROP_SQL_PROFILE procedure. For example:

BEGIN
  DBMS_SQLTUNE.DROP_SQL_PROFILE(name => 'PROFILE NAME');
END;
/

In this example, PROFILE NAME is the name of the SQL Profile you want to drop. You can also specify whether to ignore errors raised if the name does not exist. For this example, the default value of FALSE is accepted.
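If you prefer the call not to fail when the profile is missing, the procedure also accepts an ignore parameter; a minimal sketch, reusing the placeholder name from above:

BEGIN
  -- ignore => TRUE suppresses the error raised when no profile with this name exists
  DBMS_SQLTUNE.DROP_SQL_PROFILE(name => 'PROFILE NAME', ignore => TRUE);
END;
/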

Oracle SQL Profile Tuning Command


Oracle SQL Profile helps generate a better execution plan than normal optimization. Since Oracle 10g provides the ability to track SQL execution metrics with the new dba_hist tables, most notably dba_hist_sqlstat and dba_hist_sql_plan, it is now possible to track SQL behavior over time and ensure that all SQL is using an optimal execution plan.

When SQL statements are executed by the Oracle database, the query optimizer is used to generate the execution plans of the SQL statements. The query optimizer operates in two modes: a normal mode and a tuning mode. The query optimizer can sometimes produce inaccurate estimates about an attribute of a statement due to lack of information, leading to poor execution plans. Traditionally, users have corrected this problem by manually adding hints to the application code to guide the optimizer into making correct decisions. For packaged applications, changing application code is not an option, and the only alternative available is to log a bug with the application vendor and wait for a fix.

Automatic SQL tuning deals with this problem with its SQL profiling capability. The Automatic Tuning Optimizer creates a profile of the SQL statement, called a SQL Profile, consisting of auxiliary statistics specific to that statement. The query optimizer under normal mode makes estimates about cardinality, selectivity, and cost that can sometimes be off by a significant amount, resulting in poor execution plans. A SQL Profile addresses this problem by collecting additional information using sampling and partial execution techniques to verify and, if necessary, adjust these estimates. During SQL Profiling, the Automatic Tuning Optimizer also uses execution history information of the SQL statement to appropriately set optimizer parameter settings, such as changing the OPTIMIZER_MODE initialization parameter setting from ALL_ROWS to FIRST_ROWS for that SQL statement.
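As a sketch of that workflow, the profile recommendation can be produced and accepted with the DBMS_SQLTUNE tuning-task procedures; the task name is illustrative, and the SQL_ID is reused from the examples above:

DECLARE
  l_task VARCHAR2(64);
BEGIN
  -- create and run a tuning task for one cursor in the shared pool
  l_task := DBMS_SQLTUNE.CREATE_TUNING_TASK(
              sql_id     => '4n01r8z5hgfru',
              scope      => 'COMPREHENSIVE',
              time_limit => 60,
              task_name  => 'tune_4n01r8z5hgfru');
  DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => l_task);
END;
/

-- review the findings, then accept the profile if one is recommended
SQL> SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK('tune_4n01r8z5hgfru') FROM dual;
SQL> EXEC DBMS_SQLTUNE.ACCEPT_SQL_PROFILE(task_name => 'tune_4n01r8z5hgfru', name => 'profile_4n01r8z5hgfru');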

REPRODUCE AN EXECUTION PLAN FROM ONE SYSTEM INTO ANOTHER

You need two similar systems: SOURCE and TARGET. SQLT must be installed in both. SOURCE and TARGET must have the same schema objects (e.g. PROD, TEST, DEV, QA). Required files are generated in SOURCE when SQLT is executed. Steps: 1. Import the CBO Stats generated by SQLT in SOURCE into the staging table in TARGET, connecting as SQLTXPLAIN.

UNIX> imp SQLTXPLAIN/<pwd> tables='sqlt$_stattab' file=sqlt_s3407.dmp ignore=y

2. Restore the CBO Stats from the staging table into the data dictionary, connecting as SQLTXPLAIN, SYSTEM, SYSDBA or the application user.

SQL> START sqlt/utl/sqltimp.SQL s3407_prd1_db NULL


3. Review and set the optimizer environment, connected as the application user.

SQL> START sqlt_s3407_prd1_db_setenv.SQL


4. Use the SQLT XECUTE or XPLAIN methods, connected as the application user. Notes: 1. SOURCE and TARGET should be similar and contain the same schema objects. 2. The RDBMS release of TARGET should be equal to or greater than that of SOURCE.

CREATE A SQLT TEST CASE


You need two similar systems: SOURCE and TARGET. SQLT must be installed in both. SOURCE and TARGET could even be on the same server and database, under different schemas. Required files are generated in SOURCE when SQLT is executed. Steps: 1. Review the metadata script generated in SOURCE and execute it in TARGET, connected as SYSTEM or SYSDBA.

SQL> START sqlt_s3407_prd1_db_metadata.SQL


In most cases you want to consolidate all schema objects into one test case user (for example TC3407). 2. Import the CBO Stats generated in SOURCE into the staging table in TARGET, connecting as SQLTXPLAIN.

UNIX> imp SQLTXPLAIN/<pwd> tables='sqlt$_stattab' file=sqlt_s3407.dmp ignore=y

3. Restore the CBO Stats from the staging table into the data dictionary, connecting as SQLTXPLAIN, SYSTEM, SYSDBA or the application user (for example TC3407):

SQL> START sqlt/utl/sqltimp.SQL s3407_prd1_db TC3407


If you decided in metadata step 1 to create schema objects under their original owner(s), use the syntax below instead. Notice the NULL as the 2nd parameter.

SQL> START sqlt/utl/sqltimp.SQL s3407_prd1_db NULL


4. Review and set the optimizer environment, connected as the application user.

SQL> START sqlt_s3407_prd1_db_setenv.SQL


5. Use the SQLT XECUTE or XPLAIN methods to reproduce the desired plan, connected as the application user. Notes: 1. If SOURCE and TARGET are on the same system (different schemas), then step 2 is redundant. 2. SOURCE and TARGET should be similar in all senses. 3. The RDBMS release of TARGET should be equal to or greater than that of SOURCE.

CREATE A STAND-ALONE TEST CASE BASED ON A SQLT TEST CASE


You first need to create a SQLT Test Case following the instructions above.

The instructions below apply when schema objects were consolidated into one TC user, TC3407. If the method used in the SQLT Test Case was XPLAIN, you will need to modify the script containing the one SQL so it can be executed stand-alone in step 4 (you may need to replace binds). Steps: 1. Export the CBO Stats captured automatically during step 5 of the SQLT Test Case, connecting as TC3407.

UNIX> exp TC3407/TC3407 tables=CBO_STAT_TAB_4TC file=STATTAB.dmp \
      statistics=none log=STATTAB.log

2. Write stand-alone Test Case instructions into a readme.txt file. Suggested content follows:

-- create test case user TC3407 and schema objects:

UNIX> sqlplus / AS sysdba
SQL> START sqlt_s3407_prd1_db_metadata.SQL;

-- import and restore cbo stats:

UNIX> imp TC3407/TC3407 TABLES=CBO_STAT_TAB_4TC file=STATTAB.dmp IGNORE=y
UNIX> sqlplus TC3407/TC3407
SQL> EXEC DBMS_STATS.IMPORT_SCHEMA_STATS('TC3407', 'CBO_STAT_TAB_4TC');

-- set cbo environment and generate 10053

UNIX> sqlplus TC3407/TC3407
SQL> START sqlt_s3407_prd1_db_setenv.SQL;
SQL> ALTER SESSION SET TRACEFILE_IDENTIFIER = "TC3407_10053";
SQL> ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT FOREVER, LEVEL 1';
SQL> DEF unique_id = TC3407
SQL> START sqlt_s3407_prd1_db_tc_script.SQL;


3. Create a TC3407 directory and place into it the following files:

o CBO Stats dump file from step 1: STATTAB.dmp
o Instructions file from step 2: readme.txt
o Metadata script from SQLT TC: sqlt_s3407_prd1_db_metadata.SQL
o CBO Environment SET script from SQLT TC: sqlt_s3407_prd1_db_setenv.SQL
o Script with the one SQL in it, from SQLT TC: sqlt_s3407_prd1_db_tc_script.SQL

sqlt_s3407_prd1_db_tc_script.sql must be executable stand-alone (binds declared and assigned, or replaced by literals). 4. Fully test your stand-alone TC following your own readme.txt created in step 2. 5. ZIP the stand-alone TC directory as TC3407.zip. Notes: 1. Use the readme.txt file that you followed when you created the SQLT Test Case.

GATHER A CBO STATISTICS BASELINE


If you suspect poor schema object CBO statistics, use the commands below to generate a new baseline. Then, test your SQL again (using any SQLT method).

Connect as SYSTEM, SYSDBA, or the application user. Baseline using FND_STATS:

SQL> EXEC FND_STATS.GATHER_TABLE_STATS(ownname => '"GL"', tabname => '"GL_JE_HEADERS"', percent => 100, cascade => TRUE);
SQL> EXEC FND_STATS.GATHER_TABLE_STATS(ownname => '"GL"', tabname => '"GL_JE_LINES"', percent => 10, cascade => TRUE);
SQL> EXEC FND_STATS.GATHER_TABLE_STATS(ownname => '"GL"', tabname => '"GL_JE_SOURCES_TL"', percent => 100, cascade => TRUE);
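FND_STATS is specific to Oracle Applications; on a non-APPS database the same baseline can be gathered with DBMS_STATS (a sketch, carrying over the GL objects from the example above):

SQL> EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'GL', tabname => 'GL_JE_HEADERS', estimate_percent => 100, cascade => TRUE);
SQL> EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'GL', tabname => 'GL_JE_LINES', estimate_percent => 10, cascade => TRUE);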
TRANSFER A STORED OUTLINE

If your SQL uses a Stored Outline, you can export the SO from SOURCE

and import it into TARGET. Steps: 1. Export the Stored Outline from SOURCE, connecting as OUTLN, SYSTEM or SYSDBA.

UNIX> exp system/<pwd> tables=outln.ol% file=sqlt_s3407_outln.dmp \
      statistics=none query=\"WHERE ol_name = '<stored_outline_name>'\" \
      log=sqlt_exp_outln.log

2. Import the Stored Outline into TARGET, connecting as OUTLN, SYSTEM or SYSDBA.

UNIX> imp system/<pwd> file=sqlt_s3407_outln.dmp fromuser=outln touser=outln ignore=y

Notes: 1. If TARGET already contains a Stored Outline for your SQL, find its name and drop it before the import step. Connect as OUTLN, SYSTEM or SYSDBA to drop an outline.

SQL> DROP OUTLINE <stored_outline_name>;

TRANSFER A SQL PROFILE


To transfer a SQL Profile you need to pack and export it from SOURCE, then import and unpack it into TARGET. Connect as SQLTXPLAIN, SYSTEM or SYSDBA. Steps: 1. Create the staging table in SOURCE.

SQL> EXEC DBMS_SQLTUNE.CREATE_STGTAB_SQLPROF(table_name => 'STGTAB_SQLPROF', schema_name => USER);


2. Pack SQL Profile into staging table in SOURCE

SQL> EXEC DBMS_SQLTUNE.PACK_STGTAB_SQLPROF(profile_name => '<sql_profile_name>', profile_category => 'DEFAULT', staging_table_name => 'STGTAB_SQLPROF', staging_schema_owner => USER);


3. Export staging table from SOURCE

UNIX> exp <usr>/<pwd> tables=stgtab_sqlprof file=sqlprof.dmp \
      statistics=none log=sqlprof_exp.log

4. Import the staging table into TARGET

UNIX> imp <usr>/<pwd> tables=stgtab_sqlprof file=sqlprof.dmp ignore=y log=sqlprof_imp.log

5. Unpack the SQL Profile from the staging table in TARGET

SQL> EXEC DBMS_SQLTUNE.UNPACK_STGTAB_SQLPROF(profile_name => '<sql_profile_name>', profile_category => 'DEFAULT', replace => TRUE, staging_table_name => 'STGTAB_SQLPROF', staging_schema_owner => USER);

Notes: 1. Connect with the same user in both SOURCE and TARGET. 2. The user must have the CREATE ANY SQL PROFILE privilege and the SELECT privilege on the staging table.

CREATE A STORED OUTLINE


If you want to create a Stored Outline for your SQL, execute these commands connected as the application user:

SQL> ALTER SESSION SET create_stored_outlines = TRUE;
SQL> EXEC DBMS_OUTLN.CREATE_OUTLINE(hash_value => 644832611, child_number => 0);
SQL> ALTER SESSION SET create_stored_outlines = FALSE;

Notes: 1. The user must have the CREATE ANY OUTLINE grant or the DBA role. 2. Set your optimizer environment first (you may want to use the setenv script).

SQL> SELECT * FROM DBA_OUTLINES WHERE signature = '914E567776565E496F27F2C5B3C0F9D2';

EXTRACT A PLAN FROM MEMORY OR AWR AND PIN IT TO A SQL IN SAME OR DIFFERENT SYSTEM
SQLT XTRACT and XECUTE record into the SQLT repository all known plans for one SQL. Any of these plans can be extracted and then associated with that SQL in the same SOURCE or a similar TARGET system by using a manual custom SQL Profile. Connect as SQLTXPLAIN, SYSDBA, or the application user. Steps: 1. Execute the sqltprofile utility in SOURCE, connecting as SQLTXPLAIN, SYSDBA, or the application user.

SQL> START sqlt/utl/sqltprofile.SQL s3407_prd1_db <plan_hash_value>


Both parameters can be entered inline; otherwise the script will list valid values. 2. Execute the generated script in the SOURCE or TARGET system, connecting as SQLTXPLAIN, SYSTEM or SYSDBA.

SQL> START sqlt_s3407_prd1_dbp<plan_hash_value>_sqlprof.SQL;


Notes: 1. The generated script calls DBMS_SQLTUNE.IMPORT_SQL_PROFILE, which generates a manual custom SQL Profile based on the hints in plan_table.other_xml. 2. If SQLT is not installed, use sqlt/utl/coe_xfr_sql_profile.sql instead.

Migrate Oracle SQL Profile


Looking for steps on how to migrate an Oracle SQL Profile? SQL Profiles can be exported and imported just like stored outlines, but with a different procedure. The migration procedure differs between Oracle 10g Release 1 and Oracle 10g Release 2. You use DBMS_SQLTUNE subprograms to move SQL Profiles and SQL Tuning Sets from one system to another using a common programmatic model. In both cases, you create a staging table on the source system and populate that staging table with the relevant data.

TRANSFER A SQL PROFILE


To transfer a SQL Profile you need to pack and export it from SOURCE, then import and unpack it into TARGET. Connect as SQLTXPLAIN, SYSTEM or SYSDBA. 1. Create the staging table in SOURCE.

SQL> EXEC DBMS_SQLTUNE.CREATE_STGTAB_SQLPROF(table_name => 'STGTAB_SQLPROF', schema_name => USER);


2. Pack SQL Profile into staging table in SOURCE

SQL> EXEC DBMS_SQLTUNE.PACK_STGTAB_SQLPROF(profile_name => '<sql_profile_name>', profile_category => 'DEFAULT', staging_table_name => 'STGTAB_SQLPROF', staging_schema_owner => USER);


3. Export staging table from SOURCE

UNIX> exp <usr>/<pwd> tables=stgtab_sqlprof file=sqlprof.dmp \
      statistics=none log=sqlprof_exp.log

4. Import the staging table into TARGET

UNIX> imp <usr>/<pwd> tables=stgtab_sqlprof file=sqlprof.dmp ignore=y log=sqlprof_imp.log

5. Grant the necessary privileges to the user to execute DBMS_SQLTUNE

SQL> GRANT EXECUTE ON SYS.DBMS_SQLTUNE TO <user>;
SQL> GRANT ADVISOR TO <user>;
SQL> GRANT CREATE ANY SQL PROFILE TO <user>;
SQL> GRANT ALTER ANY SQL PROFILE TO <user>;
SQL> GRANT DROP ANY SQL PROFILE TO <user>;

6. Unpack SQL Profile from staging table in TARGET

SQL> EXEC DBMS_SQLTUNE.UNPACK_STGTAB_SQLPROF(profile_name => '<sql_profile_name>', profile_category => 'DEFAULT', replace => TRUE, staging_table_name => 'STGTAB_SQLPROF', staging_schema_owner => USER);

Notes: 1. Connect with the same user in both SOURCE and TARGET. 2. The user must have the CREATE ANY SQL PROFILE privilege and the SELECT privilege on the staging table. Once you have finished migrating the Oracle SQL Profile, check the TARGET system for the newly created profile.

SQL> SELECT name, created FROM dba_sql_profiles ORDER BY created DESC;

Oracle Database Performance Tuning


Why and when should one tune? One of the biggest responsibilities of a DBA is to ensure that the Oracle database is tuned properly. The Oracle RDBMS is highly tunable and allows the database to be monitored and adjusted to increase its performance. One should do performance tuning for the following reasons: * The speed of computing might be wasting valuable human time (users waiting for response); * Enable your system to keep up with the speed at which business is conducted; and * Optimize hardware usage to save money (companies are spending millions on hardware).

Where should the tuning effort be directed?


Consider the following areas for tuning. The order in which steps are listed needs to be maintained to prevent tuning side effects. For example, it is no good increasing the buffer cache if you can reduce I/O by rewriting a SQL statement.

Database Design:
Poor system performance usually results from a poor database design. One should generally normalize to the 3NF. Selective denormalization can provide valuable performance improvements. When designing, always keep the data access path in mind. Also look at

proper data partitioning, data replication, aggregation tables for decision support systems, etc.

Application Tuning:
Experience shows that approximately 80% of all Oracle system performance problems are resolved by coding optimal SQL. Also consider proper scheduling of batch tasks after peak working hours.

Memory Tuning:
Properly size your database buffers (shared_pool, buffer cache, log buffer, etc) by looking at your wait events, buffer hit ratios, system swapping and paging, etc. You may also want to pin large objects into memory to prevent frequent reloads.
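Pinning is done with the DBMS_SHARED_POOL package (installed by dbmspool.sql); a minimal sketch, using SYS.STANDARD as an illustrative large package:

SQL> EXEC DBMS_SHARED_POOL.KEEP('SYS.STANDARD');
SQL> SELECT owner, name, type FROM v$db_object_cache WHERE kept = 'YES';

The second query simply verifies which objects are currently pinned.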

Disk I/O Tuning:


Database files need to be properly sized and placed to provide maximum disk subsystem throughput. Also look for frequent disk sorts, full table scans, missing indexes, row chaining, data fragmentation, etc.

Eliminate Database Contention:


Study database locks, latches and wait events carefully and eliminate where possible.

Tune the Operating System:


Monitor and tune operating system CPU, I/O and memory utilization. For more information, read the related Oracle FAQ dealing with your specific operating system.

What tools/utilities does Oracle provide to assist with performance tuning?


Oracle provides the following tools/utilities to assist with performance monitoring and tuning:

* TKProf
* Statspack
* Oracle Enterprise Manager Tuning Pack (cost option)
* Old UTLBSTAT.SQL and UTLESTAT.SQL - begin and end stats monitoring
* ADDM (Automated Database Diagnostics Monitor) introduced in Oracle 10g

When is cost based optimization triggered?


It's important to have statistics on all tables for the CBO (Cost Based Optimizer) to work correctly. If one table involved in a statement does not have statistics, Oracle has to revert to rule-based optimization for that statement. So you really want all tables to have statistics right away; it won't help much to just have the larger tables analyzed. A quick check for tables with missing statistics is shown after the list below.

Generally, the CBO can change the execution plan when you: * Change statistics of objects by doing an ANALYZE;

* Change some initialization parameters (for example: hash_join_enabled, sort_area_size, db_file_multiblock_read_count).
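A minimal sketch of that check, assuming access to the DBA_TABLES dictionary view:

SQL> SELECT owner, table_name
     FROM   dba_tables
     WHERE  last_analyzed IS NULL
     AND    owner NOT IN ('SYS', 'SYSTEM');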

How can one optimize %XYZ% queries?


It is possible to improve %XYZ% (wildcard search) queries by forcing the optimizer to scan all the entries from the index instead of the table. This can be done by specifying hints; see the sketch below. If the index is physically smaller than the table (which is usually the case), it will take less time to scan the entire index than to scan the entire table.
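A minimal sketch of the technique, assuming an emp table and an index on ename (both names are illustrative):

SQL> CREATE INDEX emp_ename_idx ON emp (ename);
SQL> SELECT /*+ INDEX_FFS(e emp_ename_idx) */ ename
     FROM   emp e
     WHERE  ename LIKE '%SON%';

Because the query references only the indexed column, the INDEX_FFS hint lets the optimizer satisfy it with an index fast full scan instead of a full table scan.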

Where can one find I/O statistics per table?


The STATSPACK and UTLESTAT reports show I/O per tablespace. However, they do not show which tables in the tablespace have the most I/O operations. The $ORACLE_HOME/rdbms/admin/catio.sql script creates a sample_io procedure and table to gather the required information. After executing the procedure, one can do a simple SELECT * FROM io_per_object; to extract the required information. For more details, look at the header comments in the catio.sql script.

My query was fine last week and now it is slow. Why?


The likely cause is that the execution plan has changed. Generate a current explain plan of the offending query and compare it to a previous one that was taken when the query was performing well. Usually the previous plan is not available. Some factors that can cause a plan to change are:

* Which tables are currently analyzed? Were they previously analyzed? (i.e. was the query using the RBO and is it now using the CBO?)
* Has the OPTIMIZER_MODE been changed in INIT.ORA?
* Has the DEGREE of parallelism been defined/changed on any table?
* Have the tables been re-analyzed? Were the tables analyzed using estimate or compute? If estimate, what percentage was used?
* Have the statistics changed?
* Has the INIT.ORA parameter SORT_AREA_SIZE been changed?
* Has the SPFILE/INIT.ORA parameter DB_FILE_MULTIBLOCK_READ_COUNT been changed?
* Have any other INIT.ORA parameters been changed?

What do you think the plan should be? Run the query with hints to see if this produces the required performance.

Why is Oracle not using the index?


This problem normally only arises when the query plan is being generated by the Cost Based Optimizer (CBO). The usual cause is that the CBO calculates that executing a Full Table Scan would be faster than accessing the table via the index. Fundamental things that can be checked are:

* USER_TAB_COLUMNS.NUM_DISTINCT - This column defines the number of distinct values the column holds.
* USER_TABLES.NUM_ROWS - If NUM_DISTINCT = NUM_ROWS then using an index would be preferable to doing a FULL TABLE SCAN. As NUM_DISTINCT decreases, the cost of using an index increases, making the index less desirable.
* USER_INDEXES.CLUSTERING_FACTOR - This defines how ordered the rows are in the index. If CLUSTERING_FACTOR approaches the number of blocks in the table, the rows are ordered. If it approaches the number of rows in the table, the rows are randomly ordered. In such a case, it is unlikely that index entries in the same leaf block will point to rows in the same data blocks.
* Decrease the INIT.ORA parameter DB_FILE_MULTIBLOCK_READ_COUNT - A higher value makes the cost of a FULL TABLE SCAN cheaper.

Remember that you MUST supply the leading column of an index for the index to be used (unless you use a FAST FULL SCAN or SKIP SCANNING). There are many other factors that affect the cost, but sometimes the above can help to show why an index is not being used by the CBO. If after checking the above you still feel that the query should be using an index, try specifying an index hint. Obtain an explain plan of the query either using TKPROF with TIMED_STATISTICS, so that one can see the CPU utilization, or with AUTOTRACE to see the statistics. Compare this to the explain plan when not using an index.
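A quick sketch of checking the clustering factor against the table statistics (the table name is illustrative):

SQL> SELECT i.index_name, i.clustering_factor, t.blocks, t.num_rows
     FROM   user_indexes i, user_tables t
     WHERE  i.table_name = t.table_name
     AND    t.table_name = 'EMP';

A clustering factor close to t.blocks favors index range scans; one close to t.num_rows makes the index far less attractive to the CBO.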

When should one rebuild an index?


You can run the ANALYZE INDEX ... VALIDATE STRUCTURE command on the affected indexes; each invocation of this command creates a single row in the INDEX_STATS view. This row is overwritten by the next ANALYZE INDEX command, so copy the contents of the view into a local table after each ANALYZE. The 'badness' of the index can then be judged by the ratio of DEL_LF_ROWS to LF_ROWS.
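A minimal sketch of that check (the index name is illustrative):

SQL> ANALYZE INDEX emp_ename_idx VALIDATE STRUCTURE;
SQL> SELECT name, del_lf_rows, lf_rows,
            round(del_lf_rows/decode(lf_rows,0,1,lf_rows)*100,2) pct_deleted
     FROM   index_stats;

A commonly used rule of thumb treats an index with more than about 20% deleted leaf rows as a rebuild candidate.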

How does one tune Oracle Wait event XYZ?


Here are some of the wait events from V$SESSION_WAIT and V$SYSTEM_EVENT views:

* db file sequential read: Tune SQL to do less I/O. Make sure all objects are analyzed. Redistribute I/O across disks.
* buffer busy waits: Increase DB_CACHE_SIZE (DB_BLOCK_BUFFERS prior to 9i) / analyze contention from SYS.V$BH.
* log buffer space: Increase the LOG_BUFFER parameter or move log files to faster disks.

What is the difference between DB File Sequential and Scattered Reads?

Both db file sequential read and db file scattered read events signify time waited for I/O read requests to complete. Time is reported in 100ths of a second for Oracle 8i releases and below, and in 1000ths of a second for Oracle 9i and above. Most people confuse these events with each other as they think of how data is read from disk. Instead they should think of how data is read into the SGA buffer cache.

db file sequential read:


A sequential read operation reads data into contiguous memory (usually a single-block read with p3=1, but can be multiple blocks). Single block I/Os are usually the result of using indexes. This event is also used for rebuilding the controlfile and reading datafile headers (P2=1). In general, this event is indicative of disk contention on index reads.

db file scattered read:


Similar to db file sequential reads, except that the session is reading multiple data blocks and scattering them into different, discontinuous buffers in the SGA. This statistic normally indicates disk contention on full table scans. Rarely, data from full table scans can fit into a contiguous buffer area; these waits would then show up as sequential reads instead of scattered reads.
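A quick way to see how much time the instance has spent on these two events (cumulative since startup) is to query V$SYSTEM_EVENT:

SQL> SELECT event, total_waits, time_waited, average_wait
     FROM   v$system_event
     WHERE  event IN ('db file sequential read', 'db file scattered read');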
