
PL/SQL PERFORMANCE TUNING

After completing this lesson, you should be able to do the following:
Tune PL/SQL code
Identify and tune memory issues
Recognize network issues

Lesson Objective
It is important to build applications that perform the necessary processing as quickly and efficiently as possible. To build reliable, robust applications, developers need to be aware of several key concepts. In this lesson, you learn about advanced PL/SQL techniques that help you develop programs that are tuned, efficient, and reliable. This lesson also discusses guidelines for tuning PL/SQL code and using memory effectively.

Tune your PL/SQL code to meet performance standards:
Use the RETURNING clause
Understand bulk binds
Rephrase conditional statements
Identify data type and constraint issues
Use dynamic SQL
Compare SQL to PL/SQL
Trace PL/SQL code

Tuning PL/SQL Code
By tuning your PL/SQL code, you can tailor its performance to best meet your needs. In the following sections you learn about some of the main PL/SQL tuning issues that can improve the performance of your PL/SQL applications. These issues are related to:
The RETURNING clause
Bulk binds
Conditional statements
Data type and constraint issues
SQL compared to PL/SQL
PL/SQL code tracing

Instructor Note
Tuning SQL statements is covered in the SQL Statement Tuning course.

Instructor Note (for page 4)
You can demonstrate this code with the 7_4n.sql script file. The script shows how the RETURNING clause can save you an extra SQL statement. Alternatively, demonstrate the script file ret_sal.sql, which is very similar to the code shown in 7_4n.sql but also contains the equivalent processing without the RETURNING clause. The RETURNING clause is available with version 8.0 and later.

Include the RETURNING clause with an INSERT, UPDATE, or DELETE statement to return column values:

PROCEDURE update_salary (p_emp_id NUMBER) IS
   v_name    VARCHAR2(15);
   v_new_sal NUMBER;
BEGIN
   UPDATE emp
   SET    sal = sal * 1.1
   WHERE  empno = p_emp_id
   RETURNING ename, sal INTO v_name, v_new_sal;
END update_salary;

The RETURNING Clause


Often, applications need information about the row affected by a SQL operation, for example, to generate a report or take a subsequent action. The INSERT, UPDATE, and DELETE statements can include a RETURNING clause, which returns column values from the affected row into PL/SQL variables or host variables. This eliminates the need to SELECT the row after an insert or update, or before a delete. As a result, fewer network round trips, less server CPU time, fewer cursors, and less server memory are required.

In the following example, you update the salary of an employee and at the same time retrieve the employee's new salary into a SQL*Plus environment variable.

CREATE OR REPLACE PROCEDURE raise_salary
   (p_in_id IN  emp.empno%TYPE,
    o_sal   OUT NUMBER) IS
BEGIN
   UPDATE emp
   SET    sal = sal * 1.10
   WHERE  empno = p_in_id
   RETURNING sal INTO o_sal;
END raise_salary;

SQL> VARIABLE g_sal NUMBER
SQL> EXECUTE raise_salary(7839, :g_sal)
SQL> PRINT g_sal
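The RETURNING clause is equally useful with DELETE, where it removes the need to SELECT the row before deleting it. A minimal sketch against the same emp table (the procedure name and the use of DBMS_OUTPUT are illustrative, not from the lesson):

CREATE OR REPLACE PROCEDURE remove_employee (p_in_id IN emp.empno%TYPE) IS
   v_name emp.ename%TYPE;
   v_sal  emp.sal%TYPE;
BEGIN
   -- Delete the row and capture its column values in the same round trip
   DELETE FROM emp
   WHERE  empno = p_in_id
   RETURNING ename, sal INTO v_name, v_sal;

   DBMS_OUTPUT.PUT_LINE('Removed ' || v_name || ' (salary was ' || v_sal || ')');
END remove_employee;
/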

Bulk Binding
The Oracle server uses two engines to run PL/SQL blocks and subprograms: the PL/SQL engine and the SQL engine. The PL/SQL engine runs procedural statements but sends SQL statements to the SQL engine, which parses and executes the SQL statement and, in some cases, returns data to the PL/SQL engine. During execution, every SQL statement causes a context switch between the two engines, which results in a performance penalty. Performance can be improved substantially by minimizing the number of context switches required to run a particular block or subprogram.

When a SQL statement runs inside a loop that uses collection elements as bind variables, the large number of context switches required by the block can cause poor performance. As you have already learned, collections include nested tables, VARRAYs, index-by tables, and host arrays.

Binding is the assignment of values to PL/SQL variables in SQL statements. Bulk binding is binding an entire collection at once. Without bulk binding, the elements in a collection are sent to the SQL engine individually, whereas bulk binds pass the entire collection back and forth between the two engines.

Improved Performance
Using bulk binding, you can improve performance by reducing the number of context switches required to run SQL statements that use collection elements. With bulk binding, entire collections, not just individual elements, are passed back and forth. The more rows affected by a SQL statement, the greater the performance gain with bulk binding. Bind whole arrays of values at once, rather than looping to perform fetch, insert, update, and delete operations on multiple rows.

Keywords to support bulk binding:
FORALL instructs the PL/SQL engine to bulk-bind input collections before sending them to the SQL engine.

FORALL index IN lower_bound..upper_bound
   sql_statement;

BULK COLLECT instructs the SQL engine to bulk-bind output collections before returning them to the PL/SQL engine.

... BULK COLLECT INTO collection_name[, collection_name] ...
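For instance, BULK COLLECT can fill collections from a query in a single operation instead of fetching row by row. A minimal sketch against the emp table (the collection types and variable names are assumptions, not taken from the lesson):

DECLARE
   TYPE NameList IS TABLE OF emp.ename%TYPE;
   TYPE SalList  IS TABLE OF emp.sal%TYPE;
   v_names NameList;
   v_sals  SalList;
BEGIN
   -- A single context switch returns every matching row into the collections
   SELECT ename, sal
   BULK COLLECT INTO v_names, v_sals
   FROM   emp
   WHERE  deptno = 30;

   DBMS_OUTPUT.PUT_LINE(v_names.COUNT || ' rows fetched in one operation');
END;
/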

Using Bulk Binding


Use bulk binds to improve the performance of:
DML statements that reference collection elements
SELECT statements that reference collection elements
Cursor FOR loops that reference collections and the RETURNING INTO clause

Keywords to Support Bulk Binding
FORALL: The keyword FORALL instructs the PL/SQL engine to bulk-bind input collections before sending them to the SQL engine. Although the FORALL statement contains an iteration scheme, it is not a FOR loop.
BULK COLLECT: The keywords BULK COLLECT instruct the SQL engine to bulk-bind output collections before returning them to the PL/SQL engine. This allows you to bind locations into which SQL can return retrieved values in bulk. Thus you can use these keywords in the SELECT INTO, FETCH INTO, and RETURNING INTO clauses.

DECLARE
   TYPE Numlist IS VARRAY(100) OF NUMBER;
   Id NUMLIST := NUMLIST(7902, 7698, 7839);
BEGIN
   FORALL i IN Id.FIRST..Id.LAST   -- bulk-bind the VARRAY
      UPDATE emp
      SET    sal = 1.1 * sal
      WHERE  mgr = Id(i);
END;
/

Bulk Binding Example
In the example above, the PL/SQL block increases the salary for employees whose manager's ID number is 7902, 7698, or 7839. It uses the FORALL keyword to bulk-bind the collection. Note that a looping construct is no longer required when using this feature. Without bulk binding, the PL/SQL block would have sent a SQL statement to the SQL engine for each employee that is updated. If there are many employees to update, then the large number of context switches between the PL/SQL engine and the SQL engine can hurt performance. However, the FORALL keyword bulk-binds the collection to improve performance.

DECLARE
   TYPE Emplist IS VARRAY(100) OF NUMBER;
   TYPE Numlist IS TABLE OF emp.sal%TYPE;
   Empids  EMPLIST := EMPLIST(7369, 7499, 7521, 7566, 7654, 7698);
   Bonlist NUMLIST;
BEGIN
   FORALL i IN Empids.FIRST..Empids.LAST
      UPDATE emp
      SET    comm = 0.1 * sal
      WHERE  empno = Empids(i)
      RETURNING sal BULK COLLECT INTO Bonlist;
END;
/

Using BULK COLLECT INTO
Bulk binds can be used to improve the performance of FOR loops that reference collections and return DML results. If you have, or plan to have, PL/SQL code that does this, you can use the FORALL keyword along with the BULK COLLECT INTO keywords to improve performance.

In the example shown, the sal information is retrieved from the emp table and collected into the Bonlist collection. The Bonlist collection is returned in bulk to the PL/SQL engine.

Instructor Note
You can demonstrate the code shown in the slide with the 7_8s.sql script.
Demo: bulk_bind.sql
Purpose: Display the effects of using bulk binds. In the code, 5,000 part numbers and names are loaded into index-by tables. Then, all table elements are inserted into a database table twice. First, they are inserted using a FOR loop. Then, they are bulk-inserted using a FORALL statement. When you execute the code, a message displays the execution time in seconds for both the FOR loop and the FORALL statement. Point out that the FORALL statement takes less time.
Step: 1. Execute the code contained in bulk_bind.sql.

Rephrase Conditional Control Statements
In logical expressions, PL/SQL stops evaluating the expression as soon as the result is determined:

IF (sal < 1500) OR (comm IS NULL) THEN
   ...
END IF;

IF credit_ok(cust_id) AND (loan < 5000) THEN
   ...
END IF;

In logical expressions, improve performance by tuning conditional constructs carefully. When evaluating a logical expression, PL/SQL stops evaluating the expression as soon as the result can be determined. For example, in the first statement above, which involves an OR expression, when the value of sal is less than 1,500 the left operand yields TRUE, so PL/SQL need not evaluate the right operand (because OR returns TRUE if either of its operands is true). Now consider the second statement, which involves an AND expression. The Boolean function CREDIT_OK is always called. However, if you switch the operands of AND as follows, the function is called only when the expression loan < 5000 is true (because AND returns TRUE only if both its operands are true):

IF (loan < 5000) AND credit_ok(cust_id) THEN
   ...
END IF;

Instructor Note
Demo: conditio.sql
Purpose: Demonstrate the code evaluating a logical expression. PL/SQL stops evaluating the expression as soon as the result can be determined.
Step: 1. Execute the script conditio.sql.

Avoid Implicit Data Type Conversion
PL/SQL performs implicit conversions between structurally different data types. Implicit conversion is common with NUMBER data types, for example, when assigning a PLS_INTEGER variable to a NUMBER variable.

PL/SQL does implicit conversions between structurally different types at run time. Currently, this is true even when the source item is a literal constant. Avoiding implicit conversions can improve performance. A common case where implicit conversions result in a performance penalty, but can be avoided, is with numeric types. For instance, assigning a PLS_INTEGER variable to a NUMBER variable, or vice versa, results in a conversion because their representations are different. Such implicit conversions can also happen during parameter passing. Consider the following example:

DECLARE
   n NUMBER;
BEGIN
   n := n + 15;     -- converted
   n := n + 15.0;   -- not converted
   ...
END;

The integer literal 15 is represented internally as a signed 4-byte quantity, so PL/SQL must convert it to an Oracle number before the addition. However, the floating-point literal 15.0 is represented as a 22-byte Oracle number, so no conversion is necessary.

Prevent numeric-to-character type conversion. Consider the following statement:

char_variable := 10;

The literal 10 is converted to CHAR at run time and then copied. Instead, use:

char_variable := '10';

Use PLS_INTEGER for All Integer Operations
When you need to declare an integer variable, use the PLS_INTEGER data type, which is the most efficient numeric type. PLS_INTEGER values require less storage than INTEGER or NUMBER values, which are represented internally as 22-byte Oracle numbers. Also, PLS_INTEGER operations use machine arithmetic, so they are faster than BINARY_INTEGER, INTEGER, or NUMBER operations, which use library arithmetic. Furthermore, INTEGER, NATURAL, NATURALN, POSITIVE, POSITIVEN, and SIGNTYPE are constrained subtypes. Their variables require precision checking at run time, which can affect performance.

Use Index-by Tables of Records and Objects
In PL/SQL version 2.3, support for index-by tables of records was added. Prior to that, developers modeled tables of records as a series of index-by tables of scalars (one for each record attribute). You should strongly consider rewriting such applications to use the table-of-records functionality. Doing so gives you the following benefits:
Better application performance: The number of index-by table lookup, insert, copy, and other operations is far smaller if the data is packaged into an index-by table of records.
Improved memory utilization: There is memory overhead associated with each index-by table, primarily because of its tree-structured implementation. Having fewer index-by tables helps cut this overhead and also reduces memory fragmentation caused by the varying size allocations in different tables.

Instructor Note
Demo: index_by.sql
Purpose: Demonstrate the use of cursors and index-by table attributes available in Oracle 7.3 and later.
Step: 1. Execute the script index_by.sql.

Passing Records as Arguments
You can declare user-defined records as formal parameters of procedures and functions.

Functions That Return a Record
When calling a function that returns a record, use the following syntax to reference fields in the record:

function_name(parameters).field_name

For example, the following call to the NTH_HIGHEST_SAL function references the field salary in the EMP_INFO record:

DECLARE
   TYPE EmpRec IS RECORD (
      emp_id    NUMBER(4),
      job_title CHAR(14),
      salary    NUMBER(7,2));
   middle_sal REAL;

   FUNCTION nth_highest_sal (n INTEGER) RETURN EmpRec IS
      emp_info EmpRec;
   BEGIN
      ...
      RETURN emp_info;   -- return record
   END;
BEGIN
   middle_sal := nth_highest_sal(10).salary;   -- call function
   ...
END;

Passing Index-by Tables as Arguments
You can declare index-by tables (PL/SQL tables) as formal parameters of procedures and functions, for example, as the formal parameters of packaged procedures.

Instructor Note
You cannot bind a host variable to a PL/SQL record. You can bind a host variable to a PL/SQL index-by table of scalars, but not to a table of records. You can bind host variables to objects and collections of persistent types.

The NOT NULL Constraint
In PL/SQL, using the NOT NULL constraint incurs a small performance cost, so use it with care. Consider a declaration that constrains a variable m with NOT NULL. Because m is constrained by NOT NULL, the value of the expression a + b is assigned to a temporary variable, which is then tested for nullity. If the variable is not null, its value is assigned to m. Otherwise, an exception is raised. However, if m were not constrained, the value would be assigned to m directly. A more efficient approach is to leave m unconstrained and test for nullity explicitly only where required (a sketch of both versions follows). Note that the subtypes NATURALN and POSITIVEN are defined as NOT NULL subtypes of NATURAL and POSITIVE. Using them incurs the same performance cost.

Rules for NULLs
When working with NULLs, you can avoid some common mistakes by keeping in mind the following rules:
Comparisons involving NULLs always yield NULL.
Applying the logical operator NOT to a NULL yields NULL.
In conditional control statements, if the condition yields NULL, its associated sequence of statements is not executed. For example, in a test such as IF x != y THEN, you might expect the sequence of statements to execute because x and y seem unequal. But NULLs are indeterminate: whether or not x is equal to y is unknown. Therefore, the IF condition yields NULL and the sequence of statements is bypassed.
PL/SQL treats any zero-length string like a NULL. This includes values returned by character functions and Boolean expressions, so expressions that produce a zero-length string assign NULL to the target variable. Use the IS NULL operator to test for null strings, as follows:

IF my_string IS NULL THEN ...

Use the concatenation operator with care when an expression involves NULLs, because the concatenation operator ignores NULL operands.
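The slide code contrasting the two NOT NULL approaches is not reproduced in this text; a minimal sketch of what the two versions might look like follows (the variable names and values are assumptions):

-- Constrained: a + b goes into a hidden temporary, which is tested for
-- nullity before being copied into m
DECLARE
   a NUMBER := 10;
   b NUMBER := 5;
   m NUMBER NOT NULL := 0;
BEGIN
   m := a + b;
END;
/

-- Unconstrained (more efficient): assign directly and test only where needed
DECLARE
   a NUMBER := 10;
   b NUMBER := 5;
   m NUMBER;
BEGIN
   m := a + b;
   IF m IS NULL THEN
      RAISE_APPLICATION_ERROR(-20001, 'm must not be null');
   END IF;
END;
/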

Provides native support for dynamic SQL directly in the PL/SQL language
Provides the ability to dynamically execute SQL statements whose complete text is unknown until execution time
Is used to place dynamic SQL statements directly into PL/SQL blocks
Uses these statements to support this feature: EXECUTE IMMEDIATE, OPEN, FETCH, and CLOSE

What Is Native Dynamic SQL?


In past releases of the Oracle server, the only way to implement dynamic SQL in a PL/SQL application was by using the dbms_sql package. Oracle8i introduces native dynamic SQL, an alternative to the dbms_sql package. Using native dynamic SQL, you can place dynamic SQL statements directly into PL/SQL blocks. Native dynamic SQL in PL/SQL is easier to use than dbms_sql, requires much less application code, and performs better.

Native dynamic SQL provides the ability to dynamically execute SQL statements whose complete text is not known until execution time. The dynamic statements can be data manipulation language (DML) statements (including queries), PL/SQL anonymous blocks, data definition language (DDL) statements, transaction control statements, session control statements, and so on.

The following statements have been added or extended in PL/SQL to support native dynamic SQL:
EXECUTE IMMEDIATE: Prepares a statement, executes it, returns variables, and then deallocates resources
OPEN: Prepares and executes a statement
FETCH: Retrieves the results of an opened statement
CLOSE: Closes the cursor and deallocates resources

Also, bind arguments can be specified for the dynamic parameters in the EXECUTE IMMEDIATE and OPEN statements. Native dynamic SQL includes the capability to bind to, or define for, a dynamic statement instances of any SQL data type supported in PL/SQL, and the ability to handle IN, IN OUT, and OUT bind variables, which are bound by position, not by name.

CREATE PROCEDURE insert_into_table (
   table_name VARCHAR2,
   deptnumber NUMBER,
   deptname   VARCHAR2,
   location   VARCHAR2) IS
   stmt_str   VARCHAR2(200);
BEGIN
   stmt_str := 'INSERT INTO ' || table_name ||
               ' VALUES (:deptno, :dname, :loc)';
   EXECUTE IMMEDIATE stmt_str
      USING deptnumber, deptname, location;
END insert_into_table;
/

Native Dynamic SQL Example


The preceding example uses the native dynamic SQL feature. The INSERT statement is built at run time in the stmt_str string variable, using values passed in as arguments to the insert_into_table procedure. The SQL statement held in stmt_str is then executed by the EXECUTE IMMEDIATE statement. The bind variables :deptno, :dname, and :loc are bound to the arguments of the USING clause, which in this case are the parameters deptnumber, deptname, and location. To achieve the same result using dbms_sql, you need to write many more lines of code, as seen in the equivalent code example that follows.

Code Example: dbms_sql
The following example performs the same operation as the preceding example, but uses dbms_sql instead of native dynamic SQL.

CREATE PROCEDURE insert_into_table (
   table_name VARCHAR2,
   deptnumber NUMBER,
   deptname   VARCHAR2,
   location   VARCHAR2) IS
   cur_hdl        INTEGER;
   stmt_str       VARCHAR2(200);
   rows_processed BINARY_INTEGER;
BEGIN
   stmt_str := 'INSERT INTO ' || table_name ||
               ' VALUES (:deptno, :dname, :loc)';
   -- open cursor
   cur_hdl := dbms_sql.open_cursor;
   -- parse cursor
   dbms_sql.parse(cur_hdl, stmt_str, dbms_sql.native);
   -- supply binds
   dbms_sql.bind_variable(cur_hdl, ':deptno', deptnumber);
   dbms_sql.bind_variable(cur_hdl, ':dname', deptname);
   dbms_sql.bind_variable(cur_hdl, ':loc', location);
   -- execute cursor
   rows_processed := dbms_sql.execute(cur_hdl);
   -- close cursor
   dbms_sql.close_cursor(cur_hdl);
END;
/
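The examples above use EXECUTE IMMEDIATE for single-row DML. Native dynamic SQL also handles multirow queries through the OPEN-FOR, FETCH, and CLOSE statements listed earlier. A minimal sketch (the procedure name, cursor variable, and bind value are illustrative assumptions):

CREATE OR REPLACE PROCEDURE list_high_earners (p_table_name VARCHAR2) IS
   TYPE EmpCurTyp IS REF CURSOR;    -- weak REF CURSOR type
   emp_cv  EmpCurTyp;
   v_ename emp.ename%TYPE;
   v_sal   emp.sal%TYPE;
BEGIN
   -- The table name is not known until run time; the salary limit is bound
   OPEN emp_cv FOR
      'SELECT ename, sal FROM ' || p_table_name || ' WHERE sal > :limit'
      USING 2000;
   LOOP
      FETCH emp_cv INTO v_ename, v_sal;
      EXIT WHEN emp_cv%NOTFOUND;
      DBMS_OUTPUT.PUT_LINE(v_ename || ': ' || v_sal);
   END LOOP;
   CLOSE emp_cv;
END list_high_earners;
/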

Advantages of Native Dynamic SQL Over dbms_sql


Is easier to use than dbms_sql and requires less code
Enhances performance because the PL/SQL interpreter provides native support for it

Advantages of Using Native Dynamic SQL
Native dynamic SQL provides the following advantages over the dbms_sql package:

Ease of Use
Native dynamic SQL is much simpler to use than the dbms_sql package. Because native dynamic SQL is integrated with SQL, you can use it in the same way that you currently use static SQL within PL/SQL code. In addition, native dynamic SQL code is typically more compact and readable than equivalent code that uses the dbms_sql package. The dbms_sql package is not as easy to use as native dynamic SQL: there are many procedures and functions that must be used in a strict sequence, and performing even simple operations typically requires a large amount of code, as seen in the preceding examples.

Performance Improvement
Native dynamic SQL performs significantly better than dbms_sql because the PL/SQL interpreter provides native support for it. The dbms_sql approach is based on a procedural API and, as a result, suffers from high procedure-call and data-copy overhead. For example, on every bind, the dbms_sql package implementation copies the PL/SQL bind variable into its own space for use during the execution phase. Similarly, on every fetch, the data is first copied into the space that the dbms_sql package manages, and then the fetched data is copied, one column at a time, into the appropriate PL/SQL variables, which results in substantial data-copying overhead.

Advantages of Native Dynamic SQL Over dbms_sql


Supports all types supported by static SQL in PL/SQL, including user-defined types
Can fetch rows directly into PL/SQL records

Advantages of Using Native Dynamic SQL (continued)

Support for User-Defined Types
Native dynamic SQL supports all of the types supported by static SQL in PL/SQL. Therefore, native dynamic SQL provides support for user-defined types, such as user-defined objects, collections, and REFs. The dbms_sql package does not support these user-defined types, although it has limited support for arrays.

Support for Fetching into Records
With native dynamic SQL, the rows resulting from a query can be fetched directly into PL/SQL records. The dbms_sql package does not support fetching into records.

Advantages of the dbms_sql Package
While native dynamic SQL offers ease of use and better performance, the dbms_sql package provides a few advantages over native dynamic SQL:
The dbms_sql package is supported in client-side programs, but native dynamic SQL is not.
You can use the describe_columns procedure in the dbms_sql package to describe the columns for a cursor opened and parsed through dbms_sql. Native dynamic SQL does not have a describe facility.
Bulk SQL is the ability to process multiple rows of data in a single DML statement; it improves performance by reducing the amount of context switching between SQL and the host language. Currently, the dbms_sql package supports bulk dynamic SQL. Although there is no direct support for bulk operations in native dynamic SQL, you can simulate a native dynamic bulk SQL statement by placing the bulk SQL statement in a BEGIN ... END block and executing the block dynamically (see the sketch below).
The dbms_sql package supports statements with a RETURNING clause that update or delete multiple rows. Native dynamic SQL supports a RETURNING clause only if a single row is returned.
The PARSE procedure in the dbms_sql package parses a SQL statement once. After the initial parsing, the statement can be used multiple times with different sets of bind arguments. In contrast, native dynamic SQL prepares a SQL statement for execution each time the statement is used.
Note: Preparing a statement each time it is used incurs a small performance penalty. However, Oracle's shared cursor mechanism minimizes the cost, and the penalty is typically trivial compared to the performance benefits of native dynamic SQL.
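As noted above, a bulk operation can be simulated in native dynamic SQL by wrapping it in an anonymous block and executing that block dynamically. A hedged sketch of the idea, assuming the emp table and an illustrative five-percent raise:

DECLARE
   plsql_block VARCHAR2(1000);
BEGIN
   -- The bulk statements live inside the dynamically executed block, so no
   -- collection has to be bound across the dynamic SQL boundary
   plsql_block :=
      'DECLARE
          TYPE IdList IS TABLE OF emp.empno%TYPE;
          v_ids IdList;
       BEGIN
          SELECT empno BULK COLLECT INTO v_ids
          FROM   emp
          WHERE  deptno = :dept;
          IF v_ids.COUNT > 0 THEN
             FORALL i IN 1 .. v_ids.COUNT
                UPDATE emp SET sal = sal * 1.05 WHERE empno = v_ids(i);
          END IF;
       END;';
   EXECUTE IMMEDIATE plsql_block USING 30;
END;
/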

SQL Versus PL/SQL


SQL processes sets of data as groups rather than as individual units. Its statements are individually complex and powerful, and therefore stand alone. The flow-control statements of most programming languages are absent in SQL, although they are provided in Oracle's extension to standard SQL, PL/SQL. While there are advantages to using PL/SQL over SQL in several cases, use PL/SQL with caution, especially under the following circumstances:
When performing high-volume inserts
When using user-defined PL/SQL functions
When using external procedure calls
When using the utl_file package as an alternative to SQL*Plus in high-volume reporting circumstances

SQL processes sets of data as a group rather than as individual units. PL/SQL offers flow-control statements that are absent in SQL. Use PL/SQL with caution when:
Performing high-volume inserts
Using user-defined PL/SQL functions
Using external procedure calls
Using the utl_file package
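The slide statement that the next paragraph refers to is not reproduced here; it contrasted a single set-based statement with a row-by-row loop along the following lines (emp_archive is an assumed example table; big_emp appears later in this section):

-- One set-based statement: the SQL engine inserts every matching row
INSERT INTO emp_archive (empno, ename, sal, deptno)
SELECT empno, ename, sal, deptno
FROM   big_emp
WHERE  deptno = 30;

-- The equivalent row-by-row PL/SQL loop is markedly slower
BEGIN
   FOR rec IN (SELECT empno, ename, sal, deptno
               FROM   big_emp
               WHERE  deptno = 30) LOOP
      INSERT INTO emp_archive (empno, ename, sal, deptno)
      VALUES (rec.empno, rec.ename, rec.sal, rec.deptno);
   END LOOP;
END;
/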

SQL Versus PL/SQL (continued)
A set-based SQL statement such as the one sketched above is a great deal faster than the equivalent PL/SQL loop. The availability of PL/SQL may cause some developers to write procedural code when SQL would work better. Simple set-processing operations can run markedly faster than the equivalent PL/SQL loop.

Correlated Updates
However, there are occasions when you get better performance from PL/SQL even when the process could be written in SQL. Correlated updates are slow; a better method is to access only the required rows using PL/SQL. The following PL/SQL loop is faster than the equivalent correlated update SQL statement.

DECLARE
   CURSOR raise IS
      SELECT deptno, increase FROM emp_raise;
BEGIN
   FOR dept IN raise LOOP
      UPDATE big_emp
      SET    sal = sal * dept.increase
      WHERE  deptno = dept.deptno;
   END LOOP;
   ...
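For comparison, the correlated UPDATE that the loop above replaces might look like the following sketch (it assumes the same big_emp and emp_raise tables; the EXISTS test keeps departments without a raise untouched):

UPDATE big_emp e
SET    sal = sal * (SELECT r.increase
                    FROM   emp_raise r
                    WHERE  r.deptno = e.deptno)
WHERE  EXISTS (SELECT 1
               FROM   emp_raise r
               WHERE  r.deptno = e.deptno);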

Tracing PL/SQL Execution


In large and complex PL/SQL applications, it can sometimes be difficult to keep track of subprogram calls when a number of them call each other. By tracing your PL/SQL code, you can get a clearer idea of the paths and order in which your programs execute. While a facility to trace your SQL code has been around for a while, Oracle now provides an API for tracing the execution of PL/SQL programs on the server. You can use the Trace API, implemented on the server as the dbms_trace package, to trace PL/SQL subprogram code.
Note: You cannot use PL/SQL tracing with the multithreaded server (MTS).

Tracing PL/SQL execution gives you a better understanding of the program execution path and is done by using the dbms_trace package.

The dbms_trace Programs
dbms_trace provides subprograms to start and stop PL/SQL tracing in a session. The trace data is collected as the program executes and is written out either to data dictionary tables (starting with Oracle8i, release 8.1.6) or to an Oracle server trace file (prior to release 8.1.6). A typical trace session involves:
Enabling specific subprograms for trace data collection (optional)
Starting the PL/SQL tracing session (dbms_trace.set_plsql_trace)
Running the application that is to be traced
Stopping the PL/SQL tracing session (dbms_trace.clear_plsql_trace)

Instructor Note

The trace level constants are explained below. There are integer values that correspond to each trace level, but it is recommended that you use the constant rather than the integer value.

Using set_plsql_trace, select a trace level to identify how to trace calls, exceptions, SQL, and lines of code. Trace level constants:
trace_all_calls
trace_enabled_calls
trace_all_sql
trace_enabled_sql
trace_all_exceptions
trace_enabled_exceptions
trace_all_lines
trace_enabled_lines

Specifying a Trace Level
During the trace session, there are two levels you can specify to trace calls, exceptions, SQL, and lines of code.
Trace Calls
Level 1: Trace all calls. This corresponds to the constant trace_all_calls.
Level 2: Trace calls to enabled program units only. This corresponds to the constant trace_enabled_calls.
Trace Exceptions
Level 1: Trace all exceptions. This corresponds to trace_all_exceptions.
Level 2: Trace exceptions raised in enabled program units only. This corresponds to trace_enabled_exceptions.
Trace SQL
Level 1: Trace all SQL. This corresponds to the constant trace_all_sql.
Level 2: Trace SQL in enabled program units only. This corresponds to the constant trace_enabled_sql.
Trace Lines
Level 1: Trace all lines. This corresponds to the constant trace_all_lines.
Level 2: Trace lines in enabled program units only. This corresponds to the constant trace_enabled_lines.

Steps to trace PL/SQL code:
1. Enable specific program units for trace data collection.
2. Use dbms_trace.set_plsql_trace to identify a trace level.
3. Start tracing by running your PL/SQL code.
4. Use dbms_trace.clear_plsql_trace to stop tracing data.
5. Read and interpret the trace information.

Steps to Trace PL/SQL Code


To trace PL/SQL code using the dbms_trace package, there are five steps:
1. Enable specific program units for trace data collection.
2. Use dbms_trace.set_plsql_trace to identify a trace level.
3. Run your PL/SQL code.
4. Use dbms_trace.clear_plsql_trace to stop tracing data.
5. Read and interpret the trace information.
The following sections demonstrate each step.

Enable specific subprograms with one of two methods:
Enable a subprogram by compiling it in a session that has the debug option set:

ALTER SESSION SET PLSQL_DEBUG = TRUE;
CREATE OR REPLACE ...

Or recompile a specific subprogram with the debug option:

ALTER [PROCEDURE | FUNCTION | PACKAGE BODY] <subprogram-name> COMPILE DEBUG;

Step 1: Enable Specific Subprograms
Profiling large applications may produce a huge volume of data that can be difficult to manage. Before turning on the trace facility, you have the option to control the volume of data collected by enabling specific subprograms for trace data collection. You can enable a subprogram by compiling it with the debug option, in one of two ways:
Set the debug option for the session and then compile the program unit by using the CREATE OR REPLACE syntax:

ALTER SESSION SET PLSQL_DEBUG = TRUE;
CREATE OR REPLACE ...

Or, alternatively, recompile a specific subprogram with the debug option:

ALTER [PROCEDURE | FUNCTION | PACKAGE BODY] <subprogram-name> COMPILE DEBUG;

Note: The second method cannot be used for anonymous blocks.
Enabling specific subprograms allows you to:
Limit and control the amount of trace data, especially in large applications.
Obtain additional trace information that is otherwise not available. For example, during the tracing session, if a subprogram calls another subprogram, the name of the called subprogram is included in the trace data only if the calling subprogram was enabled by compiling it with the debug option.

Steps 2 and 3: Identify a Trace Level and Start Tracing
Identify the trace level by using dbms_trace.set_plsql_trace, and then execute the program to be traced:

EXECUTE DBMS_TRACE.SET_PLSQL_TRACE (tracelevel1 + tracelevel2 ...)
EXECUTE my_program

For example:

EXECUTE DBMS_TRACE.SET_PLSQL_TRACE (DBMS_TRACE.trace_all_calls)

Note: To specify additional trace levels in the argument, use the + sign between the trace level values.
Execute the PL/SQL code. The trace data is written either to the Oracle server trace file or to the data dictionary views.

Step 4: Turn Off Tracing
When you are done tracing the PL/SQL program unit, turn tracing off by executing dbms_trace.clear_plsql_trace:

EXECUTE DBMS_TRACE.CLEAR_PLSQL_TRACE

This stops any further writing to the trace file. To avoid the overhead of writing trace information, turn off tracing when you are not using it.

Instructor Note
Demo: trace_it.sql
Purpose: Create a procedure named raise_sal, which is then traced by using dbms_trace.
Steps:
1. Execute the script trace_it.sql.
2. Follow the prompts in the script.
3. Find the trace information. For Oracle8i release 8.1.6 and later, look in the data dictionary views plsql_trace_runs and plsql_trace_events. Prior to release 8.1.6, look in your Oracle home\UDUMP directory for a .trc file.
4. Display the contents of the trace information to the class.

Examine the trace information in either the data dictionary or in trace files:

Oracle8i, release 8.1.6 and later, uses dictionary views; prior to release 8.1.6, trace files are generated. Call tracing writes out the program unit type, name, and stack depth. Exception tracing writes out the line number.

Step 5: Examine the Trace Information
Lower trace levels supersede higher levels when tracing is activated for multiple tracing levels. If tracing is requested only for enabled subprograms and the current subprogram is not enabled, then no trace data is written. If the current subprogram is enabled, then call tracing writes out the subprogram type, name, and stack depth. If the current subprogram is not enabled, then call tracing writes out the subprogram type, line number, and stack depth. Exception tracing writes out the line number. Raising the exception shows whether the exception is user defined or predefined and, in the case of predefined exceptions, the exception number.

Instructor Note
An enabled subprogram is one that is compiled with the debug option.

The plsql_trace_runs and plsql_trace_events Dictionary Views
Starting with Oracle8i, release 8.1.6, all trace information is written to the dictionary views plsql_trace_runs and plsql_trace_events. These views are created (typically by a DBA) by running the tracetab.sql script. After the script is run, you need the SELECT privilege to view information from these dictionary views.

SELECT proc_name, proc_line, event_proc_name, event_comment
FROM   plsql_trace_events
WHERE  event_proc_name = 'RAISE_SAL'
OR     proc_name = 'RAISE_SAL';

Query the plsql_trace_runs and plsql_trace_events views to see trace information generated by using the dbms_trace facility. plsql_trace_runs holds generic information about traced programs, such as the date, time, owner, and name of the traced stored program. plsql_trace_events holds more specific information on the traced subprograms.

The Trace File
Prior to release 8.1.6, trace files are generated. A file with a .trc extension is created during the tracing and is placed in the UDUMP directory under your Oracle home (for example, Oracle_home\ADMIN\o8i\UDUMP). The information in the trace file includes the program unit type, name, and stack depth. If exceptions are traced, the line on which the exception occurred is placed in the trace file.
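Putting the five steps together, a typical trace session from SQL*Plus might look like the following sketch. It reuses the raise_salary procedure from earlier in the lesson and assumes release 8.1.6 or later with the tracetab.sql views already created:

-- Step 1: enable the program unit for trace data collection
ALTER PROCEDURE raise_salary COMPILE DEBUG;

-- Step 2: choose the trace levels (add constants with +)
EXECUTE DBMS_TRACE.SET_PLSQL_TRACE(DBMS_TRACE.trace_enabled_calls + DBMS_TRACE.trace_enabled_exceptions)

-- Step 3: run the code to be traced
VARIABLE g_sal NUMBER
EXECUTE raise_salary(7839, :g_sal)

-- Step 4: stop collecting trace data
EXECUTE DBMS_TRACE.CLEAR_PLSQL_TRACE

-- Step 5: examine the results in the dictionary views
SELECT proc_name, proc_line, event_comment
FROM   plsql_trace_events;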

Identify and Tune Memory Issues


Tuning the Size of the Shared Pool of the SGA
When you invoke a program element, such as a procedure or a package, its compiled version is loaded into the shared pool memory area if it is not already present there. It remains there until the memory is needed by other resources and the package has not been used recently. If it is flushed out of memory, then the next time any object in the package is needed, the whole package has to be loaded into memory again, which costs time and requires work to make space for it. If the package is already present in the shared memory area, your code executes faster. It is therefore important to make sure that packages that are used very frequently are always present in memory. The larger the shared pool area, the more likely it is that the package remains in memory. However, if the shared pool area is too large, you waste memory. When tuning the shared pool, make sure it is large enough to hold all the frequently needed objects in your application.
Note: Tuning the shared pool is usually a DBA's responsibility.

What Is Pinning a Package?
Pinning is used so that objects avoid the Oracle least recently used (LRU) mechanism and do not get flushed out of memory. Sizing the shared pool properly is one way of ensuring that frequently used objects are available in memory whenever needed, so that performance improves. Another way to improve performance is to pin frequently used packages in the shared pool. When a package is pinned, it is not aged out with the normal least recently used (LRU) mechanism that the Oracle server otherwise uses to flush out the least recently used packages. The package remains in memory no matter how full the shared pool gets or how frequently you access the package.

You pin packages with the help of the dbms_shared_pool package, which contains three procedures:
dbms_shared_pool.keep
dbms_shared_pool.unkeep
dbms_shared_pool.sizes

Instructor Note
To create the dbms_shared_pool package, run the dbmspool.sql script. The prvtpool.plb script is automatically executed after dbmspool.sql runs. These scripts are not run by catproc.sql.

Syntax:

DBMS_SHARED_POOL.KEEP(object_name, flag)
DBMS_SHARED_POOL.UNKEEP(object_name, flag)

BEGIN
   DBMS_SHARED_POOL.KEEP('SCOTT.EMP_PACK', 'P');
   ...
   DBMS_SHARED_POOL.UNKEEP('SCOTT.EMP_PACK', 'P');
   ...
END;

Using dbms_shared_pool
You can pin and unpin packages, procedures, functions, types, triggers, and sequences. This may be useful for certain semifrequently used large objects (larger than 20 KB), because when large objects are brought into the shared pool, a large number of other objects (amounting to much more than the size of the object being brought in) may need to be aged out in order to create a contiguous area large enough. Pinning occurs when the dbms_shared_pool.keep procedure is invoked.

Guidelines:
Pin objects only when necessary.
The keep procedure first queues an object for pinning before loading it.
Pin all objects soon after instance startup to ensure contiguous blocks of memory.

Guidelines for Pinning Objects
Pin objects only when necessary; otherwise you may end up setting aside too much memory, which can have a negative impact on performance. The keep procedure does not immediately load a package into the shared pool; it queues the package for pinning. The package is loaded into the shared pool only when the package is first referenced, either to execute a module or to use one of its declared objects, such as a global variable or a cursor. Pin all your objects in the shared pool as soon after instance startup as possible, so that contiguous blocks of memory can be set aside for large objects.
Note: In Oracle8i, you can create a trigger that fires when the database is opened (STARTUP). Using this trigger is a good way to pin packages at the very beginning (a sketch follows).

Instructor Note
Demo: Demonstrate the dbms_shared_pool.sizes procedure by typing:
EXECUTE sys.DBMS_SHARED_POOL.SIZES(200)
Purpose: This displays all items in the shared pool with a size greater than 200 KB.
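A sketch of such a startup trigger, assuming the SCOTT.EMP_PACK package used earlier and a user with the privileges needed to create database-level triggers:

CREATE OR REPLACE TRIGGER pin_packages_on_startup
   AFTER STARTUP ON DATABASE
BEGIN
   -- Queue the frequently used package for pinning while the shared pool
   -- still has large contiguous chunks of memory available
   DBMS_SHARED_POOL.KEEP('SCOTT.EMP_PACK', 'P');
END;
/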

Guidelines for Reducing Network Traffic


Reducing network traffic is one of the key components of tuning, because network issues affect performance. When your code is passed to the database, a significant amount of time is spent in the network. The following are some guidelines for reducing network traffic to improve performance.

When passing host cursor variables to PL/SQL, you can reduce network traffic by grouping OPEN-FOR statements. For example, the following PL/SQL block opens five cursor variables in a single round trip:

/* anonymous PL/SQL block in host environment */
BEGIN
   OPEN :emp_cv   FOR SELECT * FROM emp;
   OPEN :dept_cv  FOR SELECT * FROM dept;
   OPEN :grade_cv FOR SELECT * FROM salgrade;
   OPEN :pay_cv   FOR SELECT * FROM payroll;
   OPEN :ins_cv   FOR SELECT * FROM insurance;
END;

When you pass host cursor variables to a PL/SQL block for opening, the query work areas to which they point remain accessible after the block completes, so your OCI or Pro*C program can use these work areas for ordinary cursor operations. When finished, simply close the cursors.

Guidelines for reducing network traffic:
Group OPEN-FOR statements when passing host cursor variables to PL/SQL.
Use client-side PL/SQL when possible.
Avoid unnecessary reparsing.
Utilize array processing.

Guidelines for Reducing Network Traffic (continued)
If your application is written using development tools that have a PL/SQL engine in the client tool, as in the Oracle Developer tools, and the code is not SQL intensive, reduce the load on the server by doing more of your work in the client and letting the client-side PL/SQL engine handle your PL/SQL code.

When a PL/SQL block is sent from the client to the server, the client can keep a reference to the parsed statement. This reference is the statement handle when using OCI, or the cursor cache entry when using precompilers. If your application is likely to issue the same code more than once, it needs to parse it only the first time. For all subsequent executions, the original parsed statement can be used, possibly with different values for the bind variables.

This technique is more appropriate with OCI and precompilers, because they give you more control over cursor processing. A dbms_sql sketch of the same parse-once idea follows.
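In PL/SQL itself, the closest equivalent is the dbms_sql pattern described next: parse once, then bind and execute repeatedly. A minimal sketch (the UPDATE statement, raise percentage, and department numbers are illustrative):

DECLARE
   cur_hdl   INTEGER;
   rows_done BINARY_INTEGER;
BEGIN
   cur_hdl := DBMS_SQL.OPEN_CURSOR;

   -- Parse the statement once ...
   DBMS_SQL.PARSE(cur_hdl,
                  'UPDATE emp SET sal = sal * :pct WHERE deptno = :dept',
                  DBMS_SQL.NATIVE);

   -- ... then bind and execute it repeatedly with different values
   FOR i IN 1 .. 3 LOOP
      DBMS_SQL.BIND_VARIABLE(cur_hdl, ':pct', 1.05);
      DBMS_SQL.BIND_VARIABLE(cur_hdl, ':dept', i * 10);
      rows_done := DBMS_SQL.EXECUTE(cur_hdl);
   END LOOP;

   DBMS_SQL.CLOSE_CURSOR(cur_hdl);
END;
/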

In PL/SQL, this technique can be used with the dbms_sql package, in which the interface is similar to OCI. Once a statement is parsed with dbms_sql.parse, it can be executed multiple times.

OCI and precompilers have the ability to send and retrieve data using host arrays. With this technique, large amounts of data can travel over the network as one unit rather than taking several trips. While PL/SQL does not directly use this array interface, if you are calling PL/SQL from OCI or precompilers, take advantage of it. Also, use the RETURNING clause.

Summary
There are several methods to help you tune your PL/SQL application. When tuning PL/SQL code, consider using the RETURNING clause and/or bulk binds to improve processing. Be aware of conditional statements with an OR clause: place the condition that is fastest to evaluate first. There are several data type and constraint issues that can help in tuning an application. Consider using native dynamic SQL over dbms_sql. Finally, you can trace your PL/SQL code by using the Oracle-supplied package dbms_trace.

Use the Oracle-supplied package dbms_shared_pool to pin frequently used objects in the shared pool. You can reduce network traffic by:
Reducing memory usage
Using client-side PL/SQL
Avoiding unnecessary parsing
Utilizing array processing

Practice 7
1. You have a type called bank_account that you use very often. You would like to pin the type so that it does not get aged out of memory.
   a. Write the syntax using dbms_shared_pool to pin the type.
   b. Write the syntax to see the objects in the shared pool that are larger than 200 kilobytes.

2. Procedure total_comp computes an employee's total compensation based on the employee's salary and commission earnings. Not every employee earns a commission. Variable v_comm is constrained by NOT NULL to test for a NULL commission value.

CREATE OR REPLACE PROCEDURE total_comp (p_empno IN NUMBER) IS
   v_sal        NUMBER;
   v_comm       NUMBER NOT NULL := 0;
   v_total_comp NUMBER;
BEGIN
   SELECT sal, comm
   INTO   v_sal, v_comm
   FROM   emp
   WHERE  empno = p_empno;
   v_total_comp := (v_sal + v_comm);
   DBMS_OUTPUT.PUT_LINE('Total compensation for employee ' || p_empno ||
                        ' is: ' || v_total_comp);
EXCEPTION
   WHEN NO_DATA_FOUND THEN
      DBMS_OUTPUT.PUT_LINE('No such employee exists!');
END total_comp;
/

   a. Modify the procedure code to avoid the NOT NULL constraint overhead, and test for the nullity of the commission value in the execution section. Open p7q2a.sql for the preceding code. Save your code as lsn7_2.sql.
   b. There are temporary employees who are unpaid interns and do not earn any salary or commission. Without using the NOT NULL constraint, modify your code in lsn7_2.sql to test for the nullity of both the salary and commission values.

3. Procedure add_dept uses dbms_sql to insert records into the my_dept table. Modify the code in the procedure to use native dynamic SQL instead of dbms_sql.
   a. Run the cre_my_dept_tab.sql script to create the my_dept table.
   b. Modify the procedure code in p7q3b.sql to use EXECUTE IMMEDIATE instead of dbms_sql. The current code follows:

CREATE OR REPLACE PROCEDURE add_dept (
   table_name VARCHAR2,
   deptnumber NUMBER,
   deptname   VARCHAR2,
   location   VARCHAR2) IS
   cur_hdl        INTEGER;
   stmt_str       VARCHAR2(200);
   rows_processed BINARY_INTEGER;
BEGIN
   stmt_str := 'INSERT INTO ' || table_name ||
               ' VALUES (:deptno, :dname, :loc)';
   -- open cursor
   cur_hdl := dbms_sql.open_cursor;
   -- parse cursor
   dbms_sql.parse(cur_hdl, stmt_str, dbms_sql.native);
   -- supply binds
   dbms_sql.bind_variable(cur_hdl, ':deptno', deptnumber);
   dbms_sql.bind_variable(cur_hdl, ':dname', deptname);
   dbms_sql.bind_variable(cur_hdl, ':loc', location);
   -- execute cursor
   rows_processed := dbms_sql.execute(cur_hdl);
   -- close cursor
   dbms_sql.close_cursor(cur_hdl);
END;
/

4. Open p7q4a.sql. The PL/SQL block inserts all elements of the index-by tables into the parts table, using a FOR loop.
   a. Modify the code in p7q4a.sql. Add additional code to insert all the elements from the pnums and pnames index-by tables into the parts table, using bulk binding.

DROP TABLE parts
/
SET SERVEROUTPUT ON

CREATE TABLE parts (pnum NUMBER(4), pname CHAR(15))
/
DECLARE
   TYPE NumTab  IS TABLE OF NUMBER(4) INDEX BY BINARY_INTEGER;
   TYPE NameTab IS TABLE OF CHAR(15)  INDEX BY BINARY_INTEGER;
   pnums  NumTab;
   pnames NameTab;
   t1 CHAR(5);
   t2 CHAR(5);
   t3 CHAR(5);
   PROCEDURE get_time (t OUT NUMBER) IS
   BEGIN
      SELECT TO_CHAR(SYSDATE, 'SSSSS') INTO t FROM dual;
   END;
BEGIN
   FOR j IN 1..5000 LOOP   -- load index-by tables
      pnums(j)  := j;
      pnames(j) := 'Part No. ' || TO_CHAR(j);
   END LOOP;
   get_time(t1);
   FOR i IN 1..5000 LOOP   -- use FOR loop
      INSERT INTO parts VALUES (pnums(i), pnames(i));
   END LOOP;
   get_time(t2);
   -- insert bulk binding statement here
   get_time(t3);
   DBMS_OUTPUT.PUT_LINE('Execution Time (secs)');
   DBMS_OUTPUT.PUT_LINE('---------------------');
   DBMS_OUTPUT.PUT_LINE('FOR loop: ' || TO_CHAR(t2 - t1));
   DBMS_OUTPUT.PUT_LINE('FORALL:   ' || TO_CHAR(t3 - t2));
END;
/

   b. Execute the code to display the amount of execution time for the FOR loop and the bulk binding technique.
