|Everything Oracle|
|Oracle 11g PL/SQL Performance Tuning Checklist|
The spectrum of uses for PL/SQL ranges from database focused tasks, the “bread and butter” of transactional business databases, to the computationally intensive tasks that are more often found in engineering and scientific databases. PL/SQL is an interpreted language and when it was first launched its performance was relatively poor all round. It lacked the optimizations needed to efficiently process queries and DML. And in the case of compute-intensive tasks, performance was up to two orders of magnitude worse than that available from a natively compiled language.
The data-focused performance problems have disappeared over the years as array processing optimization techniques, such as BULK COLLECT and FORALL, have been introduced. And the compute-intensive performance gap has narrowed significantly with the introduction of native compilation, compute-friendly data types, such as SIMPLE_INTEGER, and the function result cache that is now available in 11g – though converting a compute-intensive algorithm from PL/SQL to Java can still improve performance by a factor of, perhaps, five. Determining where performance problems are occurring within PL/SQL code has also become easier as the original line-based profiler, DBMS_PROFILER, has been augmented in 11g by a hierarchical profiler, DBMS_HPROF, making it possible to determine the calling context and to separate out the overheads associated with subroutine calls.
Even when the procedural element of PL/SQL is not essential and the functionality could be implemented declaratively by means of SQL, using PL/SQL can still have advantages, as PL/SQL, acting as a wrapper for SQL statements, can be used to reduce levels of network traffic. The Oracle optimizer has improved over the years and hints can be used to direct its optimization path to some extent. However, when dealing with complex SQL statements it can be very difficult to produce the desired result. Even when hints do work they make SQL optimizations “fragile”, as a hinted optimization can easily break without any obvious means of repair if the SQL statement is changed even slightly. By using PL/SQL to break up a complex SQL statement into components that can be optimized individually, much greater control can be exercised, and the results are robust when the code is subsequently modified to meet new requirements.
The first point to make about PL/SQL tuning is that – compute-intensive tasks aside – tuning the SQL statements called from within PL/SQL is likely to have a far bigger impact on performance than tuning of the PL/SQL code itself, so the tuning of embedded SQL should be attempted first.
|Data Focused Optimizations|
Processing a large number of rows one at a time can lead to performance overheads, as the same block may be requested multiple times, leading to redundant logical reads. Array processing, using the BULK COLLECT clause with the LIMIT clause, allows rows to be fetched more efficiently in groups.
In 11g, with at least the default level of optimization (PLSQL_OPTIMIZE_LEVEL set to 2 or higher), cursor FOR loops are automatically rewritten to make use of BULK COLLECT with a LIMIT of 100. As the optimization is not too sensitive to the LIMIT value, the default optimization should be adequate in most cases (using very large values for LIMIT, or omitting it altogether, can cause memory contention).
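Where explicit control over the fetch size is wanted, the loop can be written by hand. A minimal sketch, assuming a hypothetical employees table with a last_name column:

```sql
DECLARE
  CURSOR c_emp IS
    SELECT last_name FROM employees;
  TYPE t_names IS TABLE OF employees.last_name%TYPE;
  v_names t_names;
BEGIN
  OPEN c_emp;
  LOOP
    -- Fetch up to 100 rows at a time into the collection
    FETCH c_emp BULK COLLECT INTO v_names LIMIT 100;
    EXIT WHEN v_names.COUNT = 0;
    FOR i IN 1 .. v_names.COUNT LOOP
      NULL; -- process v_names(i)
    END LOOP;
  END LOOP;
  CLOSE c_emp;
END;
/
```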
In the case of DML operations that manipulate a large number of rows, the FORALL statement can be used to enforce array processing (note that array optimization is not applied automatically to DML operations).
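A sketch of FORALL against the same hypothetical employees table; the collection of ids drives a single batched UPDATE:

```sql
DECLARE
  TYPE t_ids IS TABLE OF employees.employee_id%TYPE;
  v_ids t_ids := t_ids(101, 102, 103); -- hypothetical ids
BEGIN
  -- One context switch sends the whole batch of updates to the SQL engine
  FORALL i IN 1 .. v_ids.COUNT
    UPDATE employees
       SET salary = salary * 1.1
     WHERE employee_id = v_ids(i);
END;
/
```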
Trigger performance can be improved by ensuring that the PL/SQL trigger block is only entered when the trigger code will cause a change to the database. In the case of UPDATE statements, the UPDATE OF column_list clause can be used to restrict the trigger to fire only when the relevant subset of table columns is updated, and the WHEN clause can be used to exclude block entry on the basis of an arbitrary predicate.
Where a choice is available, an AFTER ROW trigger is usually better than a BEFORE ROW trigger, as a BEFORE ROW trigger will pre-emptively lock the context row.
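The two restricting clauses can be sketched together; the trigger, table, and column names here are hypothetical:

```sql
-- Fires only when SALARY is updated, and only for rows where it actually changes
CREATE OR REPLACE TRIGGER emp_salary_audit
AFTER UPDATE OF salary ON employees
FOR EACH ROW
WHEN (NEW.salary <> OLD.salary)
BEGIN
  INSERT INTO salary_audit (employee_id, old_salary, new_salary)
  VALUES (:OLD.employee_id, :OLD.salary, :NEW.salary);
END;
/
```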
Using bind variables reduces the parsing overhead and contention for latches and mutexes, by allowing a single parsed representation of a SQL statement to be executed many times. In the case of SQL statements that reference values by means of PL/SQL variables, there is no need to explicitly introduce bind variables, as PL/SQL variable references are automatically treated as bind variable references in most cases. The exception is dynamic SQL. In this case a bind variable reference must be added explicitly to the SQL statement and referenced in a USING clause.
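A sketch of the two cases, using a hypothetical employees table: in the static statement the PL/SQL variable is bound automatically, while the dynamic statement needs an explicit placeholder and a USING clause:

```sql
DECLARE
  v_dept  NUMBER := 10;
  v_count NUMBER;
BEGIN
  -- Static SQL: v_dept becomes a bind variable automatically
  SELECT COUNT(*) INTO v_count
    FROM employees
   WHERE department_id = v_dept;

  -- Dynamic SQL: the placeholder :dept must be bound explicitly
  EXECUTE IMMEDIATE
    'SELECT COUNT(*) FROM employees WHERE department_id = :dept'
    INTO v_count
    USING v_dept;
END;
/
```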
Data Type Selection
Very considerable performance gains can be achieved by avoiding automatic data type conversions, and by selecting the most restrictive data type that will adequately represent the set of values required.
In the case of floating point numbers, either BINARY_FLOAT or BINARY_DOUBLE is more efficient than NUMBER. In the case of integers, PLS_INTEGER is more efficient than NUMBER, and SIMPLE_INTEGER is more efficient than PLS_INTEGER (SIMPLE_INTEGER can produce a performance gain over NUMBER by a factor of about 5 for interpreted code and about 30 for machine code).
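For example, a counter that never needs the range or NULL-ability of NUMBER can be declared as SIMPLE_INTEGER (note that SIMPLE_INTEGER is NOT NULL and wraps on overflow rather than raising an exception):

```sql
DECLARE
  v_total SIMPLE_INTEGER := 0; -- NOT NULL, 32-bit, wraps on overflow
BEGIN
  FOR i IN 1 .. 1000000 LOOP
    v_total := v_total + 1; -- no overflow or NULL checks in the loop body
  END LOOP;
END;
/
```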
Call by Reference
Like many other programming languages, PL/SQL allows parameters to be passed by reference or by value. The default method is by value. However, when passing large values, such as PL/SQL collections, into or back from a subprogram, there can be a considerable performance overhead. The IN OUT NOCOPY and OUT NOCOPY constructs can be used to request that the corresponding parameter be passed by reference rather than by value (NOCOPY is a hint, which the compiler is free to ignore). However, passing values by reference is generally best avoided when the values are small, as it weakens encapsulation and leads to more brittle code.
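A sketch of NOCOPY with a large collection parameter; the type and subprogram names are hypothetical:

```sql
DECLARE
  TYPE t_nums IS TABLE OF NUMBER;

  -- NOCOPY hints that p_data should be passed by reference,
  -- avoiding a copy of the whole collection on entry and exit
  PROCEDURE fill(p_data IN OUT NOCOPY t_nums) IS
  BEGIN
    FOR i IN 1 .. p_data.COUNT LOOP
      p_data(i) := i;
    END LOOP;
  END;

  v_data t_nums := t_nums();
BEGIN
  v_data.EXTEND(100000);
  fill(v_data);
END;
/
```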
High levels of recursion should be avoided as each call requires the allocation of additional PGA memory, decreasing performance and reducing the memory available for use by other processes.
Loop Termination and Optimization
As in all programming languages, it makes sense to remove invariant statements from within loops and to exit loops as soon as possible with an EXIT statement once further iterations are no longer required.
Again, as in all programming languages, it makes good sense to order expressions combined by AND from the most likely to be false to the least likely to be false, and to order expressions combined by OR, or expressions within IF and CASE statements, from the most likely to be true to the least likely to be true.
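A sketch of the ordering rule, relying on PL/SQL's short-circuit evaluation; slow_check is a hypothetical stand-in for an expensive test:

```sql
DECLARE
  v_status VARCHAR2(10) := 'CLOSED';

  FUNCTION slow_check(p_status VARCHAR2) RETURN BOOLEAN IS
  BEGIN
    RETURN TRUE; -- stands in for an expensive test
  END;
BEGIN
  -- The cheap test, most likely to be FALSE, goes first: AND stops
  -- evaluating at the first FALSE, so slow_check is usually skipped
  IF v_status = 'ACTIVE' AND slow_check(v_status) THEN
    NULL;
  END IF;
END;
/
```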
Since 10g it has been possible to specify the level of optimization that occurs during compilation (whether compilation to PL/SQL bytecode or to native, machine code). The optimization level is set using parameter PLSQL_OPTIMIZE_LEVEL. To find the current value for the session:
SHOW PARAMETER PLSQL_OPTIMIZE_LEVEL
To set the current value for the session:
ALTER SESSION SET PLSQL_OPTIMIZE_LEVEL = <n>;
To recompile a previously compiled procedure or function after changing the session value:
ALTER ( PROCEDURE | FUNCTION ) <name> COMPILE;
To determine the value associated with a previously compiled procedure or function, select PLSQL_OPTIMIZE_LEVEL from view USER_PLSQL_OBJECT_SETTINGS.
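For example, assuming a previously compiled procedure named MY_PROC:

```sql
SELECT plsql_optimize_level, plsql_code_type
  FROM user_plsql_object_settings
 WHERE name = 'MY_PROC'; -- MY_PROC is a hypothetical routine name
```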
The optimizations performed for different settings of PLSQL_OPTIMIZE_LEVEL (default value 2) are not discussed in any detail, but in 11g:
0 – Oracle 9i compatibility level, with very little optimization and no side effects
1 – Removes unnecessary computations and exceptions, but does not reorder code
2 – Removes invariants from loops, transforms cursor FOR loops to BULK COLLECT with a LIMIT of 100
3 – Inlines subroutines
By default, when a PL/SQL procedure or function is compiled, it is compiled to PL/SQL bytecode and executed by the PL/SQL interpreter engine. For compute-intensive routines a performance gain of perhaps 5-15% can be obtained by compiling the routine into native machine code instead. In 11g, native compilation has been made very easy. The compilation mode is determined by the value of parameter PLSQL_CODE_TYPE, which has values of INTERPRETED (default) or NATIVE.
To find the current value for the session:
SHOW PARAMETER PLSQL_CODE_TYPE
To set the current value for the session:
ALTER SESSION SET PLSQL_CODE_TYPE = ( INTERPRETED | NATIVE );
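An individual routine can also be recompiled natively without changing the session default; the procedure name here is hypothetical:

```sql
ALTER PROCEDURE my_proc COMPILE PLSQL_CODE_TYPE = NATIVE;
```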
The performance of PL/SQL code for compute-intensive tasks has increased dramatically – by a factor of up to 100 – since PL/SQL was first launched. However, a Java program called from a PL/SQL wrapper will still outperform a natively compiled PL/SQL implementation that makes the best use of performant data types, such as SIMPLE_INTEGER, by a factor of perhaps 5.
Inlining is a common compiler optimization technique in which a call to a subroutine is replaced by the code contained within the subroutine. Some inlining is done automatically when the value of PLSQL_OPTIMIZE_LEVEL is set to 3. Inlining can be requested manually by including the directive:
PRAGMA INLINE ( <function name>, 'YES' );
before a call to a PL/SQL procedure or function. Performance gains tend to be modest, at around the 50% mark.
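A sketch of the directive in use; add_tax is a hypothetical local function:

```sql
DECLARE
  v_total NUMBER := 0;

  FUNCTION add_tax(p_amount NUMBER) RETURN NUMBER IS
  BEGIN
    RETURN p_amount * 1.2;
  END;
BEGIN
  FOR i IN 1 .. 100000 LOOP
    -- Request that the following call to add_tax be inlined
    PRAGMA INLINE (add_tax, 'YES');
    v_total := v_total + add_tax(i);
  END LOOP;
END;
/
```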
One of the new features in 11g is a function result cache for PL/SQL functions. The RESULT_CACHE
clause is added to the function definition. A mapping between the function's input parameters and the function's return value is held in memory. This mapping is looked up on each function call, and the mapped value is used if available; otherwise, the function is executed in the normal manner, and the input parameter values and return value are added to the mapping. Function caching is only of value if the function is likely to be called repeatedly with the same parameter values. Clearly, the function must be deterministic and must not depend on side effects, such as accessing a changing global package variable or a random number generator. A cached function can also read database tables; although the table data should be relatively stable given the frequency of function calls, it is allowed to change, because the cached results are automatically invalidated when a table named in the:
RELIES_ON ( <table name> )
clause of the function definition is modified. Function caching can give a performance gain of up to a factor of about 5.
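A sketch of a cached lookup function; the rates table and function name are hypothetical:

```sql
CREATE OR REPLACE FUNCTION get_rate(p_code VARCHAR2)
  RETURN NUMBER
  RESULT_CACHE RELIES_ON (rates)
IS
  v_rate NUMBER;
BEGIN
  -- Cached on p_code; invalidated automatically when RATES changes
  SELECT rate INTO v_rate FROM rates WHERE code = p_code;
  RETURN v_rate;
END;
/
```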
|Identifying Performance Issues|
While some performance optimizations will usually be built into PL/SQL routines when they are first written, by the stress testing phase of a project we need to know whether PL/SQL code execution makes a significant demand on system resources and, if so, which PL/SQL routines are the worst offenders, and which statements within those routines are the most resource hungry.
We can determine the overall percentage of system resources devoted to PL/SQL execution (excluding the execution of embedded SQL) by selecting "DB time" and "PL/SQL execution elapsed time" from view V$SYS_TIME_MODEL.
The execution time for individual PL/SQL routines can be obtained by selecting PLSQL_EXEC_TIME from view V$SQL.
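Both measurements can be taken with simple queries:

```sql
-- Overall share of time spent executing PL/SQL
SELECT stat_name, value
  FROM v$sys_time_model
 WHERE stat_name IN ('DB time', 'PL/SQL execution elapsed time');

-- Statements ranked by time spent in PL/SQL execution
SELECT sql_id, plsql_exec_time, SUBSTR(sql_text, 1, 60) AS sql_text
  FROM v$sql
 ORDER BY plsql_exec_time DESC;
```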
Within a PL/SQL routine we can determine the time taken by individual statements by using DBMS_PROFILER:
DECLARE
  v_result BINARY_INTEGER;
BEGIN
  v_result := DBMS_PROFILER.start_profiler( '<log entry name>' );
  -- <code to be profiled>
  v_result := DBMS_PROFILER.stop_profiler;
END;
and then selecting the results from tables PLSQL_PROFILER_RUNS, PLSQL_PROFILER_UNITS, and PLSQL_PROFILER_DATA. This profiling functionality is built into most IDEs that support PL/SQL development.
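The per-line results can then be queried from the profiler tables (times are in nanoseconds):

```sql
SELECT u.unit_name, d.line#, d.total_occur, d.total_time
  FROM plsql_profiler_units u
  JOIN plsql_profiler_data d
    ON d.runid = u.runid
   AND d.unit_number = u.unit_number
 WHERE u.runid = (SELECT MAX(runid) FROM plsql_profiler_runs)
 ORDER BY d.total_time DESC;
```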
PL/SQL Statement Context and Components
Oracle 11g comes with an enhanced hierarchical profiler, DBMS_HPROF, that provides more context about the execution of each PL/SQL statement, such as the calling routine and the contribution to the elapsed time that comes from subroutines. This profiling functionality is also built into most IDEs that support PL/SQL development, such as SQL Developer.
Copyright © 2007-2015 PWG Consulting, All Rights Reserved