Hyper-Extended Oracle Performance Monitor 3 (Beta)
Requirements:
Windows 2000-Vista, MDAC/ADO 2.8, VB 6 runtime, connection to the Oracle database by
database SID, Excel 2000-2007, Microsoft Grid control, TIMED_STATISTICS set to TRUE.
Most features require DBA permissions (SYSADM or SYS).
Caution:
This program must not run on the Oracle database server (expect to be looking for a different job
if you do not heed this warning), nor should the logging features run on a computer that is
simultaneously running Visual (the logging will skew the accuracy of the data being logged).
The logging and reporting phase of the program is capable of pegging a 3.8GHz CPU at 100%
utilization for up to an hour, when all reporting options are selected. The Oracle database server
will sustain a slight performance penalty during the logging and reporting phase – no information
is written to the Oracle database.
This software program contains known bugs/feature limitations. This software is authorized for
use by employees of companies to analyze Visual databases containing data owned by their
company.
Description:
This program may only be used with Visual Manufacturing/Enterprise Oracle databases.
Provides Visual Manufacturing/Enterprise Oracle database extended monitoring and reporting.
To enable extended monitoring, click the Start Smart Logging button and select the detail to
include in the log. To disable extended monitoring, click the Stop Logging button. To generate
an Excel based report from the logged performance data: Stop logging, select the detail to
include in the report, click the Report button. Give the program sufficient time to build the
report, which may require one minute to an hour depending on report options selected; once the
report is built, the report will be assigned a file name in Excel. Logging results may be
graphically reviewed using the Review button. Additionally, real time graphical monitoring,
with output to log files every 30 minutes, may be enabled by selecting Real Time Performance.
DBMS Xplans with optional trace data may be generated for raw SQL statements and bind
variables which are extracted from raw trace files. 10046 trace files offer extensive data and
performance analysis opportunities. The SQL Monitor provides a quick view of high load SQL
statements with execution plans and bind variable definitions.
No information is written to the Oracle database. A self-contained Microsoft Access compatible
database is built with the file name C:\OracleLog\OracleStatus.mdb. Previous versions of the
Access compatible database are renamed based on the file date of the previous database when a
new database is created. All other logging data generated by the program is saved in plain text
format in the same folder.
Real Time Performance:
Various performance monitoring views are present in an Oracle database, but it is difficult to
collect and properly time scope the various statistics to show conclusive impact of excessive hard
parsing, improper execution plans, ad-hoc SQL without proper joins, improper sizing of the SGA
and UGA, excessive commit rate, etc. The Real Time Performance monitoring provides a bird’s-eye view of the database, which may serve as the starting point for drill-down, in-depth analysis
of performance problems. Sid and Serial numbers for sessions are provided to assist in the
activation of 10046 trace files for specific sessions. See the On-Screen Real-Time Monitoring
section below for additional information.
DBMS Xplan and Trace:
Oracle 9i, 10g, and 11g provide an enhanced method by which the expected and actual execution
plans of specified SQL statements may be retrieved. Due to the cost based optimizer’s default
predicted selectivity of bind variables in predicates being set to 5%, the potential for bind
variable peeking, and the presence of histograms, the actual execution plan may be dependent on
whether constants are specified rather than bind variables, or even based on the value of the bind
variable values. The execution plan developed during the initial hard parse is reused during
future executions of the same SQL statement with different bind variable values. While the
execution plan may have been optimal for the original bind variable values, the plan may not be
appropriate as the bind variable values change. The DBMS Xplan and Trace functionality in this
program utilizes the DBMS_XPLAN package to generate execution plans for SQL statements
with supplied bind variable values. Once a SQL statement has been identified as resource
intensive, per analysis of a 10046 trace file, the raw SQL statement and bind variable definitions
from the output of the 10046 trace file analysis may be used to determine the execution plan,
generate a 10053 trace file to determine why the execution plan was selected, and explore the use
of Oracle hints to alter various optimizer behavior that caused the poor execution plan. The
DBMS_XPLAN package is called with STATISTICS_LEVEL set to ALL, which may slightly
increase execution time, while improving timing accuracy in the execution plan.
Type or paste the SQL statement into the SQL Statement to Execute box. The SQL statement
can be copied directly from a 10046 trace file, from the output of an analyzed 10046 trace file, or
may be typed manually. Bind variable placeholders are typically indicated in the SQL statement
by a space, then a colon followed by a number. Oracle generated bind variables, caused by
CURSOR_SHARING = FORCE/SIMILAR, are indicated in SQL statements by a space,
followed by :"SYS_B_##" where ## is a number. Bind variables that appear in the ORDER BY
clause need to be removed and replaced by their constant equivalent (3 in the above screenshot).
Type or paste the bind variable definitions from the 10046 trace file (generated at level 4 or 12),
or from the output of an analyzed 10046 trace file into the Bind Variable Definitions from
10046 Trace File box. The bind variable definitions must be translated into a simplified list of
bind variable types and values for execution. Click the Translate Bind variables button to
automatically translate the bind variable definitions into a simplified list.
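The translation step can be sketched in a few lines of Python (illustrative only – the program itself is a Visual Basic application, and the datatype-code table below is an assumption based on the commonly documented 10046 bind metadata format, not on this program's source):

```python
import re

# Oracle internal datatype codes (a small subset) mapped to readable names.
# This table is an assumption based on published 10046 trace descriptions.
DTY_NAMES = {1: "VARCHAR2", 2: "NUMBER", 12: "DATE", 96: "CHAR"}

def translate_binds(trace_lines):
    """Reduce raw Bind# metadata sections from a level 4/12 trace to a
    simplified list of bind variable type, max length, and value."""
    binds = []
    current = None
    for line in trace_lines:
        if line.strip().startswith("Bind#"):
            current = {}
            binds.append(current)
        elif current is not None:
            m = re.search(r"oacdty=(\d+)\s+mxl=(\d+)", line)
            if m:
                current["type"] = DTY_NAMES.get(int(m.group(1)), "OTHER")
                current["max_len"] = int(m.group(2))
            m = re.search(r"value=(.*)", line)
            if m:
                current["value"] = m.group(1).strip().strip('"')
    return binds
```

The scale and precision fields (scl= and pre=) would be extracted the same way; they are omitted here to keep the sketch short.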
The Translated Bind Variable Definitions list is used for actual execution of the SQL
statement. The list may be generated automatically by translating bind variable definitions, or by
manually entering each bind variable’s type, max length, scale, precision, and value.
Oracle’s DBMS_XPLAN package supports four levels of detail, with Typical and All being the
most commonly used. The typical detail level only shows what Oracle believed would happen
based on the statistics that were available, and the predicted cardinality at each stage of the
execution plan. The All detail level compares the expected plan (from a 10053 trace) with the
actual row source plan (from a 10046 trace) – this indicates how accurate Oracle’s cost based
optimizer predicted the cardinality in the execution plan.
The above shows the DBMS_XPLAN output of a SQL statement at the Typical detail level – the
values shown are estimates based on predicted cardinality at each step. Operations at the greatest
level of indent are performed first, and when two operations near each other are at the same level
of indent, the operations are performed in a top-down manner. The Predicate information
section lists the restrictions in effect at each of the indicated operations. For instance, at ID 4
there is an INDEX RANGE SCAN operation on the X_PAYABLE_2 index that is predicted to
return one row – the predicate information indicates that the method is access (pre-restricting the
data returned without having to retrieve all data) by using only the ENTITY_ID column – this is
logical if there are 50+ unique ENTITY_ID values. It was predicted that this SQL statement
would require 0.01 seconds to execute, with no temp tablespace usage (since the column was not
included in the DBMS_XPLAN output).
The above shows the DBMS_XPLAN output of the same SQL statement at the All detail level –
the values shown are based on the actual cardinality at each step. Taking another look at ID 4 in
the plan, Oracle predicted that only one row would be returned from the X_PAYABLE_2 index,
yet 151,000 rows were returned. The SQL statement required 0.81 seconds to execute, rather
than the 0.01 seconds that was predicted. If this SQL statement is executed frequently, the 0.81
seconds per execution will quickly grow to become a significant duration of time.
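The Typical versus All comparison above amounts to a cardinality misestimate check at each plan step. A hypothetical helper (the name and the idea of a single ratio are illustrative, not part of the program) that flags a step worth investigating:

```python
def misestimate_ratio(estimated_rows, actual_rows):
    """Ratio between estimated and actual cardinality for one plan step,
    always >= 1. A large value in either direction (for example, 1 row
    estimated versus 151,000 returned) flags a step worth investigating."""
    return max(estimated_rows, actual_rows) / max(min(estimated_rows, actual_rows), 1)
```

For the ID 4 step discussed above, misestimate_ratio(1, 151000) evaluates to 151,000 – a strong hint that the optimizer's statistics or assumptions are wrong for that predicate.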
Correcting this problem requires Oracle to avoid using the X_PAYABLE_2 index – a couple of
methods will be explored below.
In addition to displaying the DBMS_XPLAN output, it is possible to simultaneously generate
several types of trace files that may be useful for further analysis. 10046 Trace with Binds
(level 4 trace) causes Oracle to output the SQL statement, bind variable values, and row source
information that indicates the actual execution plan statistics. 10046 Trace with Waits (level 8
trace) causes Oracle to output the SQL statement, wait events from instrumented portions of
Oracle’s code (cause and effect of the delays), and row source information that indicates the
actual execution plan statistics. 10046 Trace with Waits and Binds (level 12 trace) allows
Oracle to output maximum detail with both bind variable values and wait events. Understanding
raw 10046 trace file output takes a bit of effort, as the bind variables, parse, execute, fetch, wait,
and stat lines for a single SQL statement will likely be separated by similar output for other SQL
statements (see “Optimizing Oracle Performance” by Cary Millsap). Nothing will appear in a
10053 trace file unless Oracle performs a hard parse for the query. 10053 trace files are helpful
for determining how Oracle developed the execution plan for a particular SQL statement. A
10053 trace file at level 1 provides the greatest amount of detail, while a 10053 trace at level 2
provides slightly less information. Understanding the contents of a 10053 trace file requires a
great deal of practice (see “Cost-Based Oracle Fundamentals” by Jonathan Lewis). 10032 and
10033 trace files provide insight into the proper setting of the SORT_AREA_SIZE for specific
SQL statements. 10104 trace files are helpful for the proper setting of HASH_AREA_SIZE. All
trace file output generated for a SQL statement is directed to the same trace file, which is given
an identifier that indicates the trace file date and time. Trace files are generated in the udump
folder on the database server.
Oracle’s predicted best execution plan may
occasionally not be the optimal execution
plan. Hints provide an efficient method to
improve the execution plan. Hints, such as
the NO_INDEX hint, are helpful to prevent a
specific index from being used. For instance,
/*+ NO_INDEX(PAYABLE X_PAYABLE_2) */
prevents Oracle from using the
X_PAYABLE_2 index, which will either
force a different index to be used or a full
tablescan for data retrieval.
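Programmatically injecting such a hint into a statement can be sketched as follows (a simplified illustration – a robust version would need a real SQL parser to handle leading comments, existing hints, and subqueries):

```python
import re

def add_hint(sql, hint):
    """Insert an optimizer hint comment immediately after the first SELECT
    keyword. Assumes the statement begins with SELECT and carries no
    existing hint - a simplification for illustration."""
    return re.sub(r"(?i)^\s*select\b",
                  lambda m: m.group(0) + " /*+ " + hint + " */",
                  sql, count=1)
```

For example, add_hint(sql, "NO_INDEX(PAYABLE X_PAYABLE_2)") rewrites the statement so that it begins SELECT /*+ NO_INDEX(PAYABLE X_PAYABLE_2) */ ...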
Configure Data Change Logging:
The Data Change Logging feature is
used to build SQL*Plus compatible
scripts that create logging tables,
triggers to populate the logging tables,
and optionally a database schema and
tablespace for storing the logged data.
Table Owner specifies the user
schema where the table to be logged
is located, which will usually be the
SYSADM schema. Selecting a table
owner automatically retrieves all
tables in the selected schema.
Table Name indicates the table that
will be monitored for changes.
Selecting a table automatically
retrieves the column definitions for
the table.
Schema to Contain Data Log specifies the user/schema that will hold all logging tables; this
value defaults to DATALOG. This user/schema should not be the SYSADM schema.
Comments in the generated script indicate how to create the DATALOG user, and optionally a
separate tablespace to contain the logging data.
In the column definitions, the PK column indicates whether or not the column is part of the
table’s primary key. By default, all primary key columns are set to log new values, with no
trigger on change. Primary key columns are special, as they are columns that hold values that
should never change for a particular row. As such, it is only necessary to log the new values for
these columns – the logging column will be populated, even when a row is deleted in the table.
Trigger on Change indicates whether or not a change in the column’s data will cause the trigger
to activate and log the specified values. Visual has a tendency to update columns to the same
value, which will cause the trigger to fire unnecessarily, so an IF statement is added to the trigger
to verify that the data contained in the column actually changed.
Log Values indicates what columns will be included in the logging table. Both New and Old
specifies that before and after snapshots of the column data should be stored (in the OLD_ and
NEW_ prefixed column name). New Values indicates that only the after snapshot of the column
data should be stored (in the NEW_ prefixed column name). Old Values indicates that only the
before snapshot of the column data should be stored (in the OLD_ prefixed column name).
The transaction type – Update (U), Insert (I), or Delete (D) – is indicated for each transaction, as
well as the user account generating the transaction and the system time of the transaction.
Update transactions (changing the value of table data) are able to log both new (replacement)
values and old (previous) values; as such, updates will populate both NEW_ and OLD_ columns
in the logging table. Delete transactions (removing a table row) are only able to log old
(previous) values, meaning that only the OLD_ columns in the logging table will contain data for
a transaction. Insert transactions (adding a table row) are only able to log new values, meaning
that only the NEW_ columns in the logging table will contain data for a transaction.
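The effect of the Log Values choices on the generated logging table can be sketched as follows (a simplification – the actual script generator also handles the single-column D and I options and the special treatment of primary key columns):

```python
def logging_columns(columns, log_values):
    """Build the logging-table column list for a monitored table.
    log_values maps column name -> 'both' | 'new' | 'old', a simplified
    subset of the program's Log Values options; unlisted columns are
    not logged."""
    out = []
    for col in columns:
        choice = log_values.get(col)
        if choice in ("both", "old"):
            out.append("OLD_" + col)
        if choice in ("both", "new"):
            out.append("NEW_" + col)
    return out
```

A column set to log both values yields an OLD_/NEW_ pair that an update populates fully, while a delete fills only the OLD_ column and an insert only the NEW_ column.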
Changing the value in the Trigger on Change column: use the spacebar or left mouse button to
toggle between Yes and blank (No). Y or + can be used to specify Yes, N or – can be used to
specify No.
Changing the value in the Log Values column: use the spacebar or left mouse button to cycle
through the six options (Both Old and New, New Values, Old Values, New Values and Delete,
Old Values and Delete, and not logged). Y, A, B, or + can be used to specify that both values
should be logged. N can be used to specify that new values should be logged, O can be used to
specify that old values should be logged, D can be used to specify that new values should be
captured on insert/update and old values on delete in a single column, I can be used to specify
that old values should be captured on delete/update and new values on insert in a single column,
and - can be used to specify that the column will not be logged.
The Create Data Log Script button causes the program to evaluate the selections and build a
SQL script file to record the selected items. Edit the generated script to contain the user account
passwords (search for MY_PASSWORD), save the file, and process it using SQL*Plus (follow
the directions in the script file). For example:
SQL> @C:\OracleLog\TRIGGER_DATALOG_VMFG_HPM_LOG_LABOR_TICKET.SQL
Additional commented-out IF checks are placed in the trigger if the selected table contains either
BASE_ID or WORKORDER_BASE_ID as part of the table’s primary key. This permits a quick
method of modifying the generated trigger to log only when changes are made to engineering
masters. If it is desired to log only when changes are made to engineering masters, remove the
-- entries found at the start of the IF line, and the corresponding -- entries in front of the END IF.
Important:
Always test new logging triggers in a test database, and verify that Visual continues to operate as
expected during update, insert, and delete operations. Logging into the test database with the
Hyper-Extended Oracle Performance Monitor program generates scripts that are specific to that
test database.
Keyword Search:
The Search feature performs a keyword search
on statistics, wait events, initialization
parameters, hints, and other topic categories.
Such a search may reveal relationships between
wait events, initialization parameters, and
workarounds/solutions for various problems.
SQL Monitor:
The SQL Monitor feature examines the shared
pool for potentially high load SQL statements.
Clicking on a row in the grid displays the SQL
statement, while double-clicking a row displays
the execution plan for all child cursors, the
reason why the child cursor was created, and the
defined bind variable types. Placing a check in
Log File writes the grid contents to a file on the
next refresh, and if SQL Statements is also
checked, the SQL statement will also be written
to the log file. CPU Time and Elapsed time are
only available on Oracle 9i R2 and above.
The Drag & Drop Extended Trace feature performs extensive analysis of 10046 trace files that
were captured at levels 4, 8, and 12. A minimum of four time coordinated analysis files are
generated for each trace file that is analyzed.
Several options are available to determine
what type of data is retrieved from the trace
file. If execution plans are not needed,
disable that feature to significantly improve
trace file analysis speed.
The wait events output is sent to Microsoft
Excel. The Trace Time column indicates
the time offset from the start of the trace
file at which the wait event appeared in the
trace file. The Wait Time column indicates
the duration of the wait event in 1/1000th of
a second. The Wait Event column indicates
the type of wait – a description of the wait
event appears at the bottom of the wait
events output. The wait events Raw Details
displays the wait event line from the trace
without interpretation. The Oracle Cursor Num column displays an identifier that can be used to
relate the wait event back to a specific SQL statement in the other analysis files.
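Parsing one wait event line from the raw trace can be sketched like this (the ela unit is version dependent – microseconds from 10g onward, centiseconds on 9i and earlier – so the resolution is passed in as an assumption the caller must supply):

```python
import re

# WAIT line shape in a 10046 trace, e.g.:
# WAIT #3: nam='db file sequential read' ela= 11700 file#=4 block#=129 blocks=1
WAIT_RE = re.compile(r"WAIT #(\d+): nam='([^']+)' ela=\s*(\d+)")

def parse_wait(line, unit=1e-6):
    """Extract cursor number, event name, and duration (in milliseconds)
    from a WAIT line. unit is the ela resolution in seconds: 1e-6 for
    10g and later traces, 1e-2 for 9i and earlier."""
    m = WAIT_RE.search(line)
    if not m:
        return None
    return {"cursor": int(m.group(1)),
            "event": m.group(2),
            "ms": int(m.group(3)) * unit * 1000}
```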
The SQL Parse Order analysis file lists
each SQL statement in the order in which
the application requested a parse call.
Parse, execute, fetch, bind variables, and
row source execution plan for each SQL
statement is listed together, along with a
summary per SQL statement parsed.
Cursor number is retrieved directly from
the 10046 trace file, and may not be
directly dependent on application coding.
Ver number indicates the number of cursors that were parsed at the Cursor number through this
point in the trace file. Parse at indicates the time offset in seconds from the start of the trace file
until the parse call was issued. TD Prev indicates the time difference between the current SQL
statement and the previous parse call. EXECs indicates the total number of execution calls from
the application for the current SQL statement. FETCHs indicates the total number of
fetch requests to retrieve rows from the database for the current SQL statement. CPU S
indicates the number of seconds of database server CPU time required for the SQL statement.
CLOCK S indicates the elapsed time for the SQL statement – the elapsed time will frequently
differ from the CPU time, unless the server’s CPU is the only bottleneck in the database.
ROWS indicates the number of rows retrieved or affected by the SQL statement. PHY RD
BLKs indicates the number of blocks that had to be read from disk to satisfy the SQL statement.
CON RD BLKs (Mem) indicates the number of blocks read from memory in consistent read
mode – a logical read, which is only roughly 100 times faster than a physical read due to
overhead related to consistent reads. CUR RD BLKs (Mem) indicates the number of blocks
read in current mode, which is considerably less expensive than a consistent read. SHARED
POOL MISS indicates the number of times a hard parse was required – hard parses that occur
during an execute or fetch call are especially expensive, and may be partially caused by setting
CURSOR_SHARING to SIMILAR.
Parse to Exec/Fetch indicates the time difference in seconds between the parse call for the SQL
statement and the Exec or Fetch call. CPU Time indicates the number of seconds of the server’s
CPU time that was required by the SQL statement for the particular parse, exec, or fetch call.
The Elapsed Time indicates the wall clock, actual amount of time required for the particular
parse, exec, or fetch call. Rows Affected/Retrieved indicates the number of rows impacted or
retrieved by the particular parse, exec, or fetch call. Blks from Buff indicates the number of
logical block reads from memory, which includes both consistent read and current modes. Blks
from Disk indicates the number of blocks obtained by a physical read request from the hard
drive. Bind Variables: indicates bind variable definitions supplied for the exec call. Trigger
Code/Application Code indicates the depth of the SQL statement. Those SQL statements with
dep=0 are executed by the application, while those with dep=1, dep=2, etc. are executed by
triggers, space management calls, and cost based optimizer calls.
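Accumulating the per-statement summary values described above from the raw PARSE/EXEC/FETCH lines can be sketched as follows (times are assumed to be in microseconds, the 9i R2 and later trace format):

```python
import re
from collections import defaultdict

# Database call lines, e.g.: EXEC #3:c=1000,e=1200,p=0,cr=5,cu=0,mis=0,r=1,dep=0,og=1,tim=...
LINE_RE = re.compile(r"^(PARSE|EXEC|FETCH) #(\d+):(.*)")

def summarize_calls(lines):
    """Total CPU seconds, elapsed seconds, and rows per cursor number,
    taken from the c=, e=, and r= fields of each database call line."""
    totals = defaultdict(lambda: {"cpu_s": 0.0, "clock_s": 0.0, "rows": 0})
    for line in lines:
        m = LINE_RE.match(line)
        if not m:
            continue
        fields = dict(kv.split("=") for kv in m.group(3).split(","))
        t = totals[int(m.group(2))]
        t["cpu_s"] += int(fields["c"]) / 1e6
        t["clock_s"] += int(fields["e"]) / 1e6
        t["rows"] += int(fields["r"])
    return dict(totals)
```

Grouping by cursor number alone is the simplification here; the real analysis must also track cursor reuse, since Oracle recycles cursor numbers within a trace file.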
The SQL Execution order analysis file outputs
the SQL statements and bind variable values in
the order of execution as the trace file is read.
Minimal diagnostic data is provided in this type
of analysis file, although it may help to
determine the looping structure of SQL
statements that are executed, where the output
of one SQL statement is fed in as the input for
another SQL statement.
The Grouping Similar SQL statements analysis
file attempts to group together SQL statements
that are parsed multiple times, rather than only
being parsed once and executed many times.
This type of analysis file accumulates the
statistics for the similar SQL statements,
indicating the percentage of the total execution
time each group represents. While individual
executions may total only 0.01 seconds for a SQL
statement, if the SQL statement is executed 30,000 times, the SQL statement should probably be
analyzed to determine if its execution plan is as efficient as possible. This analysis file allows
one to drill down to the root cause of the problematic SQL statement that represents the greatest
percentage of the total run time. Wait events are summarized at the beginning of the file, as well
as with each group of similar SQL statements.
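The grouping of similar SQL statements relies on a normalized form of each statement. One common approach (an approximation – the program's exact grouping rules are not documented here) is to replace literal values with placeholders:

```python
import re

def normalize_sql(sql):
    """Group key for similar SQL statements: string and numeric literals
    are replaced with placeholders so statements differing only in literal
    values collapse into one group. Bind placeholders such as :V00001 are
    left intact."""
    sql = re.sub(r"'[^']*'", ":S", sql)   # string literals
    sql = re.sub(r"\b\d+\b", ":N", sql)   # numeric literals
    return re.sub(r"\s+", " ", sql).strip().upper()
```

With a key like this, statements that differ only in embedded literal values – such as different OPERATION_SEQ_NO and PIECE_NO numbers – produce the same normalized text and are accumulated together.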
The Table and Index Lookup for Waits option attempts to determine object names identified in
the wait events. Omit SQLNet Idle Events option eliminates the various SQL*Net wait events
in the Excel file. The Omit Bind Variable Retrieval option prevents the reporting of bind
variable values from the trace. Omit Trigger Code (Dep > 0) prevents reporting of SQL
statements with dep greater than 0. Gen VB Code is experimental only – causes the program to
write Visual Basic code to reproduce the trace file. TKPROF causes the program to submit the
trace file to TKPROF for additional analysis.
Notes for Oracle databases running on Unix and Linux: the 10046 extended trace file parser
requires a CRLF combination at the end of each line in the log, which is standard on
DOS/Windows platforms but not on Unix/Linux platforms. Either transfer the trace file from the
server using ASCII mode, which will convert the end of line marker to CRLF, or open the trace
file using Wordpad and save as Text Document – MS-DOS Format.
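The line-ending conversion can also be done with a short script instead of WordPad – for example (Python is used here purely for illustration, and the trace file name is hypothetical; any tool that rewrites LF as CRLF works):

```python
def to_crlf(data):
    """Convert trace-file bytes to CRLF line endings so the 10046 parser
    can read them. Existing CRLF pairs are normalized first so an
    already-converted file is not double-converted."""
    return data.replace(b"\r\n", b"\n").replace(b"\n", b"\r\n")

# Typical use - rewrite the transferred trace file in place:
# with open("mydb_ora_1234.trc", "rb") as f:
#     raw = f.read()
# with open("mydb_ora_1234.trc", "wb") as f:
#     f.write(to_crlf(raw))
```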
Configure Session Tracing:
Session tracing is an important component of database
tuning. Enabling and disabling various types of traces is
possible within the Hyper-Extended Performance
Monitor. Activated tracing of sessions is automatically
disabled when the Hyper-Extended Performance Monitor
program is closed.
10046 Trace: Performance specific to each SQL
statement executed.
10053 Trace: Cost Based Optimizer decisions during
hard parses.
10032 and 10033 Trace: Sort related trace files.
10104 Trace: Hash join related trace files.
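This document does not state how the program enables these events in another session; one commonly used mechanism (an assumption, not a statement about this program's implementation) is the unsupported SYS-owned DBMS_SYSTEM.SET_EV procedure, supplied with the Sid and Serial values shown by the Real Time Performance display:

```python
def set_ev_sql(sid, serial, event, level):
    """Build the SQL*Plus command text for enabling an event trace in
    another session via DBMS_SYSTEM.SET_EV (event 10046 at level 12, for
    example). Setting level 0 disables the event again."""
    return ("EXEC SYS.DBMS_SYSTEM.SET_EV({0}, {1}, {2}, {3}, '')"
            .format(sid, serial, event, level))
```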
Advanced Init Parameters:
The Advanced Init Parameter feature retrieves
all system level and session level normal as well
as hidden parameters in the Oracle database
instance. As a general rule, never modify the
value of a hidden parameter (those that begin
with _). A brief description of each parameter is
provided, and in many cases a longer description
is presented at the bottom of the window.
Session specific parameters are displayed after
the system parameters, with the session’s SID
displayed in the Type column in the table. All
parameters are simultaneously output to a file in
the C:\Oraclelog folder, with a name such as
“All Initialization Parameters 200710021321.txt”.
Most of the hidden parameters are only visible to the SYS user. As such, the Hyper-Extended
Oracle Performance Monitor will prompt for the SYS password. If the SYS password is not
provided, only the non-hidden parameters will be presented. Oracle has several hundred
initialization parameters, many of which are hidden (those that begin with _ ) and should not be
modified without the guidance of Oracle support. Following the list of system wide parameters
are query optimization parameters which are specific to individual sessions.
Exclude Hidden Params: Prevents parameters that begin with _ from appearing in the
parameter list.
Exclude Default Values: Lists only those parameters that have a non-default value.
Exclude Unmodifiable Params: Lists only those parameters that may be adjusted.
Exclude System Params: Lists only session specific query optimization parameters.
Exclude Session Params: Lists only system wide initialization parameters.
Note: a text file is generated in the C:\OracleLog folder that reproduces the table contents.
Report Option Descriptions:
Data Dictionary, Tables and Columns: Lists all tables and columns (fields), column type,
maximum data length, decimal precision, whether the column will accept null values, column
position, tablespace name, and any comments associated with the table/column.
Data Dictionary, Indexes: Lists all indexes, table referenced by the index, columns referenced by
the index, index direction (ascending or descending), whether the indexed columns are unique,
whether the index entries are compressed, and the tablespace name that contains the indexes.
Data Dictionary, Views: Lists all views (saved SQL statements) contained in the database, and
the SQL statement used to generate the view.
Data Dictionary, Triggers: Lists all triggers contained in the database, whether the trigger fires
before or after a change, the type of event that causes the trigger to fire (update, insert, delete),
table name referenced by the trigger, synonyms used by the trigger for the old and new values,
trigger status, and trigger code. A second tab is created that lists the trigger names, tables
referenced, and the columns that cause the trigger to fire.
Data Dictionary, Function and Procedure Code: Lists all functions, packages, and procedures, as
well as the code for those objects.
Data Dictionary, Constraints: Lists all table constraints, the tables that the constraints reference,
the columns referenced by the constraints, and whether the constraint points to a primary key
constraint in another table.
Data Dictionary, Foreign Keys: Lists all primary key constraints, the corresponding foreign key
constraint name, the table name and column name of the primary key constraint, and the table
name and column name of the foreign key constraint. This item may be used to determine how
various tables should be joined together when information is pulled from multiple tables.
Data Dictionary, Synonyms: Lists all object synonyms contained in the database and the real
object that the synonyms point to.
Data Dictionary, Database Level User Roles Granted: Lists all database users and groups, and
the database level permissions granted (ALTER SESSION, CREATE DATABASE LINK,
CREATE SEQUENCE, DROP ANY TABLE, etc.). These roles in the newer versions of
Oracle have been changed to increase security levels. Some of the restrictions may need to be
relaxed to support the older versions of Visual.
Data Dictionary, Database Table Permissions: Lists all objects in the database, the database user
or group receiving the permission, the permission (SELECT, DELETE, ALTER, UPDATE,
etc.), who granted the permission, and whether the grantee is able to then grant that permission
to another database user or group.
Data Dictionary, Un-Indexed Foreign Keys: Lists all tables containing foreign keys, the columns
specified by the foreign keys, and any indexes on that table that support the foreign keys. Un-indexed foreign keys limit system concurrency. Updating the primary key referred to by the
foreign key will result in a full table lock on the table containing the foreign key, if there is no
supporting index on the table containing the foreign key. Full table locks, in this case, may be
caused by programs that always update all columns, even if the values in those columns are
unchanged. Excessive indexes also cause system concurrency problems. For every row
updated (likely one I/O), at least three I/Os are required per indexed column that is modified.
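The check itself reduces to a comparison between each foreign key's column list and the leading columns of the table's indexes, which can be sketched as follows (a simplification – a fuller test also accepts the foreign key columns appearing in a different order within the index's leading columns):

```python
def unindexed_fks(fks, indexes):
    """Return the foreign key constraints with no supporting index.
    fks maps constraint name -> ordered list of FK columns; indexes is a
    list of each index's ordered column list. A foreign key is treated as
    covered when its columns match the leading columns of some index."""
    def covered(cols):
        return any(ix[:len(cols)] == cols for ix in indexes)
    return [name for name, cols in fks.items() if not covered(cols)]
```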
Data Dictionary, Invalid Objects: Lists those objects that Oracle has marked as invalid for one
reason or another. Also lists any errors for object definitions.
System Check, Data Files Likely Fragmented: Added two additional potential causes of
fragmentation: non-standard extent sizes, and cases where sequential extents are not contained
sequentially in the tablespace (this may not indicate a problem, except for full table/index
scans).
System Check, Tables and Indexes to Analyze: Now includes objects owned by SYS and
SYSTEM. For Oracle 8 and 8i, it is not recommended to analyze objects owned by SYS. For
10g (and possibly 9i) statistics for fixed objects should be collected only once, when the
database reaches typical work load: EXEC
DBMS_STATS.GATHER_FIXED_OBJECTS_STATS('ALL'). For 10g (and possibly 9i),
dictionary stats should be collected at least once a week, along with the other table and index
statistics: EXEC DBMS_STATS.GATHER_SCHEMA_STATS('SYS').
System Check, Backup Log: Lists the data file name, backup stamp number, checkpoint change
number, checkpoint time, number of data blocks and byte size, total backup time (hours), and
an approximation of disk MB/s of the backup.
System Check, Current Redo Statistics: Lists the redo log groups, files in the group, whether or
not the group is currently active, whether or not the group has been archived, size of the files in
bytes, first change number, and date/time of the first entry in the redo log. Additionally, the
history of the redo log file switches is listed (ideally, a log file switch should occur every 20 to 30
minutes). The history contains the following items: First change number, next change number,
time range that the log file was active (shifted by one row in the spreadsheet), and the number
of hours that the log file was active.
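Whether the 20 to 30 minute guideline is being met can also be checked directly — a sketch,
assuming access to V$LOG_HISTORY:

```sql
-- Sketch: redo log switches per hour; more than two or three per hour
-- may suggest the redo logs are undersized for the workload.
SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour,
       COUNT(*)                               AS switches
FROM   v$log_history
GROUP  BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
ORDER  BY 1;
```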
System Check, Current Row Cache Hit Detail: Shows detail of individual components of the
dictionary cache.
System Check, Current Contents of the Shared Pool: Lists all objects contained in the shared
pool, sorted by name. This is helpful when determining whether or not applications are
efficiently using bind variables: having 220 versions of this SQL statement in the shared pool
is not a good thing: “SELECT TYPE, BITS, BITS_LENGTH FROM
REQUIREMENT_BINARY REQUIREMENT_BINARY WHERE WORKORDER_TYPE =
:V00001 AND WORKORDER_BASE_ID = :V00002 AND WORKORDER_LOT_ID =
:V00003 AND WORKORDER_SPLIT_ID = :V00004 AND WORKORDER_SUB_ID =
:V00005 AND OPERATION_SEQ_NO = 777 AND PIECE_NO = 50 AND TYPE =
:V00008”, where only the numbers following OPERATION_SEQ_NO and PIECE_NO
change from one version to the next. This report also indicates the amount of sharable memory
required, number of loads, number of executions, number of locks, number of pins, and
whether or not the object is marked to be kept in the shared pool at all times.
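A rough version of this bind variable check can be performed by grouping shared pool
statements on their leading text — a sketch, assuming access to V$SQL; the prefix length and
threshold below are arbitrary:

```sql
-- Sketch: many statements sharing one 60-character prefix often
-- indicate literal values used where bind variables belong.
SELECT SUBSTR(sql_text, 1, 60) AS sql_prefix,
       COUNT(*)                AS versions
FROM   v$sql
GROUP  BY SUBSTR(sql_text, 1, 60)
HAVING COUNT(*) > 10
ORDER  BY versions DESC;
```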
System Check, Poorly Performing SQL Statements: Now also lists the SQL statements that are
the worst offenders of logical IOs per execution.
System Check, Locked Objects: Identifies those objects that are held in the locked state for
extended periods of time, possibly due to a computer crashing before committing/rolling back,
or a user not clicking “Yes” that a certain report printed correctly.
System Check, Check for Default Passwords: This does not attempt to hack into the database.
Rather, it compares the user name and password hashes with roughly 220 known pair
combinations, and reports where matching pairs are found in the database. Passwords
identified should be changed as soon as possible – some viruses, worms, and hacking tools
look for these default passwords.
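On Oracle 11g and later (outside the version range this tool targets), a comparable check is built
into the data dictionary — a sketch, assuming DBA privileges:

```sql
-- Sketch: accounts still using a known default password (11g+).
SELECT username FROM dba_users_with_defpwd;
```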
System Check, Redundant Cursors Open for Sessions: Helps to identify problems in programs
where multiple cursors are kept open for the same SQL statement, which can eventually cause
the program to crash when it hits the upper limit for the number of open cursors.
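This condition can be approximated from V$OPEN_CURSOR — a sketch; SQL_ID is available
in 10g and later, and the threshold of 5 is arbitrary:

```sql
-- Sketch: sessions holding several open cursors for one statement.
SELECT sid, sql_id, COUNT(*) AS open_cursors
FROM   v$open_cursor
GROUP  BY sid, sql_id
HAVING COUNT(*) > 5
ORDER  BY open_cursors DESC;
```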
System Check, Segment Usage Statistics: Retrieves a list of all objects in the database, the wait
types related to those objects, and usage statistics.
General Session Statistics, Pivot Table on Specific Statistic: Generates a pivot table that lists all
sessions down the page, and all time periods across, with the specified database counter in the
field area of the report. Delta values (changes from the previous value) or actual values may
be displayed in the field area. Statistics include: commit cleanouts, consistent gets (logical IO),
CPU used by this session, DBWR lru scans, enqueue deadlocks, leaf node splits, physical
writes, and many others.
Hyper-Extended Session Trace: Added latch contention statistics, SQL*Net message from client
and SQL*Net message to client (both indicate where Oracle is waiting on the client computer
to catch up), temp file activity, approximate data file MB/s.
Extended Trace File, SQL Code to Perform an Extended Trace: Lists most of the code necessary
to activate an extended trace on a session, deactivate the trace, and where to find the resulting
trace file. Note: V$SESSION must be queried to obtain the SID and SERIAL# for a session.
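On 10g and later, the same steps can be performed with DBMS_MONITOR — a sketch; the SID
and SERIAL# values shown are placeholders taken from a V$SESSION query:

```sql
-- Sketch: locate the target session, then enable a 10046-style trace
-- with waits and binds (DBMS_MONITOR, Oracle 10g and later).
SELECT sid, serial#, username, program
FROM   v$session
WHERE  username IS NOT NULL;

EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(sid => 123, serial_num => 4567, waits => TRUE, binds => TRUE)

-- ... reproduce the problem, then turn tracing off:
EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE(sid => 123, serial_num => 4567)

-- The trace file is written to the directory named by USER_DUMP_DEST:
SELECT value FROM v$parameter WHERE name = 'user_dump_dest';
```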
Extended Trace File, Drag & Drop Extended Trace File: Drag an extended trace file that contains
wait events to this area. A quick analysis of the trace file will be performed that extracts the
SQL statements, plan that Oracle used to execute the SQL statement, wait events associated
with each SQL statement, and a summary at the bottom of the report that indicates how much
time the database server is responsible for and how much the client is responsible for.
Extended Trace File, Perform Table and Index Lookup: Attempts to determine the actual index,
table, or other object that causes each wait event. A summary is provided that indicates the
amount of time waited for each object. Selecting this item may cause the CPU in the database
server to spike; use this option cautiously.
Extended Trace File, Perform TKPROF: Send the trace file into TKPROF for a quick analysis of
the SQL statements without reporting wait events.
Oracle 10G Specific – Several options that provide access to the automatic tuning features in Oracle 10g.
Instance/Database Information: Pulled directly from the live Oracle database.
Initialization Parameters: Lists all init.ora parameters, pulled directly from the live Oracle database.
Tablespaces and Data Files: Lists all tablespaces and the operating system files that compose the
tablespaces.
License Information: Lists the high water mark of licenses in the database.
Oracle Installed Options: Lists all additional-cost options that are installed in the database.
Data Files, Table Sizes by Buffer Pool: Lists all tables and their sizes at the end of the logging
session.
Data Files, Index Sizes by Buffer Pool: Lists all indexes and their sizes at the end of the logging
session.
Data Files, Usage Statistics by Minute: Identifies when the size of tables and/or indexes changed,
grouped by the specified number of minutes, during the logging period. Numbers provided are
delta (change) values from the previous logging period.
Data Files, Usage Statistics by ¼ Hour: Identifies when the size of tables and/or indexes
changed, grouped by 15 minute intervals, during the logging period. Numbers provided are
delta (change) values from the previous logging period.
Data Files, Usage Statistics by Hour: Identifies when the size of tables and/or indexes changed,
grouped by each hour, during the logging period. Numbers provided are delta (change) values
from the previous logging period.
Data Files, Usage Statistics Overall: Compares the starting and ending size of tables and indexes
during the logging period. Numbers provided are delta (change) values from the previous
logging period.
Rollback Segments, Segment List: Lists all rollback segments, status, number of extents, file size
in bytes, optimal size in bytes, high water mark (largest size), number of shrinks back to the
optimal size, and the number of extends. Numbers provided are those in effect at the end of
logging.
Rollback Segments, Usage Statistics by Minute: Identifies activity in the rollback (undo)
segments, grouped by the specified number of minutes, during the logging period. Numbers
provided are delta (change) values from the previous logging period. Statistics for extents,
rollback segment size, writes (in bytes), reads (in bytes), waits, high water mark, shrinks,
wraps, extends, average size active, and the currently active segment and block number.
Rollback Segments, Usage Statistics by ¼ Hour: Identifies activity in the rollback (undo)
segments, grouped by 15 minute intervals, during the logging period. Numbers provided are
delta (change) values from the previous logging period. Statistics for extents, rollback segment
size, writes (in bytes), reads (in bytes), waits, high water mark, shrinks, wraps, extends,
average size active, and the currently active segment and block number.
Rollback Segments, Usage Statistics by Hour: Identifies activity in the rollback (undo) segments,
grouped by hour, during the logging period. Numbers provided are delta (change) values from
the previous logging period. Statistics for extents, rollback segment size, writes (in bytes),
reads (in bytes), waits, high water mark, shrinks, wraps, extends, average size active, and the
currently active segment and block number.
Rollback Segments, Usage Statistics Overall: Compares the starting and ending statistics during
the logging period.
Latches, Latch Counters List: Lists all latches, number of gets, misses, sleeps, immediate gets,
immediate misses, waiters woken, waiters holding latch, spin counters, and the first four levels
of sleeps. Numbers provided are those in effect at the end of logging.
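The end-of-logging latch figures resemble what a direct query of V$LATCH returns — a sketch:

```sql
-- Sketch: latch counters, worst sleepers first.
SELECT name, gets, misses, sleeps, immediate_gets, immediate_misses
FROM   v$latch
ORDER  BY sleeps DESC, misses DESC;
```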
Latches, Usage Statistics by Minute: Identifies all latch activity, grouped by the specified
number of minutes, during the logging period. Numbers provided are delta (change) values
from the previous logging period. Statistics for number of gets, misses, sleeps, immediate gets,
immediate misses, waiters woken, waiters holding latch, spin counters, and the first four levels
of sleeps.
Latches, Usage Statistics by ¼ Hour: Identifies all latch activity, grouped by 15 minute intervals,
during the logging period. Numbers provided are delta (change) values from the previous
logging period. Statistics for number of gets, misses, sleeps, immediate gets, immediate
misses, waiters woken, waiters holding latch, spin counters, and the first four levels of sleeps.
Latches, Usage Statistics by Hour: Identifies all latch activity, grouped by hour, during the
logging period. Numbers provided are delta (change) values from the previous logging period.
Statistics for number of gets, misses, sleeps, immediate gets, immediate misses, waiters woken,
waiters holding latch, spin counters, and the first four levels of sleeps.
Latches, Usage Statistics Overall: Compares the starting and ending statistics during the logging
period.
General Session Statistics: Session Wait Event List: Lists all session level wait events during the
logging period. Includes SID, username, program, wait event name, total waits, total time outs,
time waited, and average wait (at this wait event interval). Numbers provided are delta
(change) values from the previous logging period.
General Session Statistics, Sess Buf Hit Ratio Min: Lists session level DB buffer hit ratio for
active sessions, grouped by the specified number of minutes. Numbers provided are delta
(change) values from the previous logging period. Includes SID, username, program, session
physical reads, session DB block gets, session consistent gets, session hit ratio for the current
time period, session hit ratio for the session since the start of logging, session hit ratio for the
session since the session started, system-wide physical reads, system-wide DB block gets,
system-wide consistent gets, system-wide hit ratio for the current time period, system-wide hit
ratio for the session since the start of logging, system-wide ratio for the session since the
session started.
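The hit ratio columns are derived from the standard formula 1 - physical reads / (DB block gets
+ consistent gets). A sketch of the per-session calculation, assuming access to V$SESSION and
V$SESS_IO:

```sql
-- Sketch: per-session buffer cache hit ratio since the session began.
SELECT s.sid, s.username, s.program,
       io.physical_reads, io.block_gets, io.consistent_gets,
       ROUND(1 - io.physical_reads /
                 NULLIF(io.block_gets + io.consistent_gets, 0), 4) AS hit_ratio
FROM   v$session s
       JOIN v$sess_io io ON io.sid = s.sid
WHERE  s.username IS NOT NULL
ORDER  BY hit_ratio;
```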
General Session Statistics, Sess Buf Hit Ratio by ¼ Hr: Lists session level DB buffer hit ratio for
active sessions, grouped by 15 minute time intervals. Numbers provided are delta (change)
values from the previous logging period. Includes SID, username, program, session physical
reads, session DB block gets, session consistent gets, session hit ratio for the current time
period, session hit ratio for the session since the start of logging, session hit ratio for the
session since the session started, system-wide physical reads, system-wide DB block gets,
system-wide consistent gets, system-wide hit ratio for the current time period, system-wide hit
ratio for the session since the start of logging, system-wide ratio for the session since the
session started.
General Session Statistics, Sess Buf Hit Ratio by Hour: Lists session level DB buffer hit ratio for
active sessions, grouped by hour. Numbers provided are delta (change) values from the
previous logging period. Includes SID, username, program, session physical reads, session DB
block gets, session consistent gets, session hit ratio for the current time period, session hit ratio
for the session since the start of logging, session hit ratio for the session since the session
started, system-wide physical reads, system-wide DB block gets, system-wide consistent gets,
system-wide hit ratio for the current time period, system-wide hit ratio for the session since the
start of logging, system-wide ratio for the session since the session started.
System Check, Data Files Likely Severely Fragmented: Attempts to determine data files for
tablespaces that have grown significantly since originally created – these files are likely
fragmented at the operating system level, and may affect the performance of the database.
Includes: Tablespace name, file number, status of tablespace, checkpoint change, checkpoint
time, current file size in bytes, original file size when created, growth percent, and filename.
Numbers provided are at the start of the logging period.
System Check, Tables and Indexes Near Max Extents: Identifies tables and indexes where the
number of extents allocated is at least 75% of the maximum number of extents set for the
object. Numbers provided are from the live database.
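The 75% threshold can be checked directly against DBA_SEGMENTS — a sketch; 2147483645
is the value reported for segments created with MAXEXTENTS UNLIMITED:

```sql
-- Sketch: segments within 25% of their extent ceiling.
SELECT owner, segment_name, segment_type, extents, max_extents
FROM   dba_segments
WHERE  max_extents < 2147483645
AND    extents >= 0.75 * max_extents
ORDER  BY extents / max_extents DESC;
```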
System Check, Tables and Indexes to Analyze: Identifies tables and indexes that have not been
analyzed in the last seven days. Numbers provided are from the live database.
System Check, Tables, Indexes, Users in Wrong Tbls: Identifies tables, indexes, and users set to
use the SYS schema. Numbers provided are from the live database.
System Check, Current Memory Statistics: Lists the remaining available memory in the shared
pool and Java pool, total session memory in use, maximum amount of session memory used by
connected sessions, and latch contention ratios. Numbers provided are from the live database.
System Check, Current Cache Hit Ratios: Lists the cache hit ratios of the library and dictionary
caches, and the percentage of library cache items needing to be reparsed. Numbers provided
are from the live database.
System Check, Poorly Performing SQL Statements: Lists the top SQL statements that are
reparsed frequently, the hard parse count, and total parse count. Numbers provided are from
the live database.
System Check, Buffer Pool Effectiveness: Lists all objects currently in the database buffer cache,
the total number of blocks for the object, and the number of those blocks in the database buffer
cache. A summary is provided at the end to show how effective the DEFAULT, KEEP, and
RECYCLE buffers are at caching database blocks. Objects specified for the KEEP pool should
never be aged out of that pool: specifying 80,000 object blocks for the KEEP pool when the
KEEP pool size is only 20,000 blocks is a bad idea. Numbers provided are from the live database.
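The block counts come from the buffer cache headers — a sketch of the underlying idea, joining
V$BH to DBA_OBJECTS on the data object id:

```sql
-- Sketch: cached blocks per object, largest consumers first.
SELECT o.owner, o.object_name, o.object_type, COUNT(*) AS cached_blocks
FROM   v$bh b
       JOIN dba_objects o ON o.data_object_id = b.objd
WHERE  b.status != 'free'
GROUP  BY o.owner, o.object_name, o.object_type
ORDER  BY cached_blocks DESC;
```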
System Check, Check Various Init Parameters: Provides suggestions for various init.ora
parameters and performance metrics to help decide the best values. Tips provided were
gathered from various performance optimization books. Numbers provided are from the live
database.
System Check, CPU Statistics: Provides various benchmarks for monitoring CPU utilization.
Note: dual CPU hyperthreaded Xeon processors provide four CPU minutes for every clock
minute. Includes top 10 sessions consuming CPU time, CPU utilization since last instance
start, CPU utilization by session, and CPU load related statistics for the database since last
instance start. Numbers provided are from the live database.
System Check, Top 5 Wait Events: Lists the top 5 items that the database has had to wait for
(data block, undo header, etc.), the top 10 system wide wait events contributing to slow
database response time, the top 30 session wait events contributing to slow database response
time. Numbers provided are from the live database. Descriptions of wait events and
suggested investigation procedures are provided for identified waits – tips were gathered from
various performance optimization books.
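A simplified version of the system-wide portion of this report — a sketch against
V$SYSTEM_EVENT; the idle event list shown is intentionally abbreviated:

```sql
-- Sketch: top 5 non-idle wait events by time waited (centiseconds).
SELECT *
FROM  (SELECT event, total_waits, time_waited, average_wait
       FROM   v$system_event
       WHERE  event NOT IN ('SQL*Net message from client',
                            'rdbms ipc message', 'pmon timer',
                            'smon timer')
       ORDER  BY time_waited DESC)
WHERE ROWNUM <= 5;
```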
Hyper-Extended Session Trace: Follows a session’s activity from the start of the log, until the
end of the logging session (or termination of the session). Select a session before creating the
report. For every minute of wait event activity, lists the delta wait events affecting the session
(event name, total waits, time waited, average time waited, and program name). Various
session level and system level delta statistics are provided to help identify index leaf splits,
sorts, CPU use, buffer cache contention, etc. System wide tablespace and data file delta
activity is provided (file name, tablespace name, physical reads, physical writes, physical block
reads, physical block writes, read time, write time, average IO time, last IO time, min IO time
(since instance start), max IO time (since instance start), max read time (since instance start),
and max write time (since instance start)). System wide rollback segment delta activity is provided.
Table and index extent allocation changes are also identified, which may be a result of session
activity. A summary of activity is provided at the end of the log.
Hyper-Extended Session Trace, With Extended Detail: Includes all of the above information,
plus much more detail. For instance, if a wait event of “db file sequential read” is detected, the
extended detail may indicate that this is a result of reading SYSADM.OPERATION_SCHED,
block number 172438, with only 1 block read, and the SQL statement that likely caused the
wait event is “delete from operation_audit where schedule_id = :1 and workorder_type = :2
and workorder_base_id = :3 and workorder_lot_id = :4 and workorder_split_id = :5”.
Smart Logging Settings:
To decrease the database server’s CPU impact due to
logging, logging intervals may be specified to occur
less frequently than once every 60 seconds. Additional
options are available to allow the logging capture to
start in response to triggering events, such as increased
CPU activity, blocking locks, etc.
Force a Log Capture when CPU Usage Exceeds:
allows the specification of a triggering event, which
causes a log capture to take place if the total CPU
utilization (across all CPUs) of the Oracle process
exceeds the specified percentage in any 60 second time
interval. Note that if the server has two CPUs, 50% utilization may indicate that one CPU is
at 100% utilization, while the second is at 0% utilization. Force a Log Capture if No Log
Captured in Minutes: allows the specification of the maximum interval in minutes between
log captures if no other triggering event has occurred. Force a Log Capture when a Blocking
Lock is Detected: allows the specification of a triggering event, which causes a log capture to
take place if it is determined that one session actively blocked another session from completing
its transaction in any 60 second time interval. Blocking locks are captured continuously,
regardless of this setting. Force a Log Capture when a Wait Reason is Detected: allows the
specification of a triggering event, which causes a log capture to take place if it is determined
that a session waited during the previous 60 seconds and the wait reason could be identified.
Capture SQL Execution Statistics for Wait Reasons (NR): when checked, if there is a wait
reason identified, also logs the number of rows that had been processed by the session’s active
SQL statement to that point in time – this is not recommended, as it causes an additional scan
of the shared pool with limited benefit. Capture Segment Change Statistics (NR): when
checked, causes database object sizes to be included in the log file to allow examination of data
segment size changes – this is not recommended as it adds several seconds to the time required
to gather a set of log entries. Capture Operating System and Time Model Statistics:
(Oracle 10g R2 and above only) when checked, logs the amount of time the server’s CPU spent
in non-Oracle activities, as well as specific time spent by the system and sessions with
connection management, sequence refreshes, parse time (hard and failed), and PL/SQL time.
Capture High Load SQL Statement Statistics: when checked, during each log capture,
examines the first 20,000 SQL statements found in the shared pool to identify SQL statements
consuming large amounts of CPU time, experiencing excessive parses, experiencing excessive
disk reads, etc. Capture High Load SQL Statement Text: when checked, for those SQL
statements that were determined to be high load, retrieves the SQL statement and writes it to
the logging database, allowing review of the SQL statements that may be prone to being
flushed from the shared pool. Capture High Load SQL Statement Plan: when checked, for
those SQL statements that were determined to be high load, retrieves the execution plan, bind
variable definitions, and reason why the child cursor was created.
Command Line Parameters that Affect Smart Logging:
-LC 20 Specifies Force a Log Capture when CPU Usage Exceeds value to 20%
-LI 30 Specifies Force a Log Capture if No Log Captured in Minutes value to 30 minutes
-LB Specifies the Force a Log Capture when a Blocking Lock is Detected value to checked
-LW Specifies the Force a Log Capture when a Wait Reason is Detected value to checked
-LR Specifies the Capture SQL Execution Statistics for Wait Reasons value to checked
-LD Specifies the Capture Segment Change Statistics value to checked
-LO Specifies the Capture Operating System and Time Model Statistics value to checked
-LH Specifies the Capture High Load SQL Statement Statistics value to checked
-LT Specifies the Capture High Load SQL Statement Text value to checked
-LP Specifies the Capture High Load SQL Statement Plan value to checked
-LHC 60 Specifies the minimum CPU time that is considered high load to 60 seconds accum.
-LHE 90 Specifies the minimum elapsed time that is considered high load to 90 seconds accum.
-LS Specifies that Smart Logging should begin as soon as the login completes
-LE 240 Specifies that Smart Logging should end after 240 minutes
-LQ Specifies that the program should quit (end) when logging ends
-D The Database instance SID to which the program should connect.
-U The user name to be used for connecting to the database instance.
-P The password to be used for connecting to the database instance.
On-Screen Summary During Logging:
When logging is enabled, a quick overview
screen is presented that shows a portion of
what was logged during the previous time
intervals. Session Wait Reasons and Session
Locks appear in the tables in real-time when
they happen.
On-Screen Review of Log Data:
From the main program window, click the
Review button to analyze the report data
without sending report information to
Microsoft Excel. Click the Next button to
analyze the system level statistics for the first
log interval, or click the Next 30 button to
cycle through the statistics for 30 log
intervals. Click a session’s bar in the right
chart to investigate the session.
The top wait events for the log interval are displayed at the upper left, a graphical display of the
history for the selected statistic is displayed at the top center (blue indicates the current period),
and a graphical display of the top 30 sessions contributing to the statistic is displayed at the top
right. The bottom left provides a description of the statistic and possible additional tips for
addressing problems. The middle section provides a listing of various performance statistics
captured during the logging interval – click one of those statistics to set it as the current statistic.
When a significant event occurs, such as a spike in CPU usage, click the Investigate button to
display additional statistics.
On-Screen Review of Session Specific
Log Data:
By left-clicking a session’s bar in the
Review Log Stats window, session
specific performance statistics are
displayed for the time interval.
The Investigate Log Interval window provides additional logging detail for the current log
interval, but it is possible to view the data for a range of log intervals. The top left grid shows
system-wide wait events encountered during the logging interval range. Immediately below the
system-wide wait events are the session level wait events. Below the session level wait events
are the session level wait event reasons that were identified during the logging. These statistics
may be used to identify data file/block level contention between sessions, hard parsing that
causes excessive wait events, etc. – more information is revealed by double-clicking a row. The
bottom left grid shows details of the sessions that were blocked during the specified time
intervals – more information is revealed by double-clicking a row. The top right grid shows data
file activity during the specified time intervals – summary information is revealed by
double-clicking a row. Below the file activity is the rollback/undo activity during the specified
time intervals. Below the rollback/undo activity are the data segment changes, which show
expansion and contraction of indexes, tables, and other objects in the database during the
logging interval. Below the data segment changes are the data block wait statistics, which relate to buffer busy
wait events – more information is revealed by double-clicking a row. The bottom right shows
latch contention that occurred during the logging interval – significant latch contention can cause
performance issues – more information is revealed by double-clicking a row.
To automatically analyze the statistics, click the Interpret Statistics button.
The View High Load SQL option displays the SQL Monitor Review, which shows the statistics
and optionally the SQL statements and execution plans for the SQL statements that consumed
large numbers of buffer gets (logical, in-memory reads), disk reads, parse calls, execution calls,
fetches, invalidations, loads, CPU time, or elapsed time. Click a row to view the SQL statement,
or double-click a row to see the execution plan.
On-Screen Real-Time Monitoring
Unlike the full data logging mode, the Real-Time Monitoring mode produces a much smaller
impact on database server performance, but does not provide as much detail to determine the
cause of performance issues.
The top wait events for the log interval are displayed at the upper left, a graphical display of the
selected statistic is displayed at the top center (blue indicates the current period), and a graphical
display of the top 30 sessions contributing to the statistic is displayed at the top right. The
bottom left provides a description of the statistic and possible additional tips. The middle section
provides a listing of various performance statistics captured during the logging interval – click
one of those statistics to set it as the current statistic. When a significant event occurs, such as a
spike in CPU usage, examine the statistics for additional details.
Click a statistic value to set that as the current statistic. Double-click a statistic value to activate
the Real-Time Session Monitor for the statistic. Left click a bar in the chart at the top right to
display statistics specific to that session. Right click a bar in the chart at the top right to retrieve
the session wait reasons (file/block contention).
Select a wait event from the list at the top left to graphically display the changes in that wait
event for up to 30 minutes. Session level contributors are also displayed.
When the Session Watch window is activated, the Oracle database is probed three times a minute
for sources of wait events – the session wait reasons.
The Real-Time Session Monitor watches a statistic for all sessions.
Possible Performance Tuning Method – Using the Real Time Monitor:
Monitor system wide statistics using the Real Time
Monitor until significant wait events appear, or CPU
Used by Sessions increases sharply. Left-click a
session in the chart at the top right that is a significant
contributor to the performance problem.
The statistics for that session appear on screen for
the current time interval. If the statistics are a
significant portion of the total system wide statistics,
enable tracing for the session by clicking the
Configure Session Tracing button.
Select the session of interest, specify a
10046 with Binds and Waits trace, and
click the Change Trace Settings button.
When finished, set the 10046 Trace setting
to No Trace and click the Change Trace
Settings button.
Locate the trace file in the udump folder on the
database server.
Drag the trace file from the udump folder to the
Drag & Drop Extended Trace area at the top
right of the main program window.
Totals for the Trace File:
8526 Parse Calls Consuming 0.61 Seconds of Server CPU, 0.59 Seconds of Total Time, No Hard Parses
10393 Execute Calls Consuming 47.67 Seconds of Server CPU, 50.31 Seconds of Total Time,
0 Physical Reads, and 942 Consistent Read Blocks (All from Memory)
16117 Fetch Calls Consuming 263.48 Seconds of Server CPU, 263.63 Seconds of Total Time, 9,588 Rows Retrieved, 0
Physical Reads, and 8,804,908 Consistent Read Blocks (All from Memory)
Scroll Down Through the Trace Analysis File for Resource Intensive Operations:
Located Two Groups of SQL Statements, Consuming 86% of the Total Run Time
Search Keywords:
Cursor 16 Ver 1 Parse at 54.787836
Cursor 17 Ver 1 Parse at 54.793106
Search for the first keyword:
Cursor 16 Ver 1 Parse at 54.787836
SQL Statement is Located
Scroll down to the row source explain plan
Note that the X_RECEIVABLE_3 index returned 60,774,446 rows and performed
111,146 consistent read requests
The X_RECEIVABLE_3 index is an index only on the ENTITY_ID column!
Copy the SQL statement and bind variable
definitions to the DBMS XPLAN window and
select to translate the bind variables. Select to
include a 10053 Level 1 trace, then click the
DBMS XPLAN for Query button.
The DBMS XPLAN output shows that Oracle was
expecting the X_RECEIVABLE_3 index to return 1 row,
while it actually returned 66,124 rows. The predicate
information for ID 5 shows that the index access for this
index is on the ENTITY_ID column.
The 10053 trace shows why the X_RECEIVABLE_3 index was selected. The out-of-range peeked bind
variable value for RECV_MEMO_APPLY.INV_INVOICE_ID column in the WHERE clause caused Oracle
to use a default selectivity value for the column. This value was slightly larger than the expected index
selectivity of the X_RECEIVABLE_3 index, so the X_RECEIVABLE_3 index was selected as the best
access method.
Using a hint to avoid the X_RECEIVABLE_3 index is one method to correct the performance problem.
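One way to express such a hint — a hypothetical sketch only; the table name, alias, and
predicate columns below are illustrative and not taken from the actual Visual schema:

```sql
-- Hypothetical sketch: tell the optimizer not to use X_RECEIVABLE_3,
-- so a more selective access path is considered instead.
SELECT /*+ NO_INDEX(r X_RECEIVABLE_3) */
       r.*
FROM   receivable r
WHERE  r.entity_id  = :entity_id
AND    r.invoice_id = :invoice_id;
```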
Possible Performance Tuning Method - Hyper-Extended Oracle Performance Monitor:
Start the logging process from the main program window prior to an anticipated performance
problem. Once the performance problem disappears, stop logging. The logging period should
be between 30 minutes and eight hours long.
Click the Review button in the main
program window. The statistics initially
displayed are for the time period between
the time the database instance was last
started and the time of the first log
interval. The Top Wait Events indicate
the top type of delays affecting the
performance of the database - values
indicated are in seconds. Click the Next
or Next 30 buttons until the time period
of the performance problem is reached.
Select one of the Top Wait Events – the
chart at the middle shows the trend of the
wait event for up to the last 30 log
intervals (shows averages per minute within each time slice), and the chart at the right shows the
top 30 sessions contributing to the wait event for the current log interval. Review the
information related to the statistic shown at the bottom left. “ENQ: TX - ROW LOCK
CONTENTION: Waits for TX in mode 4 can occur if the session is waiting for potential
duplicates in a unique index, which would occur if a second session inserts a value before the
first tries to insert the same value, and the second session has not yet committed. P1: name|mode
P2: id1 P3: id2”
The information related to the statistic suggests that one of the causes of this wait is a potential
primary key violation, but the event may also occur if one session modified a row that another
session wishes to modify, and the first session had not yet issued a commit. There are five
sessions reporting time in this event. Additional details were found that may pinpoint the source
of the wait events, as indicated by the appearance of *Wait Reason* next to the Investigate
button.
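The mode-4 TX wait quoted above can be modeled in a few lines. The sketch below is purely an illustration of the mechanism (the session numbers match the example that follows, and the class is invented for this sketch): a second session inserting a duplicate key into a unique index must wait on the first session's transaction, because only a commit or rollback reveals whether a duplicate will actually exist.

```python
# Minimal model (illustration only) of "enq: TX - row lock contention"
# in mode 4 on a unique index: the second inserter of a key must wait
# on the first inserter's transaction, then either succeeds or gets a
# unique constraint violation once that transaction ends.
class UniqueIndex:
    def __init__(self):
        self.committed = set()
        self.pending = {}                # key -> inserting session id

    def insert(self, session, key):
        if key in self.committed:
            return "ORA-00001: unique constraint violated"
        if key in self.pending and self.pending[key] != session:
            return f"waiting on TX of session {self.pending[key]} (mode 4)"
        self.pending[key] = session
        return "inserted (uncommitted)"

    def commit(self, session):
        for k, s in list(self.pending.items()):
            if s == session:
                self.committed.add(k)
                del self.pending[k]

idx = UniqueIndex()
print(idx.insert(285, key=1001))  # inserted (uncommitted)
print(idx.insert(222, key=1001))  # waiting on TX of session 285 (mode 4)
idx.commit(285)
print(idx.insert(222, key=1001))  # ORA-00001: unique constraint violated
```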
Examine one of the sessions to see if it is
contributing to any of the other wait
events, since we cannot do much about an
off-the-shelf application that causes
one session to block five other sessions.
Click the bar representing the session of
interest in the far right chart. The
resulting window shows that the session
experienced 33 seconds in this wait event
and was completely idle for 15 seconds
based on the SQL*Net wait. Additional
details were found specific to this session
which may pinpoint the source of the wait events, as indicated by the appearance of *Wait
Reason* next to the Investigate button.
Click the Investigate
button to display the
extended detail that
likely contributed to the
wait events for the
current log interval.
Drilling down, starting
with the top left grid,
we find the sessions
involved in the wait,
and locking details that
show which sessions
were blocked and which
session was the
blocker.
Double-click on one of the rows
(in this case for a similar session)
to allow the program to analyze
the statistics in order to find the
cause of the wait event. The
program found that the barcode
labor collection program was
attempting to insert a row into the
INVENTORY_TRANS table, but
session 285 (using COM Objects) had already (based on the SQL statement identified) inserted
into that table with the same primary key value and for some reason had not issued a commit. At
this point, session 222 had been waiting 25 seconds for session 285 to issue a commit, at
which point session 222 would receive a primary key violation error message.
Taking another look at the Session
Lock Conflicts grid, we find that
the blocker is also blocked, in this
case by VMGLBSCH.EXE. By
double-clicking that session’s
row, we find that the blocker
previously identified is blocked by
VMGLBSCH.EXE because that
program modified a row but did
not commit. The hash value of the blocker’s SQL statement is available, but that SQL statement
had been flushed from the shared pool and was not available. The session that was blocked was
attempting to insert a row into the INVENTORY_TRANS table, but the report is showing that
the blocking lock was actually on the REQUIREMENT table. More than likely, a trigger on the
INVENTORY_TRANS table attempted to automatically update a row in the REQUIREMENT
table, possibly to update the quantity issued for a material requirement. Oracle identified a
specific file number, data block number, and row within the data block where the blocking lock
occurred. The report generated a SQL statement which returns the primary key columns for the
row in the table so that it is possible to pinpoint the exact row.
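The generated statement presumably resembles the sketch below. DBMS_ROWID.ROWID_CREATE is Oracle's documented way to assemble a ROWID from a data object number, relative file number, block number, and row number; the REQUIREMENT table name comes from the example above, while the primary key column names and all numeric values here are hypothetical placeholders.

```python
# Sketch of the kind of statement the report generates to pinpoint a
# locked row from its file number, block number, and row number.
# Oracle's DBMS_ROWID.ROWID_CREATE builds a ROWID from those parts;
# the data object number would be looked up separately (e.g. from
# DBA_OBJECTS). Column names and numbers below are placeholders.
def pinpoint_sql(table, pk_cols, data_object_id, file_no, block_no, row_no):
    return (
        f"SELECT {', '.join(pk_cols)}\n"
        f"  FROM {table}\n"
        f" WHERE ROWID = DBMS_ROWID.ROWID_CREATE("
        f"1, {data_object_id}, {file_no}, {block_no}, {row_no})"
    )

print(pinpoint_sql("REQUIREMENT",
                   ["WORKORDER_BASE_ID", "WORKORDER_LOT_ID"],
                   data_object_id=51234, file_no=7,
                   block_no=118252, row_no=4))
```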
Using the SQL statement that was
provided, we find the row that was
modified by the VMGLBSCH.EXE
program, which shows the source of
the problem. The report showed that
the session had waited 29 seconds to
modify this row, which may mean that
there is an application design issue, a
client side performance problem, or
extremely poor server performance as
the VMGLBSCH.EXE program required more than 29 seconds to commit its transaction.
To determine the source of the
problem, we take a look at the
blocker’s session level activity. In the
81 second time interval, the session
was inactive (from the server’s
perspective) for 0.19 seconds, and
spent 0.01 seconds in the log file sync
wait event (associated with client
commits). There were 91,418 round
trip communications between the
client and server, there were 281,000
block changes, 36,050 table rows read
by index lookup of which 1,770 were
migrated or chained rows, and the client consumed 31.06 seconds of the database server’s time.
During the 81 seconds, the client committed 3 times and generated roughly 44MB of redo. No
disk IO wait time, no CPU parse time, and 31 of 81 seconds on the server’s CPU might indicate
CPU contention with other sessions, but the total for all sessions was 61.4 seconds, thus
discounting this as a problem. Dividing 81 seconds by the 91,418 round trips yields 0.000886
seconds per round trip, which largely rules out the network as the problem, as well as most forms
of server contention affecting this client. The large number of round trips is most likely the most
significant problem for this session, but the small number of commits with the large number of
changes is likely the most significant problem for the five sessions which were blocked.
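The arithmetic in the paragraph above is easy to reproduce from the logged interval statistics (figures taken directly from the example):

```python
# Reproducing the interval arithmetic above: time per client/server
# round trip, and redo generated per commit, from the logged figures.
interval_seconds = 81
round_trips = 91_418
commits = 3
redo_bytes = 44 * 1024 * 1024            # roughly 44MB of redo

per_round_trip = interval_seconds / round_trips
redo_per_commit_mb = redo_bytes / commits / (1024 * 1024)

print(f"{per_round_trip:.6f} s per round trip")    # 0.000886
print(f"{redo_per_commit_mb:.1f} MB of redo per commit")
```

A sub-millisecond average round trip is why the network can be ruled out, while nearly 15MB of uncommitted change per commit is why the blocked sessions wait so long.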
If it appeared that the server’s
CPU utilization was a problem,
the program’s Investigate feature
allows statistics to be correlated
to determine what might have
contributed to the problem.
Oracle 10g R2 splits out CPU
consumption into a tree structure
by session, making it easy to see
how server CPU time was
allocated. If those statistics were
not gathered, the Investigate
feature is still very helpful when
attempting to understand what happened, as seen above.
If high-load SQL
statements were logged, it
would also be possible to
examine the SQL statements
hard parsed by the session
that were determined to be
high load. Scrolling through the
SQL statements, one SQL
statement consuming 16.48
seconds of server CPU time
with 33,390 executions was
located. For this time period,
the update of the OPERATION
table was the greatest
contributor to the CPU time.
Note that the bind variable for
the SEQUENCE_NO column
was defined as CHAR, and the
Oracle database instance had to
convert the supplied values in that bind variable to a number for each execution.
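The cost of the mistyped bind can be illustrated with a small sketch. This is an analogy rather than Oracle internals: each of the 33,390 executions forces a character-to-number conversion before the SEQUENCE_NO comparison can happen, work that a correctly typed NUMBER bind avoids entirely (the dictionary "index" and values below are invented for the illustration).

```python
# Illustration of a mistyped bind: a CHAR bind compared against a
# numeric key must be converted on every execution, whereas a NUMBER
# bind is used directly. Both return the same row; only the
# per-execution conversion work differs.
executions = 33_390

def lookup_with_char_bind(index, bind):
    return index.get(int(bind))          # implicit TO_NUMBER every call

def lookup_with_number_bind(index, bind):
    return index.get(bind)               # no conversion needed

seq_index = {n: f"operation {n}" for n in range(100)}
assert lookup_with_char_bind(seq_index, "42") == \
       lookup_with_number_bind(seq_index, 42)
print(f"{executions} executions -> {executions} avoidable conversions")
```

Defining the bind as a NUMBER in the client application would remove that repeated conversion entirely.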