I have been investigating “enq: TM – contention” wait events, which are almost certainly due to un-indexed foreign keys. One extreme example, which showed up in a Statspack report, is as follows:
                                                             Avg          %Total
                                          %Tim Total Wait   wait    Waits   Call
Event                               Waits  out   Time (s)   (ms)     /txn   Time
---------------------------- ------------ ---- ---------- ------ -------- ------
enq: TM - contention                    1    0      5,973 ######      0.0   58.6
This suggests a single SQL statement waited nearly two hours for another session to commit. I thought it would be easy enough to find this statement in the “SQL ordered by Elapsed time” section of the report, but it is not there. I’ll explain why I think that is.
Studying the SQL script which generates the Statspack report (sprepins.sql), it can be seen that “SQL ordered by Elapsed time” is obtained from the STATS$TEMP_SQLSTATS table, which is in turn populated from STATS$SQL_SUMMARY. I checked STATS$SQL_SUMMARY, and there were no entries with a large enough ELAPSED_TIME to be the culprit.
So then I looked at the STATSPACK package, which populates STATS$SQL_SUMMARY. There are two insert statements which populate STATS$SQL_SUMMARY, both of which contain the following WHERE clause to limit which SQL statements are captured:
where (    buffer_gets   > l_buffer_gets_th
        or disk_reads    > l_disk_reads_th
        or parse_calls   > l_parse_calls_th
        or executions    > l_executions_th
        or sharable_mem  > l_sharable_mem_th
        or version_count > l_version_count_th )
The various thresholds are taken from the STATS$STATSPACK_PARAMETER table, and the default values are:
- buffer gets – 10,000
- disk reads – 1,000
- parse calls – 1,000
- executions – 100
- sharable memory – 1,048,576
- version count – 20
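The values currently in force can be checked directly in the parameter table. A sketch, assuming the usual PERFSTAT schema and the threshold column names as they appear in 10g/11g Statspack (verify the column list against your own version with DESCRIBE):

```sql
-- Inspect the SQL capture thresholds Statspack is currently using
select snap_level,
       buffer_gets_th,
       disk_reads_th,
       parse_calls_th,
       executions_th,
       sharable_mem_th,
       version_count_th
  from perfstat.stats$statspack_parameter;
```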
What is missing from this set of criteria is execution time. In the case of TM contention, I have a SQL statement with an extremely long elapsed time, but because it does not exceed any of the given thresholds, it is not captured by Statspack. A statement that is blocked on a lock accumulates almost no buffer gets or disk reads while it waits, so none of the resource-based thresholds are ever crossed. Obviously the majority of long-running SQL statements will exceed at least one of these thresholds, but in the case of lock waits they may not. I would have thought that any SQL that runs for more than 10-20 seconds would be worth capturing in Statspack.
It would be possible to change one of the listed thresholds to capture more SQL (e.g. executions => 0), but this means that all SQL is captured, which could lead to excessive data build-up in the Statspack schema.
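For completeness, Statspack does provide a supported procedure for changing these thresholds, so no tables need to be updated by hand. A sketch of lowering the executions threshold, run as PERFSTAT (the parameter name i_executions_th is as I read it in the 10g package; check spcpkg.sql for your version):

```sql
-- Persist an executions threshold of 0, so any statement executed at
-- least once is captured in future snapshots (with the data-volume
-- caveat noted above).
begin
  statspack.modify_statspack_parameter(i_executions_th => 0);
end;
/
```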
I can’t see any way to configure Statspack to capture SQL based solely on its execution time. I guess it would be possible to modify the STATSPACK package, but that introduces risks to the overall stability of Statspack.
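As a workaround, the long-running statement can still be found outside Statspack by querying the shared pool directly, provided the cursor has not yet aged out. A sketch against V$SQL, where ELAPSED_TIME is in microseconds (the 20-second cut-off is just an illustrative figure matching my suggestion above):

```sql
-- Ad-hoc search for statements with a long elapsed time that the
-- Statspack thresholds would miss, e.g. statements stuck on lock waits
select sql_id,
       executions,
       buffer_gets,
       disk_reads,
       round(elapsed_time / 1e6) as elapsed_secs,
       substr(sql_text, 1, 60)   as sql_text
  from v$sql
 where elapsed_time > 20 * 1e6   -- more than 20 seconds in total
 order by elapsed_time desc;
```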