Oracle Wait Events That Everyone Should Know Part -III




This is Part III in the series Oracle Wait Events That Everyone Should Know.

Links to the first two parts:

Oracle Wait Events That Everyone Should Know Part -I

Oracle Wait Events That Everyone Should Know Part -II

Let's get going with the third part.

db file scattered read
The process has issued an I/O request to read a series of contiguous blocks from a data file into the buffer cache, and is waiting for the operation to complete. This typically happens during a full table scan or full index scan.

We should check whether the query should really be using a full table scan. Make sure optimizer statistics are up to date, and use partition pruning to reduce the number of blocks visited.

If a query that has been running fine for a while suddenly clocks a lot of time on the db file scattered read event and there hasn't been a code change, you might want to check whether one or more indexes have been dropped or become unusable.
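
One quick thing to check is whether any index has silently become unusable; the data dictionary makes this easy to see. A minimal sketch, assuming access to the DBA_* views:

-- Non-partitioned indexes that are no longer usable
SELECT owner, index_name, table_name, status
  FROM dba_indexes
 WHERE status = 'UNUSABLE';

-- Partitioned indexes report their status per partition
SELECT index_owner, index_name, partition_name, status
  FROM dba_ind_partitions
 WHERE status = 'UNUSABLE';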

db file sequential read
The process has issued an I/O request to read one block from a data file into the buffer cache, and is waiting for the operation to complete. This typically happens during an index lookup or a fetch from a table by ROWID when the required data block is not already in memory. Do not be misled by the name: despite the word "sequential", this event covers single-block reads, while multiblock reads show up as db file scattered read.

We should check whether the right indexes are being used; the wrong index can make a query perform badly. Make sure optimizer statistics are up to date.
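
Because the parameters of this wait event identify the block being read (P1 = file#, P2 = block#), a current wait can be mapped back to the segment it belongs to. A sketch, assuming DBA access, where :file_no and :block_no are the P1/P2 values taken from the first query:

-- Sessions currently waiting on single-block reads
SELECT sid, p1 AS file_no, p2 AS block_no
  FROM v$session_wait
 WHERE event = 'db file sequential read';

-- Map a file#/block# pair back to the segment being read
SELECT owner, segment_name, segment_type
  FROM dba_extents
 WHERE file_id = :file_no
   AND :block_no BETWEEN block_id AND block_id + blocks - 1;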

db file parallel read
The process has issued multiple I/O requests in parallel to read blocks from data files into memory, and is waiting for all requests to complete. The documentation says this wait event occurs only during recovery, but in fact it also occurs during regular activity when a process batches many single block I/O requests together and issues them in parallel. (In spite of the name, you will not see this wait event during parallel query or parallel DML. In those cases wait events with PX in their names occur instead.)
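
If the Diagnostics Pack is licensed, Active Session History gives a quick view of which statements have recently waited on this event; a sketch:

SELECT sql_id, COUNT(*) AS samples
  FROM v$active_session_history
 WHERE event = 'db file parallel read'
 GROUP BY sql_id
 ORDER BY samples DESC;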

db file parallel write
The process, typically DBWR, has issued multiple I/O requests in parallel to write dirty blocks from the buffer cache to disk, and is waiting for all requests to complete.
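
Since this is a background (DBWR) wait rather than something foreground sessions wait on directly, the usual check is simply whether the average write time looks reasonable system-wide. A sketch (TIME_WAITED and AVERAGE_WAIT are reported in centiseconds):

SELECT event, total_waits, time_waited, average_wait
  FROM v$system_event
 WHERE event = 'db file parallel write';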

direct path read, direct path write
The process has issued asynchronous I/O requests that bypass the buffer cache, and is waiting for them to complete. These wait events typically involve sort segments.

SQL statements with operations that require sorts, such as ORDER BY, GROUP BY, UNION, DISTINCT, and ROLLUP, write sort runs to the temporary tablespace when the input is larger than the work area available in the PGA.

Make sure the optimizer statistics are up to date and the query is using the right driving table. Check whether the composite index's columns can be rearranged to match the ORDER BY clause and avoid the sort entirely.

Make sure an appropriate value of PGA_AGGREGATE_TARGET is set. If possible, use UNION ALL instead of UNION, since UNION ALL does not need the duplicate-eliminating sort.
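
To see whether running statements are spilling their sorts or hash joins to the temporary tablespace right now, the active work areas can be inspected. A sketch, assuming access to the V$ views:

-- Active work areas that have spilled to temp
-- (NUMBER_PASSES > 0 indicates one-pass or multi-pass execution)
SELECT sql_id, operation_type, actual_mem_used, number_passes, tempseg_size
  FROM v$sql_workarea_active
 WHERE tempseg_size IS NOT NULL;

-- Current PGA target
SELECT name, value
  FROM v$parameter
 WHERE name = 'pga_aggregate_target';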

Shared pool latch

The shared pool latch protects critical operations when allocating and freeing memory in the shared pool. Contention for the shared pool and library cache latches is mainly due to intense hard parsing. A hard parse applies to new cursors and to cursors that have been aged out of the library cache and must be parsed again.
Parsing a new SQL statement is expensive, both in terms of CPU and in the number of times the library cache and shared pool latches may need to be acquired and released.

Eliminating literal SQL in favor of bind variables is also useful for reducing shared pool latch contention.
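
The effect of literal SQL shows up in the parse statistics, and the fix is to use bind variables. A sketch; the ORDERS table and :order_id bind are hypothetical, used only to illustrate the rewrite:

-- A hard-parse count close to the total parse count suggests cursors are not being shared
SELECT name, value
  FROM v$sysstat
 WHERE name IN ('parse count (total)', 'parse count (hard)');

-- Literal SQL: every distinct value creates a new cursor and a hard parse
--   SELECT * FROM orders WHERE order_id = 1001;
-- Bind-variable version: one shareable cursor, parsed once and re-executed
--   SELECT * FROM orders WHERE order_id = :order_id;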

