db file scattered read 

Decided to give the analysis reports at Oraperf a try.  Began taking
15 minutes worth of bstat/estat at the top of every hour through the
work day.  Fed the results to Oraperf.  Consistently shows
db_file_scattered_read to be by far the biggest culprit on wait time.

Total response time: 10 to 30 seconds
CPU: 1 to 6 seconds
Wait: 8 to 27 seconds
db_file_scattered_read is 57 to 62 pct of total wait time.
db_file_sequential_read is third biggest wait time, with 3 to 7 pct of
total wait.

Recommendation was to reduce db_file_multiblock_read_count in order to
get the optimizer to favor index over table scans.  So we reduced it
from 256 to 32.  Wait times went UP, ranging from 16 to 44 seconds,
with  db_file_scattered_read at 37 to 77 pct of total wait time.
db_file_sequential_read went to 2nd biggest wait event with 24 to 37
pct of total wait.

Read some more docs, browsed MetaLink, decided that perhaps at 256 we
were over some boundary at which Oracle took over and imposed its own
values and at 32 it took mine, which were worse than what it had
imposed at 256.  Some readings on MetaLink suggested for OLTP apps on
NT to set db_file_multiblock_read_count to 8, so we went with that.  The
phone lit up and I had to change it back to 32 before I could even get
a reading.  A few more calls from users and we went back to 256.

While I expected a percentage shift of total wait to move from
scattered read to sequential read, I really didn't expect the total
response to go down the toilet the way it did.  The worst response time
reading I had still showed scattered reads to be the major culprit.
The shift in percentage of total wait wasn't so much from scattered
reads to sequential as it was from SQLNet Msg from DBLink to
sequential reads.

Comments?  Observations?

Oracle 8.0.5 SE, NT 4.0, OLTP app.
--
Ed Stevens
(Opinions expressed do not necessarily represent those of my employer.)



Wed, 24 Mar 2004 05:28:14 GMT

The best tuning in your case is tuning SQL statements. Look at explain
plan and rewrite them or add hints. At the same time make sure you
have updated statistics (if you're not using RBO), and properly built,
well structured indexes. If you can't control the third-party SQLs,
control the indexes.

System tuning always comes second. If you do want to play with the
multiblock read count, don't jump between the two extremes. Benchmark with
various values in the middle.
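
For example (hypothetical table and column names, just to show the shape
of it; PLAN_TABLE comes from utlxplan.sql):

```sql
-- Populate PLAN_TABLE for a suspect statement
EXPLAIN PLAN FOR
  SELECT * FROM orders WHERE customer_id = 123;

-- Show the plan; look for FULL access on big tables
SELECT LPAD(' ', 2 * level) || operation || ' ' || options || ' ' ||
       object_name AS plan
FROM   plan_table
START WITH id = 0
CONNECT BY PRIOR id = parent_id;

-- Keep CBO statistics current (skip this if you're on the RBO)
ANALYZE TABLE orders COMPUTE STATISTICS;
```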

Yong Huang



Wed, 24 Mar 2004 11:53:15 GMT

I would suggest tracking down the SQL that is doing these full
table scans.
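
One quick way, if memory serves on the column names, is to pull the
heaviest readers out of v$sqlarea -- full scans usually float to the top:

```sql
-- Statements doing the most physical I/O
SELECT disk_reads, executions, sql_text
FROM   v$sqlarea
WHERE  disk_reads > 10000
ORDER  BY disk_reads DESC;
```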

Steve





Fri, 26 Mar 2004 22:03:00 GMT
Ed-

Since you are running 8.0.5, you might consider installing Statspack
instead of using estat. Get the 8.1.6 version of Statspack from
Metalink, read the instructions to retrofit to 8.0.x and invest less
than five minutes to upgrade your performance monitoring. You probably
want to give it a "tools" tablespace or something not on your
heaviest-hit disk. It's no real performance hog, contrary to common fears.
Excellent tool, and oraperf understands it. No-brainer to plug in. Use
the NT "at" or "winat" facility to schedule the snaps. Use a 30 minute
snap interval for normal performance monitoring, with
timed_statistics=true and run it that way during "business" hours of
usage.
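
The snap itself is just a one-liner from sql*plus under the PERFSTAT user
that Statspack creates, so it schedules easily:

```sql
-- Taken as PERFSTAT; have "at"/"winat" fire this every 30 minutes
EXECUTE statspack.snap;

-- Later, run spreport.sql (shipped with Statspack) and pick two
-- snap ids to report on that interval.
```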

Now to your specific issue:

Take a look at the tablespace and file I/O on the report. Estat has this
information, but you have to calculate average read/write times for
yourself. Look for any read/write average that stands out higher than
the rest. Normal read times ought to be in the 20 millisecond range or
less; writes are typically higher, but > 40 ms is high. So, you are looking
for overall slowness of disk but - most important - spikes of contention
on a particular disk / tablespace. Having found that, think about
spreading objects around to smooth out the I/O. If you see a file with
average I/O around 150 milliseconds, for example, that baby is in
trouble. Temporary tablespace and rollback segment tablespaces should
have their own drives in any case; don't put them with high-activity
OLTP objects. Think "smooth" and "distributed" when it comes to I/O
tuning.
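
If you'd rather have the averages straight from the instance than
hand-calculate them from estat, something like this against v$filestat
should do (times are in centiseconds, hence the *10, and
timed_statistics must be true):

```sql
-- Average read/write time per data file, in milliseconds
SELECT d.name,
       f.phyrds,
       ROUND(f.readtim  * 10 / GREATEST(f.phyrds, 1), 1)  AS avg_read_ms,
       f.phywrts,
       ROUND(f.writetim * 10 / GREATEST(f.phywrts, 1), 1) AS avg_write_ms
FROM   v$filestat f, v$datafile d
WHERE  f.file# = d.file#
ORDER  BY avg_read_ms DESC;
```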

db_file_multiblock_read_count really needs to be as accurate as
possible. If yours is set artificially high and a realistic setting
makes things worse, then probably something else is wrong, like a bad
CBO plan or two. Statspack will show you the sql statements with the
highest buffer gets.
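
If you want that list without waiting for a Statspack interval, v$sqlarea
has it too:

```sql
-- Top statements by logical I/O; a bad CBO plan usually shows up here
SELECT buffer_gets, executions,
       ROUND(buffer_gets / GREATEST(executions, 1)) AS gets_per_exec,
       sql_text
FROM   v$sqlarea
ORDER  BY buffer_gets DESC;
```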

Having identified some baddies, you can tune individual statements with
whatever tool you like. I get great results from sql*plus with
autotrace. Dumb little tool but simple and direct. Also, you can use the
10046 event to trace wait times and get bind values for shareable sql,
but it sounds like your real problem is I/O.
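
Turning the trace on for your own session is just (the trace file lands
in user_dump_dest):

```sql
-- Level 12 = wait events plus bind values
ALTER SESSION SET timed_statistics = TRUE;
ALTER SESSION SET events '10046 trace name context forever, level 12';

-- ... run the suspect statement(s) ...

ALTER SESSION SET events '10046 trace name context off';
```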

You might need to invest in more disk if overall I/O is slow, but if you
find the bottleneck as I suspect, you can simply move busy objects to
less contentious drives. I have seen throughput quadruple with a simple
object redistribution.

If the Statspack report seems overwhelming, post it here and I will give
you a tour of the report.

By the way, once you plug in the 8.1.6 Statspack, you can easily upgrade
to the 8.1.7 version with a script. Better reporting, shows the entire
sql statements, gives nice enqueue reporting, stuff like that.

- ricky




Tue, 30 Mar 2004 10:12:22 GMT
Thanks for the reply.  At this point I am suspecting a combination of
I/O contention and inherently poor sql.  On smoothing the I/O,
management has insisted on putting all the available devices into a
single RAID-5 array.  I finally managed to talk them into breaking 2
devices off into a mirror set for my archive log files, but they are
pretty adamant about not getting a hardware setup any more complex
than that.  To add insult to injury, two of our servers are hosting
multiple databases.  The result of all that is that any "relocation"
of objects is just an illusion and means nothing when it gets to the
physical I/O.  I would love to get some hard numbers on our system to
show what the performance cost of this configuration is.

I'll give Statspack a try.  Thanks for the pointer.



--
Ed Stevens
(Opinions expressed do not necessarily represent those of my employer.)


Sun, 04 Apr 2004 03:28:47 GMT
 