DB2 5.2 Performance Testing

We are running DB2 5.2 (Fixpack 11) and are doing some performance testing.
We have tried to get them to upgrade to 7.1 but they insist on 5.2 for the
time being.  We set up a test with DB2 running on an RS6000 7053 with 256
Meg of RAM that creates sales transactions.  The client is running on NT 4
SP6 via a 100 Mbit network line.

The bufferpools have been increased (db2 alter bufferpool IBMDEFAULTBP SIZE
20000)
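A quick way to verify that the bufferpool change took effect is to read the definition back from the catalog; this sketch assumes the default 4 KB page size:

```sql
-- With NPAGES = 20000 and PAGESIZE = 4096, this bufferpool alone
-- consumes roughly 80 MB of memory.
SELECT BPNAME, NPAGES, PAGESIZE
FROM SYSCAT.BUFFERPOOLS;
```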

After 100,000 or so inserts into the sales tables (Headers, Details, and
Payment) there is a noticeable slowdown in the time to insert a new sale
(it starts at 2 seconds and increases to 10-11 seconds). Checking this
newsgroup, the most common recommendation is to run RUNSTATS.  I do this and the
time per sale comes down to the 2-3 second range.  This lasts until another
100,000 or so inserts are done.

This doesn't seem right to me.  Why would I have to keep doing RUNSTATS all
of the time?  Shouldn't the database be able to deal with straight INSERTS?
I was under the impression that RUNSTATS affected the query plans, etc. but
the majority of the queries are to inventory items that should already be
cached.  The only new lookups would be for the constraints on the details
and payments ensuring that a header exists.

Can someone set me straight on this?  Do I really have to be doing RUNSTATS
all of the time just to keep the performance acceptable?

Thanks for your help

Richard Speiss



Sat, 05 Jul 2003 00:42:35 GMT
 DB2 5.2 Performance Testing

Hi Richard,

Quote:
> This doesn't seem right to me.  Why would I have to keep doing RUNSTATS
> all of the time?  Shouldn't the database be able to deal with straight
> INSERTS?  I was under the impression that RUNSTATS affected the query
> plans, etc. but the majority of the queries are to inventory items that
> should already be cached.  The only new lookups would be for the
> constraints on the details and payments ensuring that a header exists.

> Can someone set me straight on this?  Do I really have to be doing
> RUNSTATS all of the time just to keep the performance acceptable?

I can't venture into details, because I don't know either :-), but if
you have reason to believe that things like RUNSTATS and REORG do make a
difference - welcome, you are right.

We do a global RUNSTATS on our data warehouse (mid-sized, takes about
half an hour) every night, and a global REORG every weekend. It helps to
use the option WITH DISTRIBUTION AND DETAILED INDEXES ALL for the
RUNSTATS - but beware: if you build up detailed statistics, you had better
keep them up to date.
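A minimal sketch of that nightly refresh, using the RUNSTATS syntax named above; the SALES schema and table names are assumptions for illustration:

```sql
-- Refresh table, index, and distribution statistics after heavy insert
-- activity.  WITH DISTRIBUTION AND DETAILED INDEXES ALL collects the
-- detailed statistics mentioned above; once collected, keep them current.
RUNSTATS ON TABLE SALES.HEADERS WITH DISTRIBUTION AND DETAILED INDEXES ALL;
RUNSTATS ON TABLE SALES.DETAILS WITH DISTRIBUTION AND DETAILED INDEXES ALL;
RUNSTATS ON TABLE SALES.PAYMENT WITH DISTRIBUTION AND DETAILED INDEXES ALL;
```

Dynamic SQL picks up the new statistics on the next statement preparation; static packages would need a rebind before new access plans can be chosen.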

If the DB2 optimizer feels it has no valid statistics at hand, it might
completely change its execution plan, and therefore you might really
want to do this RUNSTATS/REORG thing. It can make the difference
between minutes and hours.

Johann Sander

Sent via Deja.com
http://www.deja.com/



Sat, 05 Jul 2003 04:59:49 GMT
 DB2 5.2 Performance Testing
Everyone seems to agree that RUNSTATS and REORG are good to do, but no one really
seems to understand why <sigh>.  Does anyone know if we still have to do this
constant RUNSTATS on 7.1, or is it smart enough to take care of itself?

Thanks
Richard Speiss




Sat, 05 Jul 2003 23:15:43 GMT
 DB2 5.2 Performance Testing
Statistics are used by the query optimizer to model the cost of
satisfying a query in various ways and to choose the cheapest
plan. Let's take a trivial case:
        select * from t1 where c1 = 1
What's better, an index access to find out which rows have c1=1 followed
by a fetch from the base table for only the rows I need? Or a straight
table scan?
- If t1 has only one row, then a straight table scan is cheaper.
- If t1 has a million rows, then an index scan is probably better. Oh,
but wait:
- If "c1=1" is satisfied by 99% of the rows in the index, then a table
scan is still better.

The optimizer uses the statistics to find out these facts. If I modify
the table significantly, then the stats no longer reflect reality and
the optimizer might make an inappropriate choice. If the stats say t1
has 1 row but in reality it has a million, then you'll get an
inappropriate plan.

In v7.1 the stats are not automatically updated - you still have to do
RUNSTATS. But maybe you can use the above explanation of their purpose
to figure out just how often, and on what tables, you should do a
RUNSTATS. Typically, if I update 20 rows in a million-row table, it's
not going to make a heck of a difference. If I delete half the rows then
it probably matters. If my queries are very straightforward with obvious
best plans, then it won't matter as much as if my queries are complex
and involve multi-table joins where getting the join order wrong can
increase the execution time by orders of magnitude.
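One way to see what the optimizer currently "believes" is to query the statistics catalog; the SALES schema here is an assumption for illustration:

```sql
-- CARD = -1 means statistics have never been collected for the table.
-- A CARD far below the real row count after heavy inserts is exactly the
-- stale-statistics situation described above.
SELECT TABNAME, CARD, NPAGES, STATS_TIME
FROM SYSCAT.TABLES
WHERE TABSCHEMA = 'SALES';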

Hope this helps,

Quote:

> Everyone seems to agree that they are good to do but no one really seems to
> understand why <sigh>.  Does anyone know if we still have to do this constant
> RUNSTATS on 7.1 or is it smart enough to take care of itself?

> Thanks
> Richard Speiss



--

Kaarel Truuvert
DB2 UDB Development



Sun, 06 Jul 2003 00:51:53 GMT
 DB2 5.2 Performance Testing
Thanks for the reply.  Your explanation makes the process far clearer to me
and I think it will help me out a lot.

Thanks again

Richard Speiss



Tue, 08 Jul 2003 05:37:08 GMT
 DB2 5.2 Performance Testing
Have you investigated the use of the volatile table attribute? (Just to make a test
and see how it affects performance.)
ALTER TABLE TABLENAME VOLATILE CARDINALITY  (read the command documentation)
(See also the APPEND option, ... ?)

Are the detail and payments indexes optimal? (Do you RUNSTATS/REORG these also, and
how often?)

Do you have insert triggers that could have an indirect impact?
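A sketch of the two suggestions above, with a hypothetical table name; as the post notes, check the 5.2 command documentation, since the VOLATILE attribute may not be available on that release:

```sql
-- Mark the table volatile: the optimizer then favors index access and
-- relies less on possibly stale cardinality statistics.
ALTER TABLE SALES.DETAILS VOLATILE CARDINALITY;

-- APPEND ON: new rows go at the end of the table, skipping the free-space
-- search on each insert (can help insert-heavy workloads like this one).
ALTER TABLE SALES.DETAILS APPEND ON;
```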




Wed, 09 Jul 2003 14:56:53 GMT
 DB2 5.2 Performance Testing

Hi Richard:

How much free memory do you have when this is going on?  256 MB doesn't seem
like much for this application.
I calculate that 20,000 4 KB pages would be 80 MB of memory for just that
database.
I bet you are close to capacity once everything is totaled up.

Bill
Saint Louis, MO USA



01/15/2001 10:42 AM, comp.databases.ibm-db2:

- Show quoted text -



Sat, 12 Jul 2003 11:42:25 GMT
 DB2 5.2 Performance Testing

I understand RUNSTATS was taught in the IBM DB2 7.1 class last week in
Saint Louis.  I'll let you draw your own conclusions...

Bill
Saint Louis, MO USA



01/16/2001 09:15 AM, comp.databases.ibm-db2:

- Show quoted text -


Sun, 13 Jul 2003 12:39:49 GMT
 DB2 5.2 Performance Testing
More details related to my comment on "optimal indexes":

You may be experiencing index degradation.
Do you use clustered indexes?
Is the PCTFREE of the table/index well tuned for your needs?

Have you read about multi-page allocation (since you do lots of inserts...)?
The db2empfa tool is in sqllib/bin (verify whether it's available on 5.2 ???).

What do the reorg utilities report after you have inserted your 100k rows?
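The check above can be sketched from the DB2 command line; the table name is assumed for illustration:

```sql
-- REORGCHK (a CLP command) refreshes statistics and prints reorganization
-- indicators; a '*' in the REORG column flags a table or index that would
-- benefit from a REORG after the 100k inserts.
REORGCHK UPDATE STATISTICS ON TABLE SALES.DETAILS;
```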

PM




Fri, 18 Jul 2003 06:40:28 GMT
 
 [ 9 posts ] 

 Relevant Pages 

1. ADO 2.5, DB2 5.2, MTS Testing

2. Strange Insert/Performance Problem with DB2/NT 5.2

3. DB2 Newbie needs help with DB2 5.2 ODBC: can browse schema, but get no data

4. Error in Java Program, when it accesses DB2 UDB 7.2 from DB2 UDB 5.2 Client

5. Connecting to db2 v7.1(linux) from db2 5.2 cc(nt)

6. Connect to db2 5.2 Server from db2 7.1

7. Need automated testing software for Clipper 5.2

8. Migrating from IBM DB2 5.2

9. SQL Server 7.0 to DB2 5.2 connection no working

10. DB2 UDB 5.2 and Decimal Fields

11. Java DB2 5.2 known bugs

12. ADO 2.5 & DB2 5.2


 
Powered by phpBB® Forum Software