Why do Table Indexes Need Rebuilding? 
 Why do Table Indexes Need Rebuilding?

We have a 16-bit Windows ODBC-based application which builds and
maintains a database.  The primary database we use is SQL Server
(6.0/6.5).  The problem we are having is that as our databases grow,
the performance (SELECT, UPDATE, etc.) declines rapidly.  We find,
however, that by programmatically dropping and recreating our assorted
table indexes (or rebuilding them through Enterprise Manager), we are
able to regain the performance we expect.  I am wondering why this
process is necessary, though.  Isn't it the database's job to maintain
efficient indexes?

This has become a major issue as our customers with larger databases
(millions of records) are complaining not only of poor performance,
but of the long time necessary to rebuild the table indexes.

A few items of note:

All tables and indexes are created programmatically.

We are using very generic DDL to create the indexes, and use no SQL
Server-specific features, such as clustered indexes.

The number of indexes per table ranges from 1 to 3.  We have roughly
20 tables in a typical application.

We have *no* primary or foreign keys defined right now.  I'm wondering
if this might be a cause of concern.
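For what it's worth, the kind of declaration we would be adding looks roughly like this (table and column names invented for illustration):

```sql
-- Hypothetical example: declaring a primary key at table-creation time.
-- In SQL Server 6.x a PRIMARY KEY constraint builds a unique clustered
-- index by default, so NONCLUSTERED is specified here to stay with our
-- generic, non-clustered layout.
CREATE TABLE orders
(
    order_id  int         NOT NULL,
    cust_name varchar(30) NOT NULL,
    CONSTRAINT pk_orders PRIMARY KEY NONCLUSTERED (order_id)
)
```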

I would appreciate any help!

Thanks,
Brent Southard



Sat, 29 Apr 2000 03:00:00 GMT
 Why do Table Indexes Need Rebuilding?

steve:

Thanks for your help.  I think we may also want to institute an
automated nightly script similar to yours.  However, I'm not sure I
did a very good job describing the extent of our problem.  Please see
my own message followup for more information.

Cheers,
Brent

On Tue, 11 Nov 1997 07:00:25 -0800, "Steve Robinson"

Quote:

>Brent,

>The reason you need to rebuild your indexes is that, over time, as
>data is inserted and removed, there is a possibility things will become
>fragmented and the original fill factor will no longer exist (it is not
>maintained by SQL Server after the original build).  SQL Server does
>reclaim space, but only to a certain level.  Also, as data is added, the
>statistics/density become out of date and may be misleading.  When
>stored procedures pass variables, it is this density value the optimiser
>uses to judge the viability of the index - if it is out of date, the
>wrong execution plan may be used.
>You can run UPDATE STATISTICS on the tables so that the latter problem
>is removed, but I tend to run DBCC DBREINDEX on all the tables in the
>database (this depends on your time window, though).  Don't forget to
>run sp_recompile on the tables as well, so that triggers, views, and
>stored procedures can make use of the new data.  Here is the script
>that I run on a nightly basis.

>[nightly reindex script snipped -- it appears in Steve's own post later
>in the thread]

>--
>I hope this was of use.

>Steve Robinson




Sat, 29 Apr 2000 03:00:00 GMT
 Why do Table Indexes Need Rebuilding?


Quote:

>[original post quoted in full -- snipped; see the top of the thread]

Let me add the following new information to my original post...

The performance problem is most noticeable for some of our customers
who are adding many records every day -- up to 40,000 in a single
table.

Every few days or so, these customers will archive some of their data.
This is accomplished by copying a selected set of records from various
tables into tables of the same structure in another database.  After
this data has been 'archived', the records in the primary database are
deleted.
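In rough terms, the archive step looks something like this (table,
database, and column names invented for illustration):

```sql
-- Hypothetical sketch of the archive-then-delete process: copy the
-- selected rows into an identically structured table in the archive
-- database, then remove them from the primary database.
INSERT INTO archive_db..orders
        SELECT * FROM primary_db..orders
        WHERE order_date < '19970101'

DELETE FROM primary_db..orders
        WHERE order_date < '19970101'
```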

Apparently, some customers are reporting that the performance problems
occur immediately after this deletion process.  It almost appears that
the database is not using the indexes at that point.  A query which
would usually take a matter of seconds may take 20 minutes or so!  The
indexes are certainly still there, but they are either *extremely*
inefficient or are being bypassed altogether.
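For anyone trying to reproduce this, one way to see whether an index is
actually being used is to ask the optimiser for its plan (query and
table names invented for illustration; SQL Server 6.x syntax):

```sql
-- Show the execution plan without running the query.  If the plan says
-- "Table Scan" rather than naming an index, the index is being bypassed.
SET SHOWPLAN ON
SET NOEXEC ON

SELECT * FROM orders WHERE order_id = 12345

SET NOEXEC OFF
SET SHOWPLAN OFF
```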

I hope this additional information may help some of you shed some
light on the problem.

Thanks again,
Brent



Sat, 29 Apr 2000 03:00:00 GMT
 Why do Table Indexes Need Rebuilding?

Quote:
> Apparently, some customers are reporting that the performance problems
> occur immediately after this deletion process.  It almost appears that
> the database is not using the indexes at that point.  A query which
> would usually take a matter of seconds may take 20 minutes or so!  The
> indexes are certainly still there, but they are either *extremely*
> inefficient or are being bypassed altogether.

Does the problem occur immediately after the deletion is started, continue
for some period of time, then go back to normal?  That is, at some point
after the deletion, does the performance return to what it was before,
without the index rebuild?  

Scott



Sat, 29 Apr 2000 03:00:00 GMT
 Why do Table Indexes Need Rebuilding?

Brent,

The reason you need to rebuild your indexes is that, over time, as
data is inserted and removed, there is a possibility things will become
fragmented and the original fill factor will no longer exist (it is not
maintained by SQL Server after the original build).  SQL Server does
reclaim space, but only to a certain level.  Also, as data is added, the
statistics/density become out of date and may be misleading.  When
stored procedures pass variables, it is this density value the optimiser
uses to judge the viability of the index - if it is out of date, the
wrong execution plan may be used.
You can run UPDATE STATISTICS on the tables so that the latter problem
is removed, but I tend to run DBCC DBREINDEX on all the tables in the
database (this depends on your time window, though).  Don't forget to
run sp_recompile on the tables as well, so that triggers, views, and
stored procedures can make use of the new data.  Here is the script
that I run on a nightly basis.

CREATE PROCEDURE reindex_all_tables
AS

DECLARE @tablename varchar(30)
DECLARE @header varchar(60)

DECLARE tnames_cursor CURSOR FOR SELECT name FROM sysobjects
        WHERE type = 'U'
OPEN tnames_cursor

FETCH NEXT FROM tnames_cursor INTO @tablename
WHILE (@@fetch_status <> -1)
BEGIN
        /*
                Because @@fetch_status may be -2,
                -1, or 0, all three cases must be tested. In this case, if
                a table has been dropped since the time this stored
                procedure was executed, it will be skipped. A
                successful fetch (0) will cause the DBCC within the
                BEGIN..END loop to execute.
        */
        IF (@@fetch_status <> -2)
        BEGIN
                PRINT " "
                SELECT @header = "***** Reindexing " + @tablename + " *****"
                PRINT @header
                EXEC ("DBCC DBREINDEX (" + @tablename + ")")
                EXEC ("sp_recompile " + @tablename)
                PRINT " "
        END
        FETCH NEXT FROM tnames_cursor INTO @tablename
END
PRINT " "
PRINT "*************"
PRINT " "
PRINT "DBCC has been run against all user-defined tables."
CLOSE tnames_cursor
DEALLOCATE tnames_cursor
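If the nightly window is too short for a full DBCC DBREINDEX, the
lighter-weight alternative mentioned above can be run table by table
(table name invented for illustration):

```sql
-- Refresh the distribution statistics for all of one table's indexes
-- without rebuilding them, then flag dependent objects for recompilation.
UPDATE STATISTICS orders
EXEC sp_recompile orders
```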

--
I hope this was of use.

Steve Robinson



Sat, 29 Apr 2000 03:00:00 GMT
 Why do Table Indexes Need Rebuilding?

Scott:

Not that I've heard of... why?

Thanks,
Brent

On Tue, 11 Nov 1997 09:07:36 -0800, "Scott Nichol"

Quote:

>> Apparently, some customers are reporting that the performance problems
>> occur immediately after this deletion process.  It almost appears that
>> the database is not using the indexes at that point.  A query which
>> would usually take a matter of seconds may take 20 minutes or so!  The
>> indexes are certainly still there, but they are either *extremely*
>> inefficient or are being bypassed altogether.

>Does the problem occur immediately after the deletion is started, continue
>for some period of time, then go back to normal?  That is, at some point
>after the deletion, does the performance return to what it was before,
>without the index rebuild?  

>Scott



Sat, 29 Apr 2000 03:00:00 GMT
 
 [ 6 posts ] 

 
Powered by phpBB® Forum Software