Large table update/vacuum PLEASE HELP!

Hi, everybody!

This is the continuation of yesterday's trouble with updating a large table.
Please see my previous message - Alter/update large tables - VERRY annoying behaviour

The only responses I got to that were, pretty much, complaints about how annoying
I was, posting to several mailing lists at once (I am not doing that any more) and
posting messages in HTML (I hope that is fixed now too)...

Well... EVEN THOUGH I posted to different lists, I got no response! Is it because I am asking
something stupid? Is my English lousy, so that people just don't understand what I am talking about?
Or am I just not welcome here at all?

I am sorry, if I sound too irritated... that's just because I am :-(
You see, I ran that vacuum command on that table... it took about 24 hours... AND STILL DID NOT FIX ANYTHING!

select * from a limit 1;

Takes about 30 minutes and I have no idea what it is thinking about for so long!

If anyone has any idea at all what could be the problem, PLEASE HELP!

Thanks a lot!

Dima



Sat, 02 Oct 2004 23:20:38 GMT
 Large table update/vacuum PLEASE HELP!


Quote:
>select * from a limit 1;

>Takes about 30 minutes and I have no idea what it is thinking about for so
>long!

>If anyone has any idea at all what could be the problem, PLEASE HELP!

If a is still something like:

create table a
(
    id int primary key,
    some_data int
);

Try:

vacuum analyze;
explain select * from a order by id limit 1;
select * from a order by id limit 1;

Hopefully the results are good.

Good luck,
Link.

---------------------------(end of broadcast)---------------------------
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Sat, 02 Oct 2004 23:54:57 GMT
 Large table update/vacuum PLEASE HELP!

Quote:


>> select * from a limit 1;

>> Takes about 30 minutes and I have no idea what it is thinking about
>> for so long!

>> If anyone has any idea at all what could be the problem, PLEASE HELP!

> If a is still something like:

> create table a
> (
>    id int primary key,
>    some_data int
> );

Well... Not quite.
The whole problem started when I merged a and b tables together - so,
now a looks like:
create table a
(
    id int primary key,
    some_data int,
    some_other_data int
);

(
I added the last column with alter table and populated it with
update a set some_other_data = b.some_other_data from b where a.id = b.id;
That's when my nightmare started
)

Quote:

> Try:

> vacuum analyze;

Yeah... I did that yesterday... It took about 24 hours (!) to run... And
I don't see any difference in the behaviour :-(

Quote:

> explain select * from a order by id limit 1;

Aha... This is an interesting thing to try. Thanks.
Unfortunately, I can't try that right now either! :-(
I am running 'vacuum full' on that table (out of despair), and, despite
what I have repeatedly heard about vacuums in 7.2, my 'explain' command
seems to be hopelessly waiting on a lock taken by that vacuum :-(
Is it supposed to be that way?

Quote:

> select * from a order by id limit 1;

As I said, I can't check it right now... My understanding is that you
suggest that I force it to scan the index rather than the table
itself... This very well may help the immediate problem (once it's done
with the vacuum, which I expect some time tomorrow :-()...

BUT, the main issue is that I can't really do a sequential scan on that
table (not in any reasonable time, anyway) the way it is...
I am just used to thinking that a sequential scan of a table is the
most basic operation I can imagine... if my database is so screwed up
that even that doesn't work, I won't be able to make much use of it
anyway... :-(
Is that right?

Quote:
> Good luck,

Thanks! :-)

Dima




Sun, 03 Oct 2004 00:49:02 GMT
 Large table update/vacuum PLEASE HELP!
[snip]
Quote:

> > Try:

> > vacuum analyze;

> Yeah... I did that yesterday... It took about 24 hours (!) to run... And
> I don't see any difference in the behaviour :-(

> > explain select * from a order by id limit 1;

> Aha... This is an interesting thing to try. Thanks.
> Unfortunately, I can't try that right now either! :-(
> I am running 'vacuum full' on that table (out of despare), and, despite

[snip]

Just to be sure we all understand, you ran
  vacuum analyze;

you didn't just run vacuum or vacuum full?

The analyze part is important; it creates statistics for the query
planner.
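
A quick way to check, assuming the standard `pg_stats` system view (available
in 7.2):

```sql
-- Reclaim dead space *and* refresh the planner's statistics:
VACUUM ANALYZE a;

-- If ANALYZE has run, per-column statistics show up here:
SELECT attname, n_distinct, null_frac
FROM pg_stats
WHERE tablename = 'a';
```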

--

Tom Jenkins
Development InfoStructure
http://www.devis.com




Sun, 03 Oct 2004 04:34:09 GMT
 Large table update/vacuum PLEASE HELP!

Quote:

>Unfortunately, yes.  VACUUM FULL is effectively the 7.1 and earlier
>vacuum.  Normal vacuum wouldn't have helped you in this case since it
>wouldn't compress the table (IIRC it only moves tuples within a block
>rather than between them, so all the dead blocks at the beginning are
>still there).

Aha! This (finally) gives me some hope! If I read you correctly, are
you saying that, once my 'vacuum full' is finished, I'm finally back in
business?
Thank you very much!

If you are still not tired of me, could you also give me an idea of the
extent of this problem, while you are at it? I mean, how bad is it really?
I will never update the whole table from now on (I realize now that
it's much cheaper to just recreate it)...
But in general, if I put this database into production, I am going to
routinely update about 10-15% of all the rows in that table every
month... How bad is that?
Am I going to need a 'vacuum full' after every update? Or how often
would it be practical to do that?
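
(By "recreate it" I mean something along these lines; a sketch only, with the
a_new name, the transaction wrapper, and the index name being illustrative:)

```sql
-- Write the merged rows once instead of UPDATE-ing every row in place:
BEGIN;
CREATE TABLE a_new AS
    SELECT a.id, a.some_data, b.some_other_data
    FROM a JOIN b ON a.id = b.id;
DROP TABLE a;
ALTER TABLE a_new RENAME TO a;
-- CREATE TABLE AS copies no constraints, so restore the unique index
-- that backed the primary key:
CREATE UNIQUE INDEX a_pkey ON a (id);
COMMIT;
```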

Also, do you have any idea why vacuum takes so long (24 hours
sounds a little excessive, doesn't it)? I've got a decent machine (4
CPUs, 750 MHz, 8 GB of RAM), so hardware shouldn't be a problem...
Could it be something wrong with my configuration?

Here is my postgresql.conf (with all the comments removed - just the
stuff I changed from the default)...
Does anything look wrong to you here?

tcpip_socket = true
max_connections = 100
shared_buffers = 64000
max_locks_per_transaction = 640
wal_buffers = 80
sort_mem = 10240
wal_files = 64
checkpoint_segments = 20
checkpoint_timeout = 600
stats_command_string = true
stats_row_level = true
stats_block_level = true
deadlock_timeout = 300000

Thanks again for giving my hope back! :-)
I really appreciate your response!

Dima




Sun, 03 Oct 2004 06:45:08 GMT
 Large table update/vacuum PLEASE HELP!

Quote:

> But in general, if I put this database into production, I am going to
> routinely update about 10-15% of all the rows in that table every
> month... How bad it is?
>  Am I going to need a 'vacuum full' after every update? Or how often
> would it be practical to do that?

I wouldn't recommend a VACUUM FULL at all.  Just do plain VACUUMs on
a regular basis, and accept the 10% or so storage overhead.

VACUUM FULL is good for the sort of situation where you've updated all
or most of the rows at one time, and now you have a factor-of-2 storage
overhead; you need to physically compact the table.  But the price of
doing that is high enough that I wouldn't do it to save 10-15%.
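
"On a regular basis" typically means a scheduled job; for example, a nightly
crontab entry using the bundled vacuumdb utility (the database name is a
placeholder):

```
# Plain VACUUM ANALYZE every night at 3am; -z adds the ANALYZE step.
# No FULL, so it does not hold the long-lived exclusive lock.
0 3 * * * vacuumdb -z mydb
```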

                        regards, tom lane




Sun, 03 Oct 2004 07:39:14 GMT
 Large table update/vacuum PLEASE HELP!

Quote:

>I wouldn't recommend a VACUUM FULL at all.  Just do plain VACUUMs on
>a regular basis, and accept the 10% or so storage overhead.

>VACUUM FULL is good for the sort of situation where you've updated all
>or most of the rows at one time, and now you have a factor-of-2 storage
>overhead; you need to physically compact the table.  But the price of
>doing that is high enough that I wouldn't do it to save 10-15%.

>                    regards, tom lane

I am not worried about storage overhead at all at this point, but rather
about performance degradation when it
has to scan through all those dead tuples in the table and there are
LOTS of them :-(

Thanks!

Dima




Sun, 03 Oct 2004 22:01:16 GMT
 Large table update/vacuum PLEASE HELP!

Quote:


> >I wouldn't recommend a VACUUM FULL at all.  Just do plain VACUUMs on
> >a regular basis, and accept the 10% or so storage overhead.

> >VACUUM FULL is good for the sort of situation where you've updated all
> >or most of the rows at one time, and now you have a factor-of-2 storage
> >overhead; you need to physically compact the table.  But the price of
> >doing that is high enough that I wouldn't do it to save 10-15%.

> >                       regards, tom lane

> I am not worried about storage overhead at all at this point, but rather
> about performance degradation when it
> has to scan through all those dead tuples in the table and there are
> LOTS of them :-(

In the 10% case, you should be within the realm where the table's
steady-state size is only around that much larger, given reasonably
frequent normal VACUUMs and an appropriately sized free space map.




Mon, 04 Oct 2004 00:35:52 GMT
 Large table update/vacuum PLEASE HELP!
Quote:

>In the 10% case, you should be within the realm where the table's
>steady-state size is only around that much larger, given reasonably
>frequent normal VACUUMs and an appropriately sized free space map.

Are you saying that, if I, say, update 1000 tuples today, and another
1000 tomorrow, it will reuse today's dead tuples, and not create new
ones, so that I end up with just 1000 of them, not 2000?

Just making sure...

Thanks a lot!

Dima




Mon, 04 Oct 2004 00:43:25 GMT
 Large table update/vacuum PLEASE HELP!

Quote:

> >In the 10% case, you should be within the realm where the table's
> >steady-state size is only around that much larger, given reasonably
> >frequent normal VACUUMs and an appropriately sized free space map.

> Are you saying that, if I, say, update 1000 tuples today, and another
> 1000 tomorow, it will reuse the today's dead tuples, and not create new
> ones, so that I end up with just 1000 of them, not 2000?

> Just making sure...

The expectation is that if you update 1000 tuples today, do a normal
vacuum when no transaction is left that can see the old state of those
tuples, then update 1000 tuples tomorrow, it'll attempt to reuse as
much of that "dead" space as possible, which may very well mean you
end up with, say, 1200 of them, but no fewer than 1000 and almost
certainly not 2000.
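
That reuse behaviour can be illustrated with a toy model (pure arithmetic, not
PostgreSQL internals; the fill-reusable-space-first assumption is the idealized
case):

```python
# Toy model: each UPDATE writes a new tuple version; a plain VACUUM
# makes the old versions' slots reusable; the next batch of updates
# fills reusable slots before extending the table.

def steady_state_slots(live, batch, cycles):
    """Total tuple slots after `cycles` rounds of update-then-vacuum."""
    total = live      # slots currently allocated in the table
    reusable = 0      # slots freed by the most recent vacuum
    for _ in range(cycles):
        total += max(0, batch - reusable)  # extend only if needed
        reusable = batch                   # vacuum frees the old versions
    return total

# 30 daily batches of 1000 updates with nightly vacuums: the table
# settles at live + one batch of overhead, not live + 30 batches.
print(steady_state_slots(live=10_000, batch=1_000, cycles=30))  # prints 11000
```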

For 1000 that should work; for much larger numbers you may need to
play with settings to get the same effect (as the number of rows
updated grows by orders of magnitude, you may see the wasted space
approach 2x, because the map of free space isn't large enough unless
you raise those settings).
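
The settings in question are the free space map parameters in postgresql.conf;
a sketch (the values are illustrative, size them to cover the pages your
updates touch between vacuums):

```
max_fsm_relations = 1000   # how many tables the free space map tracks
max_fsm_pages = 100000     # how many pages with reusable space it remembers
```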




Mon, 04 Oct 2004 00:52:49 GMT
 Large table update/vacuum PLEASE HELP!

Quote:

>For 1000 that should work; for much larger numbers you may need to
>play with settings to get the same effect (as the number of rows
>updated grows by orders of magnitude, you may see the wasted space
>approach 2x, because the map of free space isn't large enough unless
>you raise those settings).

I am sorry, I am afraid I don't quite understand this. What exactly
are those settings I need to play with?
And what is this 'map of free space'?

Thanks a lot!

Dima




Mon, 04 Oct 2004 01:01:44 GMT
 