Quote:
>> I have an instance running a batch process, it's fairly busy (steady
>> thirty megabytes per second redo generated)
>30 MB/second? Busy is a bit of an understatement... 30 MB/sec = 1.8
>gigs/minute = 108 gigs/hour = 2.6 terabytes of redo a day? That's
>pretty impressive - what's your backup strategy like?
Fortunately, that isn't a problem for me. This isn't a production
instance, I try to stay away from those. :-)
The particular application isn't running all the time, and the others
aren't quite so enthusiastic about generating redo. The shortish run
I made took 1.5 hours and got through 5x30GB log files. I need to do
a run at least 4 times as long in the next few days, and hope to spend
less than all day doing it.
Seriously though, it does worry me how our customers' DBAs could cope
with backing up some of the redo rates that I see in benchmarks,
especially given the predicted increases in volume. While it's not
quite up to 30 MB/s yet, it's at least in the same order of magnitude.
If anybody has experience of backing up instances that generate a few
hundred GB of redo per day, I'd be interested to hear about it.
I imagine the answer involves giving a lot of money to tape-library
vendors. With a load of log groups (say 9) and a 6-drive DLT library,
30 MB/s / 6 drives => 5 MB/s per drive - just about plausible, I
guess, given Quantum claim 6 MB/s streaming for the DLT8000. That's
still 86 tapes to shuffle through in a day. ...and then there's the
data file backup.
--
Andrew Mobbs - http://www.chiark.greenend.org.uk/~andrewm/