Large buffer reads, updates, writes - System 11 

I had a passing thought that I wanted to pose on this forum for discussion.

Consider the following:

I have a record coming into cache that will be updated.  The cache has a
buffer pool set to 16K, so the entire extent containing the record I want
to get at will be brought into a buffer.

  - Will the entire buffer be written to the log before the update?
  - Will the entire buffer be written to disk when the buffer hits the wash
marker, all 16K?
  - What happens if the record to be updated causes the data to flow onto
another extent?  Does another extent come into a buffer, the overflow
written somewhere within its boundary, and then the 32K written to disk?
Are both buffers logged?

:-) does it get any better than this?

Lee Grimes
HBO & Company



Thu, 09 Dec 1999 03:00:00 GMT
 Large buffer reads, updates, writes - System 11

Lee,

The log is written out as individual records, usually with many records
per log page; each record includes only the data relevant to that
operation.  Operations recorded in the log are generally smaller in
scope than a single SQL statement; for instance, a one-row SQL INSERT
might generate a Data Page Insert record, several Index Insert records,
and some Page Allocate records.  There are also Begin and End records in
the log for every transaction (and since even a SQL statement outside of
a BEGIN TRAN/COMMIT/ROLLBACK block is still considered an "implicit
transaction" by the server, it is logged with Begin/End records as
well).

So to answer your first question: no, the entire 16K buffer in your
scenario would not be written to the log; only the data affected by that
operation would be (such as the column values for an inserted row).
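To make the granularity concrete, here is a minimal sketch of the idea above: a one-row INSERT producing a handful of small per-operation log records rather than a buffer image. The record names and fields are illustrative only, not Sybase's actual log format.

```python
# Sketch: the *granularity* of log records for a one-row INSERT, as
# described above.  Record kinds/fields are hypothetical, not Sybase's.
from dataclasses import dataclass, field

@dataclass
class LogRecord:
    kind: str                  # e.g. "BEGIN_XACT", "ROW_INSERT", "END_XACT"
    data: dict = field(default_factory=dict)

def log_records_for_insert(row, index_count, needs_new_page):
    """Return the small records an implicit transaction might append to
    the log -- note that no 16K buffer image appears anywhere."""
    records = [LogRecord("BEGIN_XACT")]
    if needs_new_page:
        records.append(LogRecord("PAGE_ALLOC"))            # small allocation entry
    records.append(LogRecord("ROW_INSERT", {"row": row}))  # only the column values
    for i in range(index_count):
        records.append(LogRecord("IDX_INSERT", {"index": i}))
    records.append(LogRecord("END_XACT"))
    return records

recs = log_records_for_insert({"id": 1, "name": "Lee"},
                              index_count=2, needs_new_page=True)
print([r.kind for r in recs])
# -> ['BEGIN_XACT', 'PAGE_ALLOC', 'ROW_INSERT', 'IDX_INSERT',
#     'IDX_INSERT', 'END_XACT']
```

The whole set of records here totals far less than one 16K buffer.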

The other questions would take a Sybase engineer to answer; it depends
on whether the server tracks modified pages individually within a large
I/O buffer.  The server might write out all 16K or 32K, or it might
write out only the affected pages.  There are pros and cons to either
method, and I would expect Sybase has researched it to find the best
way.

If the updated data required another page, the server _might_ read in
another 16K block or (if only smaller buffers were available) it could
read in a smaller block.  Or, it's possible that a new page could be
allocated (this allocation would create another, small, entry in the
log) and initialized in memory, without having to be read from disk
first.  Again, a Sybase engineer could confirm whether the server does
this or not--I'm just making an "educated guess."

As far as I know (again, guessing at Sybase internals) the two 16K
blocks in your scenario would not be kept together in the cache, so they
might not reach the wash marker together.  The logging is done based on
the affected row, column, etc. and not based on the buffer size or
location (LRU vs. MRU) in the cache.

Hope this helps.

-----------------------------------------------------------

White Sands Technology, Inc.    http://www.whitesands.com
818-340-3300 ext. 221           818-340-3301 fax

DBCCs running too slow? Need to do some I/O Load Balancing?
Got Table/Index Fragmentation? Want to speed up those queries?

ProActive DBA will determine the current fragmentation levels
of your tables/indexes, calculate the exact amount of space
(in bytes) that your tables/indexes occupy, and allow you to
graphically view the pages/extents that your tables/indexes occupy.

Check out ProActive DBA - ProActive and Maintenance Tuning for SQL Server
-----------------------------------------------------------







Sat, 11 Dec 1999 03:00:00 GMT
 Large buffer reads, updates, writes - System 11

Quote:

>   - Will the entire buffer be written to log before the update ?

David answered this already...

Quote:
>   - Will the entire buffer be written to disk when the buffer hits the wash
> marker, all 16 K ?

My guess would be yes, based on the buffer being treated as a single
unit while it is in that pool.

Can someone from Sybase "who is in the know" correct me if I'm wrong?
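My guess above can be sketched as follows: in a pool of large buffers, a dirty buffer that drifts past the wash marker in the LRU chain gets scheduled for write as one whole unit. The chain layout, wash depth, and sizes here are illustrative assumptions, not Sybase internals.

```python
# Sketch of the wash-marker guess above: dirty buffers past the wash
# marker are flushed whole.  All names/sizes are hypothetical.
BUFFER_SIZE = 16 * 1024   # one large-I/O buffer in the 16K pool
WASH_DEPTH = 2            # buffers this close to the LRU end get washed

def wash(chain):
    """chain is ordered MRU -> LRU.  Return the list of writes issued:
    each dirty buffer past the wash marker is written as an entire unit."""
    writes = []
    for i, buf in enumerate(chain):
        past_wash_marker = i >= len(chain) - WASH_DEPTH
        if past_wash_marker and buf["dirty"]:
            writes.append((buf["id"], BUFFER_SIZE))  # whole 16K to disk
            buf["dirty"] = False
    return writes

chain = [{"id": n, "dirty": n % 2 == 0} for n in range(5)]
print(wash(chain))
# -> [(4, 16384)]   (buffer 3 is past the marker too, but it is clean)
```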

Quote:
>   - What happens if the record to be updated causes the data to flow onto
> another extent ?  

The server would allocate another extent... the allocation itself is
probably done in the 2K pool...

Quote:
> Does another extent come into a buffer, the overflow
> written somewhere in its boundary, then the 32K is written to disk
> ?  

The extent is allocated to the table and the allocation as well as
the new extent are part of the transaction.  Thus, they are all
written to the log.  On a checkpoint, the newly allocated extent as
well as the dirty one will get sync'd to the data portion of the
database.

Quote:
> Are both buffers logged ?

It's not so much that the buffers are logged; rather, the "changes" are
logged.

Here's an SGI Technical Marketing tip (:-)):

Background:
-----------
The write-ahead log protocol is the Sybase RDBMS's friend... as well as
every other RDBMS's...

            For you CJ Date-heads, check out volume II, pages 16-17
            (well, those are the pages in my old edition) of An
            Introduction to Database Systems.

What this essentially says is that, in order to improve speed and
throughput, changes are written only to the log.  The data portion of
the database is left out of sync.  An asynchronous event, known as a
checkpoint, is responsible for taking the dirty memory buffers (the
buffer cache) and syncing them out to disk.  In the event of a
failure, the log is used to recover the database, with the last
checkpoint as a reference point to get the database back in order.
Essentially, the checkpoint process shortens the recovery cycle after a
failure (like someone doing a kill -9 on the SQL Server).

From the above we gather that all changes go to a database's log.  The
log is inherently *heavy* on sequential writes, compared to the data
portion of the database, which is *typically* accessed randomly (not
always, of course...).
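The write-ahead/checkpoint/recovery cycle described above can be sketched in a few lines. This is a toy model, not Sybase's implementation: the log is an append-only list, a "checkpoint" syncs dirty cached pages to the data portion, and "recovery" replays the log from the last checkpoint.

```python
# Toy write-ahead log: change goes to the log *before* the data portion;
# a checkpoint syncs dirty buffers; recovery replays from the checkpoint.
class MiniWAL:
    def __init__(self):
        self.log = []          # sequential, append-only (heavy on writes)
        self.pages = {}        # "disk" image of the data portion
        self.dirty = {}        # dirty buffer cache, not yet synced
        self.checkpoint_lsn = 0

    def update(self, page_id, value):
        self.log.append(("SET", page_id, value))  # log first (WAL rule)...
        self.dirty[page_id] = value               # ...then the cached page

    def checkpoint(self):
        self.pages.update(self.dirty)             # sync dirty buffers out
        self.dirty.clear()
        self.checkpoint_lsn = len(self.log)       # shortens later recovery

    def recover(self):
        """After a crash (dirty cache lost), replay from the checkpoint."""
        self.dirty.clear()
        for op, page_id, value in self.log[self.checkpoint_lsn:]:
            self.pages[page_id] = value

wal = MiniWAL()
wal.update("p1", "a")
wal.checkpoint()
wal.update("p2", "b")     # logged, but the data portion is out of sync
wal.recover()             # kill -9 scenario: the log brings p2 back
print(wal.pages)          # -> {'p1': 'a', 'p2': 'b'}
```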

To answer your question, the changes within the two buffers are logged
to disk.  I must be in marketing... I can't give a straight answer
anymore... someone shoot me!

Quote:
> :-) does it get any better than this?

It doesn't...  

signed, The Nerd Family.

Mon, 13 Dec 1999 03:00:00 GMT
 