can a transaction block itself? 
 can a transaction block itself?

[ posted by a remailer.  if replying by mail, use addresses below ]

Can a transaction block itself?  That is, in the midst of a transaction,
can some combination of record inserts/updates/deletes into one table
leave the transaction unable to complete?  If, mid-transaction, an
inserted record causes a page split, will it still be possible to add
another record, say one with the next key value, to one of the new
pages?  Can a transaction block itself if it is updating or inserting
dozens of rows (say 50) at a time?**

I am dealing with a table that has one clustered index and a couple of
nonclustered ones.  No keys are defined.  The index values are near
each other: newly inserted records are assigned an ascending integer on
the clustered index, and the nonclustered index fields are alphabetic,
differing only in the last letter or two, such as
JAEMGA, JAEMGB, JAEMGC, JAEMGD.

Note that I have not had any problems with such a table.  Insertions,
page splits, locking, and transaction blocking are only an issue for
multiple processes, in the scenario where rows are being inserted at
the end of the table, right?  Plus, you can BCP into a table with its
indexes in place.  That must cause splits, so plain old inserting will
not cause a process to block itself.

** Transactions should be kept as small as necessary, right?  How
expensive is it to begin and commit a transaction?  Is there any
economy in doing a batch of actions within the same transaction, e.g.
two unrelated inserts?  Maybe you save an I/O or two per insert by
batching?  I know Sybase has write-ahead transaction logs, so deferring
a commit will not save any writes to the logs.  How expensive is it to
roll back a transaction?  Anyhow, batching would only be done on a
system with no contention.  I do not think there would be much savings
in batching updates/inserts, and if there were, it would not be worth
the effort to develop more complicated code.  What do you think?  I
really want comments on this last question.
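
To make the question concrete, here is the sort of comparison I have
in mind (a sketch only; the orders table is made up for illustration):

    create table orders (id int, name varchar(6))   -- hypothetical table

    -- one transaction per insert:
    begin tran
    insert orders values (1, 'JAEMGA')
    commit tran

    begin tran
    insert orders values (2, 'JAEMGB')
    commit tran

    -- versus batching the unrelated inserts in one transaction:
    begin tran
    insert orders values (1, 'JAEMGA')
    insert orders values (2, 'JAEMGB')
    commit tran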

Your thoughts are appreciated,
Bill



Fri, 10 Dec 1999 03:00:00 GMT
 can a transaction block itself?

Quote:

> [ posted by a remailer.  if replying by mail, use addresses below ]

> Can a transaction block itself?  

No, because a block can only occur when another connection holds the
required resource.
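
To illustrate, a single connection can go on modifying a page it
already holds an exclusive lock on (a sketch, using a made-up orders
table):

    begin tran
    insert orders values (3, 'JAEMGC')    -- takes an exclusive page lock
    update orders set name = 'JAEMGZ'
     where id = 3                         -- same page, same owner: no block
    commit tran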

Quote:
> Anyhow, batching would only be done on a system with no contention.
> I do not think there would be much savings in batching
> updates/inserts, and if there were, it would not be worth the effort
> to develop more complicated code.  What do you think?  I really want
> comments on this last question.

Inserting 100 rows one at a time, each in its own transaction, takes
longer than slamming all 100 rows in within a single transaction.  For
one thing, there's less information to log.
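
Roughly the difference between these two shapes (a sketch; the loop
counter and the staging table are made up):

    -- 100 single-row inserts inside one transaction
    declare @i int
    select @i = 1
    begin tran
    while @i <= 100
    begin
        insert orders values (@i, 'JAEMG')
        select @i = @i + 1
    end
    commit tran

    -- versus one set-based statement that slams all 100 rows in
    begin tran
    insert orders
    select id, name from staging_orders
    commit tran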



Mon, 13 Dec 1999 03:00:00 GMT
 can a transaction block itself?

[ posted from an unmailable address.  if replying by mail, use addresses below ]

The short questions are:
Are larger-than-necessary batches of record operations safe?
Are they worth the effort?

The longer version of same:
Can a transaction block itself?  That is, in the midst of a transaction,
can some combination of record inserts/updates/deletes into one table
leave the transaction unable to complete?  If, mid-transaction, an
inserted record causes a page split, will it still be possible to add
another record, say one with the next key value, to one of the new
pages?  Can a transaction block itself if it is updating or inserting
dozens of rows (say 50) at a time?**

I am dealing with a table that has one clustered index and a couple of
nonclustered ones.  No keys are defined.  The index values are near
each other: newly inserted records are assigned an ascending integer on
the clustered index, and the nonclustered index fields are alphabetic,
differing only in the last letter or two, such as
JAEMGA, JAEMGB, JAEMGC, JAEMGD.

Note that I have not had any problems with such a table.  Insertions,
page splits, locking, and transaction blocking are only an issue for
multiple processes, in the scenario where rows are being inserted at
the end of the table, right?  Plus, you can BCP into a table with its
indexes in place.  That must cause splits, so plain old inserting will
not cause a process to block itself.

** Transactions should be kept as small as necessary, right?  How
expensive is it to begin and commit a transaction?  Is there any
economy in doing a batch of actions within the same transaction, e.g.
two unrelated inserts?  Maybe you save an I/O or two per insert by
batching?  I know Sybase has write-ahead transaction logs, so deferring
a commit will not save any writes to the logs.  How expensive is it to
roll back a transaction?  Anyhow, batching would only be done on a
system with no contention.  I do not think there would be much savings
in batching updates/inserts, and if there were, it would not be worth
the effort to develop more complicated code.  What do you think?  I
really want comments on this last question.

Your thoughts are appreciated,
Bill



Tue, 14 Dec 1999 03:00:00 GMT
 can a transaction block itself?

Quote:

>The short questions are:
>Are larger-than-necessary batches of record operations safe?
>Are they worth the effort?

>The longer version of same:
>Can a transaction block itself?  That is, in the midst of a transaction,
>can some combination of record inserts/updates/deletes into one table
>leave the transaction unable to complete?  If, mid-transaction, an
>inserted record causes a page split, will it still be possible to add
>another record, say one with the next key value, to one of the new
>pages?  Can a transaction block itself if it is updating or inserting
>dozens of rows (say 50) at a time?**

>I am dealing with a table that has one clustered index and a couple of
>nonclustered ones.  No keys are defined.  The index values are near
>each other: newly inserted records are assigned an ascending integer on
>the clustered index, and the nonclustered index fields are alphabetic,
>differing only in the last letter or two, such as
>JAEMGA, JAEMGB, JAEMGC, JAEMGD.

>Note that I have not had any problems with such a table.  Insertions,
>page splits, locking, and transaction blocking are only an issue for
>multiple processes, in the scenario where rows are being inserted at
>the end of the table, right?  Plus, you can BCP into a table with its
>indexes in place.  That must cause splits, so plain old inserting will
>not cause a process to block itself.

Right, if a process holds an exclusive lock on a page because it has
updated that page, it can continue to change data on the page, and
won't block itself. But a process that holds an exclusive lock
will block all other transactions that need to read or write
that page (except dirty read transactions).

Quote:
>** Transactions should be kept as small as necessary, right?  

Well, there's _small_ and there's _short_. You could have a small
transaction that allows some user-interaction in the middle, and have a
very _long_ transaction.

The major reason for the "short transaction" guideline is multi-user
considerations -- you keep the time between begin tran (or, really,
acquiring the first lock that is going to be held) and the transaction
end short, to avoid blocking other users.

Using your 50-row insert to a table with 3 indexes as an example,
you'll start acquiring exclusive locks on each of the changed data and
index pages with the first insert, and continue to acquire locks until
the transaction completes. If a page split of a leaf-level index page
is involved, you'll also lock the 'parent' page. For as long as the
transaction lasts, no other users will be able to read those data
pages, and worse, they won't be able to read the index pages, either.
If some of your inserts are temporarily blocked by other users'
transactions, your inserts will hold the locks even longer. So, it
depends on whether you're doing these inserts at times when you have
the tables pretty much to yourself, or whether you're going to be
causing contention.
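
You can watch this from another session while the transaction is still
open (a sketch; sp_lock and sp_who are the standard procedures, and the
orders table is made up):

    -- session 1: start the insert and leave the transaction open
    begin tran
    insert orders values (4, 'JAEMGD')

    -- session 2: inspect what session 1 holds
    sp_lock   -- shows exclusive page locks on the table and its indexes
    sp_who    -- blocked processes show the blocker's spid in 'blk'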

Quote:
>How expensive is it to begin and commit a transaction?  Is there any
>economy in doing a batch of actions within the same transaction, e.g.
>two unrelated inserts?  Maybe you save an I/O or two per insert by
>batching?  I know Sybase has write-ahead transaction logs, so deferring
>a commit will not save any writes to the logs.

11.0.x is somewhat different from 10.0.x and earlier.  Each user gets a
'user log cache', and transaction log records are stored there until a
commit (or some other action, such as the user log cache filling up, a
change of database, or a system event) flushes them to the log.  So the
model is a bit different, making for fewer log writes.
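
If I remember the parameter name right, you can look at (and size) that
cache with sp_configure on 11.0.x; a sketch:

    sp_configure 'user log cache size'         -- show the current size
    sp_configure 'user log cache size', 4096   -- e.g. raise it to 4096 bytes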

Quote:
>How expensive is it to roll back a transaction?

Let's see: begin the transaction, read the needed data and index pages
into cache, make the changes on the copies of those pages in cache,
write the log records, read the log records back at the rollback, undo
the changes to the pages in cache, and record the rollback event in the
log.  And, if the data or log pages are flushed from cache between the
begin tran and the rollback, add extra I/O to re-read the pages.
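
So a rollback of, say, a 50-row change does roughly the forward work
again in reverse (a sketch against the made-up orders table):

    begin tran
    update orders set name = 'XXXXXX'
     where id between 1 and 50    -- pages cached, changed, logged
    rollback tran                 -- log re-read, changes undone, rollback logged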

Quote:
>Anyhow, batching would only be done on a system with no contention.

So, is your main concern speeding up the insert process? It should
be fairly easy to compare the behavior on your system, using
set statistics time on or other monitoring tools to check how
long the process runs, the number of log I/Os, etc.
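
For instance (a sketch; statistics time/io are the standard session
options, and the staging table is made up):

    set statistics time on
    set statistics io on

    begin tran
    insert orders
    select id, name from staging_orders
    commit tran

    set statistics io off
    set statistics time off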

Quote:
>I do not think there would be much savings in batching
>updates/inserts, and if there were, it would not be worth the effort
>to develop more complicated code.  What do you think?  I really want
>comments on this last question.

Seems like transactions should be used to group things that belong
together for logical business reasons (inserting an invoice master
record and its line items), and if you develop exceptions for
performance reasons, you need to both note that in your code and check
that the performance reasons remain valid as the server and your
applications change.
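
Something like this, with made-up invoice tables:

    begin tran
    insert invoices (invoice_id, customer_id)
        values (1001, 42)
    insert invoice_items (invoice_id, line_no, amount)
        values (1001, 1, 19.95)
    insert invoice_items (invoice_id, line_no, amount)
        values (1001, 2, 5.00)
    commit tran   -- the master row and its line items commit (or fail) together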


Fri, 17 Dec 1999 03:00:00 GMT
 
