Transaction locks at a table level

gmg@clark.net (Gary M. Gettier) writes:
> Using Ingres Windows 4GL...
> 
> No matter what we do, no matter how small the query, every time one of our
> Ingres tables is locked for an update, we notice, using IPM, that the table
> being updated is locked at the table level.
> 
> We have tried setting maxlocks to 500 for each table in question, and at
> the session level. Any help would be appreciated.
> 
> Seeya,                                                          gmg@clark.net
> Gary

Gary:

Locks are one of the most powerful (and hence most dangerous) aspects of
INGRES.  The optimizer on the server decides what kind of lock to take
based on the query that you give it, the statistics collected by optimizedb,
and the locking instructions that you have specified in SET LOCKMODE
statements.
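For reference, locking instructions are plain SQL statements that look like
the following (a sketch only; the table name and values are hypothetical):

    /* raise the escalation threshold for the whole session */
    SET LOCKMODE SESSION WHERE MAXLOCKS = 150;

    /* read one particular table without taking shared locks */
    SET LOCKMODE ON emp WHERE READLOCK = NOLOCK;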

The optimizer will always _try_ to take the lowest level lock that it can,
but you may know things about the data that INGRES does not.  You may know,
for example, that a column is a unique key - therefore a maximum of one
page should be locked for each record updated.  But if there is no index on
the column in question, INGRES will assume that there are duplicate keys
and take a table level lock.  HEAP tables, which cannot be keyed, will ALWAYS
be locked at the table level, because the server will HAVE to do a table scan
to ensure that it updates all rows that your query specifies.
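If a heavily updated table turns out to be a HEAP, restructuring it is
usually the first step.  A minimal sketch, assuming a table emp with a
unique key on empno (both names are hypothetical):

    MODIFY emp TO BTREE UNIQUE ON empno;

With a keyed structure the server can locate the target pages directly
instead of scanning (and locking) the whole table.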

If the joins that you are doing are very complex (3+ tables), then the
optimizer will decide which piece of the query to process first, estimate
a result set based on statistical data, table structures, and some basic
assumptions, then decide which piece to process next and so on.  You can
use the SET QEP command in ISQL to see the query plans that INGRES
generates.  The DBA guide explains how to decode them.
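In the terminal monitor, that looks something like this (the query itself
is made up):

    SET QEP;
    SELECT e.ename FROM emp e WHERE e.empno = 1234;
    \g

The plan is printed along with the results, and the DIO estimates in it
tell you how many pages the optimizer expects to touch.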

I suspect that you are updating unkeyed tables, or that you are not using
the most discriminating part of the key (e.g. using colB, colC on a table
keyed on colA, colB, colC).  This will cause the optimizer to grab a table
level lock.
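To illustrate with a hypothetical table t keyed on colA, colB, colC: the
first statement below restricts on the leading key column and can be
satisfied with page locks, while the second gives the server no leading
key column to work with, so it falls back to a table scan:

    UPDATE t SET colD = 'x' WHERE colA = 42 AND colB = 7;

    UPDATE t SET colD = 'x' WHERE colB = 7 AND colC = 9;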

If I'm completely off base here (wouldn't be the first time), try posting an
example query and the structures of the tables involved (maybe even a QEP).
We'll take another stab.

Good Luck,

--Kent
-- 
Kent Smith              |   (617) 246-1630    |  Oh lord that takest away the 
Consultant              |   jks@tpghq.com     |  bugs of the program, 
The Palmer Group        |   uunet!tpghq!jks   |  Have mercy upon us.


Gary,

You are probably running out of transaction locks specified for the
installation.  The default is 80, i.e. the maximum number of locks
each active transaction can hold.  When that number is exceeded, the
page locks are escalated to a table level lock.

MAXLOCKS is the maximum number of locks each query statement can take
before it tries to escalate to a table level lock.  Obviously, setting
MAXLOCKS to 500 won't help in your situation, as the whole transaction
is still limited to 80 locks.
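To make the arithmetic concrete (numbers are illustrative): a single
transaction that touches 200 pages will escalate to a table level lock at
its 80th page lock, even after

    SET LOCKMODE SESSION WHERE MAXLOCKS = 500;

because the per-statement limit can never buy you more locks than the
installation-wide per-transaction limit allows.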

You will need to run iibuild to change the max transaction locks.

Yaxiong Lin
lin@mayo.edu
Information Services
Mayo Medical Foundation
Rochester, MN 55901




In a previous article, mikeg@sequent.com (Mike Glendinning) wrote:

[material deleted - a reply to a "table lock" problem]

>Take your queries into the terminal monitor and have a look at the QEPs.  If
>the query optimizer thinks that more than 10 pages will be hit in the table
>(look at DIO estimates) then it will force the use of a table level lock.
>
>Note that this value of 10 is *not* affected by "set lockmode" statements,
>as the documentation implies.  It's a bug, and hopefully will be fixed
>someday.  For big tables, it's a killer!

This last is news to me.  Are you certain about this??  Wouldn't 'sysmod'
and 'optimizedb' affect the behavior?  (Or is this limit applied AFTER the
statistics, etc. are used?)

Has anyone else seen similar behavior?

>-Mike Glendinning, Sequent UK (mikeg@sequent.com).
-- 
Duncan MacGregor | aa735@freenet.carleton.ca | duncanm@micor.ocunix.on.ca


>In a previous article, mikeg@sequent.com (Mike Glendinning) wrote:
>>Note that this value of 10 is *not* affected by "set lockmode" statements,
>>as the documentation implies.  It's a bug, and hopefully will be fixed
>>someday.  For big tables, it's a killer!
>
>This last is news to me.  Are you certain about this??  Wouldn't 'sysmod'
>and 'optimizedb' affect the behavior?  (Or is this limit applied AFTER the
>statistics, etc. are used?)

Yes, I'm sure.  The 'sysmod' (for system catalogs) and 'optimizedb' utilities
might certainly affect the behaviour of the optimizer, but they won't change
the restriction we're talking about here.
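(For completeness, both are operating system commands, run roughly like
this, with database and table names as placeholders:

    sysmod mydb
    optimizedb mydb -rmytable

and neither has any bearing on the hard-wired value of 10.)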

The way I usually describe how Ingres determines the locking strategy for
a query is as follows.  I make a few simplifications, but for the most part,
this description is accurate enough to work from.

There is a four-level hierarchy from which Ingres takes information about
what locking strategy (page or table level) will be employed by a query
(a combined example follows the list).

	1. The lowest level is the "built-in system default" which uses
	page level locking until 10 pages have been accumulated, when it
	escalates to table level.

	2. The user can specify system level lockmode statements that will
	override the built-in defaults.  For example "set lockmode session
	where maxlocks=150".  This will cause more page locks to be taken
	before escalation to table level locks occurs.

	3. The query optimizer facility will direct either page or table
	level locking, depending on how many pages it thinks are going to
	be referenced in the table by the query.  If the optimizer believes
	that more than 10 pages will be touched, then table locks are used
	instead of page level locks.  This limit is hard-wired as 10 and
	cannot be changed (well, as of 6.4/04 and 6.5).  The optimizer
	uses the available statistics in determining how many pages will
	be referenced (you can look at the DIO estimates in the QEP listing).
	If page locks are chosen, then escalation to table level locks will
	occur as usual if "maxlocks" is hit (from (2) or (1) in the hierarchy).

	4. The user can specify page or table level lockmode statements
	that will override all other instructions (including the optimizer).
	This is the highest level in the hierarchy.  For example, "set
	lockmode session where level=page,maxlocks=150" will *always* force
	page locking, until 150 locks have been taken, when escalation to
	table locks will occur.
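Putting (2) and (4) side by side, the difference is just the presence of
the "level=" clause (the values here are arbitrary):

    /* (2): raises the escalation threshold, but the optimizer may
       still choose table locks if it estimates more than 10 pages */
    SET LOCKMODE SESSION WHERE MAXLOCKS = 150;

    /* (4): forces page locking regardless of the optimizer's estimate,
       escalating only after 150 page locks have been taken */
    SET LOCKMODE SESSION WHERE LEVEL = PAGE, MAXLOCKS = 150;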

So you can see that the "standard" form of lockmode statement (system level,
and (2) in the hierarchy above) will only have a useful effect in two
circumstances.  Either the optimizer is not used (non-enumerated queries,
such as simple key lookups in a single table) or the optimizer gets it badly
wrong and thinks fewer than 10 pages will be referenced in the table when in
actual fact many more are.  You normally *do not* want this second case to
occur!

Be careful with lockmode statements including "level=page" or "level=table"
as these override the query optimizer!  For some tables, you need to
explicitly reset the lock modes appropriately before individual SQL
transactions.
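One way to keep the override contained (the table name is hypothetical,
and I am assuming "level=system" restores the default behaviour, as it
does on the releases I have used) is to scope it to a single table and
put it back afterwards:

    SET LOCKMODE ON bigtable WHERE LEVEL = PAGE, MAXLOCKS = 200;
    /* ... run the critical transaction ... */
    SET LOCKMODE ON bigtable WHERE LEVEL = SYSTEM;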

To reiterate, the problem is that the optimizer always uses the value of
10 pages in determining whether to use page or table level locks.  This is
the bug.  What should happen is that the optimizer uses the available
"maxlocks" information from (2) and (1) in the hierarchy.  Unfortunately,
this turns out to be a bit tricky to implement, but in my opinion, the
change would be well worth making.

Any Ingres site with large(ish) tables and many concurrent users will have
seen this problem.  I seem to come across them all the time...

-Mike Glendinning, Sequent UK (mikeg@sequent.com).