Concurrency Discussion

 wrote:
> One of our application teams has a big problem at the moment
> with concurrent access to a table triggering lots of locking problems.
>
> We are running Ingres 6.4/06 on DYNIX/ptx.
>
> The table has about 43000 rows, the rows are only 33 bytes wide, table
> is re-created every night and re-modified then also. The vast majority
> of the access involves deleting from the table, followed by updates (not of
> the key) and then an occasional insert.
>
> The default maxlocks was being used, so I suggested that they set maxlocks
> up to about 150 (we have a max of 200 locks per transaction). This seemed
> to help at first, but then things were as bad as before.
>
> I then suggested that they change the fillfactor to mimic row-level
> locking by forcing one row per page (the size of the table is not an
> issue). Again this seemed to sort things, but it has all gone pear-shaped
> once more.
>
> I'm wondering if a hash secondary index on even a heap table may help them,
> but any thoughts would be welcome.
>
> The table is currently an ISAM. One rather strange thing is happening:
> even though the table is modified every night, there is significant overflow
> even before any access is made to it (17k pages out of 43k). What is
> causing that?
>
> An additional question related to the problem concerns another table -
> given a page number (from the error log) is it possible to identify the
> rows on that page?
>
> regards
>
> Jon


From: Chip Nickolett 
Subject: Re: Concurrency Question.

Jon,

Is access to this data made via full key?  Is one row accessed
at a time or a set of rows?  I will frequently use HASH tables to
improve concurrency (setting minpages fairly high, but nowhere
near one row per page).  This does not work well for set
processing queries, even if each member of the main set of
data has a one-to-one correlation with the hash table (the HASH
table will be scanned).  My initial thought is that a HASH
secondary index would not help much.
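For reference, the sort of HASH structure I mean looks something like
this (table and column names, and the minpages figure, are purely
illustrative, not taken from your schema):

```sql
-- Illustrative sketch only: jon_table, key_col, and 8192 are assumptions.
-- A high minpages spreads the ~43,000 rows across many hash buckets,
-- reducing the chance that two sessions collide on the same page --
-- without going as far as one row per page.
modify jon_table to hash on key_col with minpages = 8192;
```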

If you are accessing many rows at once in this table, you may
want to look into clustering the data with a BTREE table.  The
primary structure would keep the correlated data physically
close together rather than serving as the access path; a
secondary index for access is generally required when you do this.
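A hedged sketch of that arrangement (all names are made up for
illustration):

```sql
-- Illustrative only: cluster_col and access_col are assumed columns.
-- The BTREE primary structure clusters correlated rows physically
-- close together; the secondary index supplies the access path.
modify jon_table to btree on cluster_col;
create index jon_table_ix on jon_table (access_col)
    with structure = hash;
```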

Overflow on ISAM and HASH tables right after table modification
is not unusual and usually occurs when there is not sufficient
leading-edge granularity on the key.  Using a surrogate key will
often help this but adds another layer of complexity.
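As a purely illustrative sketch of the surrogate-key idea (surrogate_id
would be an assumed, application-maintained number, not anything in your
current schema):

```sql
-- Illustrative only: surrogate_id is an assumed application-assigned
-- column with a uniform leading-edge distribution (e.g., a counter).
-- Keying on it avoids the overflow caused by a skewed natural key.
modify jon_table to isam on surrogate_id;
```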

The locks per transaction and maxlocks values that you
stated are better than the default but still may be too
low.  I will typically set locks/tx somewhere between 500 and 1000
(depending on the number of users) and set maxlocks
anywhere between 50 and 500 (depending on the type of
access, number of tables in a typical transaction, etc.).

You should also explicitly set "level = page" in the set
lockmode statement.  If this is not explicitly set Ingres
will acquire a table level lock if the optimizer believes
that more than maxlocks locks will be acquired.  By
explicitly setting this you force it to work the way that
most people believe that it works by default (i.e.,
keep acquiring page-level locks until you run out
or hit maxlocks).
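Something along these lines, with the maxlocks figure being
illustrative and the table name assumed:

```sql
-- Illustrative values: tune maxlocks to the access pattern.
-- level = page stops Ingres from escalating straight to a table-level
-- lock when it estimates more than maxlocks locks will be needed.
set lockmode on jon_table where level = page, maxlocks = 150;
```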

Finally, this could be a transaction or query design
issue.  Look at the printqry output from the transaction
and the QEPs (which can be done in a single, simple
step by exporting ING_SET="set printqry; set qep;"
and setting II_EMBED_SET="printtrace;").  Even if
this does not lead to a programmatic solution to your
problem it may provide more insight into the true
nature of the problem.
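If it is easier to experiment interactively first, the same trace
flags that ING_SET injects at connect time can be issued directly
in a session:

```sql
-- Session-level equivalents of ING_SET="set printqry; set qep;"
set printqry;   -- echo each query as it is sent to the server
set qep;        -- print the query execution plan for each query
```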

And yes, you can determine the data on a page by
dividing the TID by 512 (i.e., 'select tid/512 as page_num').
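An untested sketch of how that might be used to list the rows on a
given page (the table name and page number are illustrative):

```sql
-- Illustrative only: finds the rows on page 1234 of jon_table.
-- tid/512 yields the page number because each page holds up to
-- 512 row slots; this applies to non-BTREE structures.
select t.tid, t.tid/512 as page_num, t.*
from jon_table t
where t.tid/512 = 1234;
```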

Chip Nickolett           ChipN@Comp-Soln.com
Comprehensive Consulting Solutions, Inc.   (www.Comp-Soln.com)
Phone:  262-544-9954     Fax:  262-544-1236

President - North American Ingres Users Association (NAIUA)

© William Yuan 2000
