Database Parameters

mikeg@sequent.com (Mike Glendinning) wrote:

>Scott Kelley (skelley@hscw254.es.hac.com) writes:

>> 	A consultant who was at our site some time ago made some suggestions
>> of things to add to 'rundbms.opt' to improve performance. When we tried to
>> implement these we ran into a problem.
>> 
>> Our Configuration
>> ---------------
>> 8 dbms servers  (6 iislaves/wb threads each. NO shared cache or fast
>> commit).
>> v6.4/04 Solaris 2.3 on a Sparc 1000 with 1.1GB of memory
>> 
>> The Changes
>> ----------
>> added the following to rundbms.opt
>> 
>> -dmf.cache_size 10000
>> -dmf.count_read_ahead 16
>> -dmf.size_read_ahead 16
>> -dmf.tcb_hash 512
>> -opf.active 10
>> -opf.memory 4000000
>> -quantum 50
>> 
>> The Problem
>> ----------
>>   Ingres quickly uses up all available locks (currently 20,000). It
>> usually uses about 2000-3000. I tried bumping the locks to 65,000
>> (the most you can have), and it still used them all.  Once all
>> locks are used the system quickly grinds to a halt.
>>   So if you have any thoughts on which of these caused it (and WHY),
>> I'd appreciate hearing from you.

>Ah, so you moved from shared cache to private cache with 8 servers?  I'm
>not surprised you ran out of locks!  Who was the consultant!!??

>If you have multiple servers accessing the same database without using a
>shared cache, then Ingres must start using "cache locks" in order to
>guarantee integrity of the database.  Cache locks are taken by the DBMS
>servers themselves, in order to identify when other servers have changed
>particular database pages (that the server has cached).  It works as follows.

>Each page read into the cache by a server has a cache lock taken out on it.
>The lock contains a counter value which is stored along with the cached data
>page.  If a server updates the page, it increments the counter value in the
>lock after it writes the page to disk.  Before using a page from the cache,
>the server compares the current counter value in the lock against the
>value stored with the cached page.  If they differ, then another server
>must have updated the page on disk, and our cache copy must be discarded.
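>
>Roughly, in Python (a sketch of the idea only -- the names are invented
>for illustration, this is not the actual DBMS code):
>
>    class CacheLock:
>        def __init__(self):
>            self.counter = 0            # bumped on every disk write
>
>    locks = {}                          # page_id -> CacheLock
>    disk  = {}                          # page_id -> page contents
>
>    def cache_lock(page_id):
>        return locks.setdefault(page_id, CacheLock())
>
>    class Server:
>        def __init__(self):
>            self.cache = {}             # page_id -> (page, counter seen)
>
>        def read_page(self, page_id):
>            lock = cache_lock(page_id)
>            hit = self.cache.get(page_id)
>            if hit is not None and hit[1] == lock.counter:
>                return hit[0]           # cached copy is still current
>            page = disk.get(page_id)    # stale or missing: reread disk
>            self.cache[page_id] = (page, lock.counter)
>            return page
>
>        def write_page(self, page_id, page):
>            lock = cache_lock(page_id)
>            disk[page_id] = page        # write through to disk first...
>            lock.counter += 1           # ...then bump the lock's counter
>            self.cache[page_id] = (page, lock.counter)
>
>With two Server objects sharing 'disk', a write by one bumps the
>counter, so the other's next read_page() spots the mismatch and rereads
>the page from disk.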

>Because of the way the locking system works, splitting locks by resource,
>the number of locks needed to handle cache locking may be more than you
>imagine.  If you have N servers, each with a cache size of S, then I
>think the boundaries are as follows.  If all servers access the same S
>pages in the database, then you will need S * (N + 1) locks just to
>handle the cache locking mechanism (that is, S resource locks and S * N
>cache locks).  If all servers access completely different pages, then you
>will need 2 * S * N locks (that is S * N resource locks and S * N cache
>locks).  Usually, you will be somewhere in the middle.

>      S * (N + 1) <= locks-you-need <= 2 * S * N

>These numbers will give you some bounds for the number of locks needed,
>although in practice it will depend very heavily on the kind of database
>access being performed.  You should be able to size your cache and number
>of servers accordingly, though.  The more servers you have, the smaller
>your cache will have to be.
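>
>To put your own figures into this: with N = 8 servers, each with a
>cache of S = 10000 pages (your -dmf.cache_size), the bounds come out as
>
>      S * (N + 1)  =  10000 * 9      =   90,000 locks
>      2 * S * N    =  2 * 10000 * 8  =  160,000 locks
>
>so even the 65,000 maximum is nowhere near enough, which is exactly
>what you saw.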

>Oh, and don't forget the impact of the group buffers on the number of
>pages in the cache!

>Because there is only one cache and only one copy of each page in memory,
>cache locks are not needed with shared cache servers and the above does
>not apply.

>So that's why you ran out of locks.  What is really needed here, of course,
>is for Ingres to support many more than 65000 locks!

>-Mike Glendinning, Sequent Computer Systems Ltd. (mikeg@sequent.com).

Hi,

Let me just follow up Guru Glendinning's comments by noting that,
ham-fisted as your consultant's implementation may have been (8 private
cache servers with 10K page caches!), his or her motivation was spot
on: if you *have* to have that many servers, there is no doubt that a
private cache solution will blow shared cache out of the water in
performance terms - provided you can make it work at all, of course!
So don't just revert despairingly to shared cache without considering
the options.

First point - do you need 8 iidbms servers?  You haven't said how many
CPUs you've got, nor (more importantly) how many concurrently active
queries you expect the servers to handle.  In my experience, servers
tend to give up the ghost once you ask them to cope with more than a
dozen or so threads.  The simplistic approach you often encounter -
count the CPUs, subtract one or two for luck, and that's your server
count - is no longer good enough.  The SMP boys have got things working
sufficiently well that if you really want three or four iidbms servers
per CPU (as one well might with PA-RISC, for example), you can have
them without worrying about the SMP performance graph.  And it must be
said that your quantum of 50 suggests you have some meaty CPUs.  So
(worked example below):-

    consider the overall power of your system;  

    consider how much work you can expect each iidbms to handle;

    and from that derive your server count.  
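
For example (figures invented purely for illustration): if you expect
around 60 concurrently active queries, and reckon on a dozen or so
threads per server before it starts to struggle, then

    60 queries / 12 threads per server  =  5 servers

and five servers with private caches leave you considerably more lock
headroom than eight.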

Second, if you reckon you need 8 servers, do you really need a 20
MByte page cache for each server?  That's a pretty big cache!  How
much 'validation' data have you got?  In the nature of things,
_popular_ validation data will tend to become resident in page cache,
and so it should.  A page cache substantially larger than that needed
to hold this data, however, is inefficient for two reasons:-

    it will (if private) consume locks pointlessly;  

    it causes a buffer flush performance blip at a consistency point.


Have you tried reducing the size of the buffer cache?  I would expect
the maximum workable cache with 8 private iidbms servers to be
something like 3.5K pages.  If your database volumetrics allow it,
reduce the cache size, and devote the freed memory to UNIX buffer pool
(unless you're using II_DIRECT_IO, of course!).  Even if the database
volumetrics don't seem to allow it, give it a whirl - you may (almost
certainly will) still decisively outperform the shared cache
configuration.  
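
That 3.5K figure squares with Mike's worst-case bound: with N = 8
servers,

    2 * S * N  =  2 * 3500 * 8  =  56,000 locks

which stays just under the 65,000 ceiling (a 4K-page cache would give
2 * 4096 * 8 = 65,536, fractionally over).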

Third, how many databases have you got?  If you have more than one,
you might want to consider running multiple Ingres installations.
This makes life moderately tedious from an operational perspective,
but does offer a way round the 65K lock limit.  

As Mike notes, the real problem is the absurd 65K limit.  I have seen
no sign of this being lifted in OpenIngres, despite rumours - can
anyone comment?  I can also see no reason why Ingres has to use the
'application' lock pool for cache locks, but there you go.  

----
Dave Cunningham
davec@junius.demon.co.uk
----
CAT Consulting
+44 1902 839220