DMF Cache Configuration/Optimization

Here is an explanation of some of the dbms server parameters I asked about a
couple of weeks ago. There is considerably more information here than in the I&O guide.
Many thanks to the Ingres person who provided most of this info.
Note: I use the abbreviation 'cs' to mean the number of connected sessions.


> -dmf.cache_size 10000 (default:128+4*cs)

IMPORTANT: This is essentially the number of data pages Ingres caches.
The default is quite low, so if you want to boost performance and have the
memory, setting this high seems like a good idea.

Each cache page may require a lock (some people have said TWO locks)
for each of the servers.  If you have 65000 locks, you can have 8 caches
of 7000 pages, which may use 56000 of the locks, leaving 9000 to run
everything else, including the locks required by the pages in the group
buffers.  User locking in a typical system requires several thousand
locks.  A specific system like yours may need more or could get by with
fewer, depending on the applications that are run.
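
Here is that lock arithmetic spelled out, as a rough Python sketch of
the example figures above (it assumes the conservative one-lock-per-page
case; the 8-cache / 7000-page numbers are just the example, not a rule):

    # Lock budget for shared DMF caches, using the example figures above.
    total_locks      = 65000
    caches           = 8        # number of shared caches
    pages_per_cache  = 7000     # -dmf.cache_size for each cache

    cache_page_locks = caches * pages_per_cache        # 56000 locks
    remaining        = total_locks - cache_page_locks  # 9000 locks left

    # Group buffers and user locking (typically several thousand locks)
    # must all fit inside the remaining 9000.
    print(cache_page_locks, remaining)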

If the dmf.cache_size parameter is set, it specifies the size of the
cache.  All servers connecting to the same cache must have exactly the
same dmf parameters set, including those which have not been specified
(defaults used).  For this reason, it is tricky to get servers with
different connected_sessions values to share a cache.
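
A minimal sketch of why that is tricky (Python here only to make the
rule concrete; the default formulas are the ones quoted above, and the
three parameter names are just the ones discussed in this post, not a
complete list):

    # Servers sharing a cache must agree on EVERY dmf parameter, including
    # ones left at their defaults -- and the defaults depend on cs.
    def default_dmf(cs):
        return {
            "dmf.cache_size":       128 + 4 * cs,
            "dmf.count_read_ahead": 4 + cs // 4,
            "dmf.size_read_ahead":  8,
        }

    def effective_dmf(cs, explicit=None):
        params = default_dmf(cs)
        params.update(explicit or {})
        return params

    # Two servers with different connected_sessions and no explicit dmf
    # flags end up with different effective parameters: no shared cache.
    print(effective_dmf(cs=32) == effective_dmf(cs=64))        # False

    # Setting the parameters explicitly on both makes them match.
    explicit = {"dmf.cache_size": 10000,
                "dmf.count_read_ahead": 16,
                "dmf.size_read_ahead": 16}
    print(effective_dmf(cs=32, explicit=explicit) ==
          effective_dmf(cs=64, explicit=explicit))             # True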

When the first shared_cache server starts, it builds the cache and
becomes the manager.  Later servers connect to the existing cache, but
do not add to it.  If the server acting as the manager shuts down
gracefully (set server shut, then all sessions disconnect), one of the
other servers using the cache will take over the job of managing the
cache.  If any server connected to the shared cache (which is
implemented as a shared memory segment) exits ungracefully (kill -9,
iimonitor stop server), all the servers sharing the cache will go
down, because the dying server will take out the shared memory segment.


> -dmf.count_read_ahead 16 (default:4+cs/4)
> -dmf.size_read_ahead 16  (default:8)

Group buffers will also use one lock per page, or 16*16 = 256 locks per
cache; across the 8 caches in the example above that is 2048 locks,
leaving 6952 for everything else.  If you see a performance benefit from
increasing -dmf.size_read_ahead to 16, you should work on reducing the
number of table scans required in the system; they are usually caused by
missing or inappropriate indices or table structures.
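
The same kind of sketch for the group-buffer locks (again just the
example figures: one lock per page, and the 8 caches assumed earlier):

    # Group (read-ahead) buffer locks, one per page, per cache.
    count_read_ahead = 16
    size_read_ahead  = 16
    caches           = 8

    group_locks = count_read_ahead * size_read_ahead * caches   # 2048
    remaining   = 9000 - group_locks                             # 6952 left
    print(group_locks, remaining)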

> -dmf.tcb_hash 512 (default:256. Should be multiple of 2)

Does each of the servers access over 1000 tables within a short time?
If not, this will not return any benefit and will only use memory.

> -opf.active 10 (default: cs/2)
> -opf.memory 4000000 (default:200000)

This will reduce the memory requirement of the optimizer.  You didn't 
say how many connected sessions the servers support, so it is not 
possible to evaluate whether 10 is an optimal value for -opf.active.  
Very few systems need more than 5 per server.  The -opf.memory value 
provides 400000 bytes per optimizer session, which is twice the 
default.  The default is generally adequate, but if your system runs 
queries that typically return an OPF-out-of-memory error, then the 
increase is justified.
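
The per-session figure is simply -opf.memory divided among the
-opf.active sessions, e.g. (a trivial Python sketch using the values
quoted above):

    opf_memory = 4000000
    opf_active = 10

    per_session = opf_memory // opf_active    # 400000 bytes per optimizer session
    default_per_session = 200000
    print(per_session, per_session // default_per_session)   # 400000, 2x the default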

> -quantum 50 (default: 500)

This will cause the server to force a CPU-bound thread to be interrupted 
after II_QSWITCH milliseconds.  If II_QSWITCH is not set, it defaults to 
500 milliseconds.  I would not recommend setting II_QSWITCH below 150 on 
an SS1000.


Thanks again to everyone who helped,


		Scott Kelley
		"skelley@hscw254.es.hac.com"
		(310) 364 7519