Ingres performance tuning

From: Roy Hann 

On Tue, 16 Apr 1996, Steven L Jenkins wrote:

> I'm trying to improve the performance on an INGRES system here and have
> given the machine it's on another 40M of RAM.  I don't know much about
> the installation of Ingres, although I've read through the installation
> guide, so I'm trying to figure out which parameters to tweak to give
> the best performance.  There appear to be several that I can change,
> but if I want to minimize I/O, what will the most significant parameters
> be? (for example, I'd like to put all 40M in the 'buffer cache'..but I'm
> not sure how to specify that).

Sounds like you want to increase the size of at least the DMF cache.  
Fair enough, you can do that; just don't expect any miracles from server 
tuning.  If I had just one day to spend improving the performance 
of a system, I wouldn't look at server parameters till late in the day, 
after I'd looked at the database design, indexing, query plans, and 
statistics.  A good query plan can be the difference between a query that 
takes 10 seconds and one that takes 10 minutes, or one that executes 
within an hour and one that takes a week.  Server tuning is like 
polishing the chrome: in most cases it will earn you only a small 
improvement unless something is REALLY screwed up.

See the I & O Guide (and the System Reference Guide if this is an OI
system) for instructions on tinkering with the DMF parameters.  Depending
on the pattern of use on your system you may want to adjust the size and
number of group read buffers while you are doing that. 

It is impossible to direct you to change certain parameters in certain
ways.  Every system is different, and a change for the better in one area
can mean a change for the worse in another.  Where the optimum balance
lies is your call.  Also, there is no algorithm for tuning.  It is a
process of stepwise refinement.  Collect some data to get a baseline for
the system performance.  Then adjust JUST ONE parameter and collect some
more data.  Reverse the change and change JUST ONE other parameter, and go
on doing that.  Once you know what made an improvement (by whatever
standard you are measuring "improvement") make the most promising changes
in pairs, measuring and reversing each trial.  Stop when you decide 
you've spent more time twiddling the knobs than you will ever earn back 
through improved performance.
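The one-knob-at-a-time process described above can be sketched as a simple loop. This is only an illustration of the method, not Ingres-specific code: the `measure`, apply, and revert callables are hypothetical stand-ins for a real benchmark run and for editing server parameters.

```python
# Sketch of one-at-a-time parameter tuning: measure a baseline, trial each
# parameter change in isolation, revert it, and report how much each change
# helped.  The measure/apply/revert callables are hypothetical stand-ins
# for a real benchmark and for actual parameter edits.

def tune_one_at_a_time(measure, trials):
    """measure() -> elapsed seconds (lower is better).
    trials: dict mapping a name to (apply_change, revert_change) callables."""
    baseline = measure()
    results = {}
    for name, (apply_change, revert_change) in trials.items():
        apply_change()             # change JUST ONE parameter
        results[name] = measure()  # collect data with the change in place
        revert_change()            # put it back before the next trial
    # improvement relative to the baseline (positive = faster)
    return {name: baseline - t for name, t in results.items()}
```

Only after each parameter has been measured in isolation would you start combining the most promising changes in pairs, as the post suggests, measuring and reversing each trial.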

There have been several papers on server tuning presented at Ingres 
World; you might like to contact the NAIUA (info@naiua.org) to enquire 
about buying the proceedings of past conferences.

But as I say, the benefits of server tuning are often grossly overrated.  
There are plenty of other things to look at first.

========================================================================
Roy Hann                                   rhann@tnc.com
BKB Engineering Ltd.,                      NAIUA Member-at-Large
11211-76th Avenue,                 
Edmonton, Alberta                          http://www.tnc.com/bkb
T6G 0K2                                    Tel: (403)497-2409
Canada                                     FAX: (403)436-6055
========================================================================





Paul Sugden wrote:
> 
> 
> 1) Disk Balancing
> 
> CA have recomended that we should perform some disk balancing on each
> of our production databases in order to gain some performance.  What
> we are currently doing is picking out our large tables, and moving
> each of the secondary indexes onto different disks -
> i.e. Location_1 = Base table, Location_2 = Secondary1, Location_3 =
> Secondary_2, Location_4 = Secondary_3 etc....
> Does anyone know if this will in fact improve performance, and what
> else might we do to better balance across disks?

I would expect balancing the I/O load to help, yes.  Another thing you can try
is striping one or more tables across locations (assuming that a location
is a physical disk and not a striped array of some sort).


> 3) Empty/Old tables
> 
> We have identified a number of potential unused tables within the
> system.  What we have planned to do is to monitor the activity on
> these tables over a period of 1-2 months.  We can monitor updates,
> inserts and deletes by using Rules which we have implemented.  But we
> can't think of a way of monitoring Selects other than by actually
> putting tracing into the application code... does anyone know how to
> monitor SELECTS at a server level?

Well, I suppose you could use printqry via II_EMBED_SET and scan the output.
That may or may not be practical.  I think I would be inclined to watch
the access dates on the files underlying the tables;  a file that has only
been accessed by your nightly checkpoint is probably "dead".  As a last
check you can try renaming the underlying file and wait for a complaint.
That way if it turns out someone needs the table you can quickly put it back.
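The access-date check can be automated with a small script along these lines. A minimal sketch, assuming you know the data location path and your checkpoint schedule; the 60-day window is arbitrary, and note that filesystems mounted with atime updates disabled would make every table look dead.

```python
import os
import time

def stale_files(data_dir, days=60):
    """Return files under data_dir that have not been read in the last
    `days` days, judged by their access time (st_atime).  Files that only
    a nightly checkpoint touches will still show a recent atime, so pick
    a window longer than the checkpoint interval when interpreting this."""
    cutoff = time.time() - days * 86400
    stale = []
    for root, _dirs, files in os.walk(data_dir):
        for name in files:
            path = os.path.join(root, name)
            if os.stat(path).st_atime < cutoff:
                stale.append(path)
    return sorted(stale)
```

Anything this reports is only a candidate; the rename-and-wait-for-a-complaint step above is still the safe final check.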


> 4) Persistence on Secondary Indexes
> 
> As part of a modify strategy we are implementing, we will be putting
> persistence on all of our secondary indexes.  Persistence tells
> Ingres to recreate secondary indexes whenever the base table is
> modified.  Does anyone know of any drawback to using persistence?

The main drawback is that it recreates the index any time the base
table is modified.  :)  Sometimes you want the base structure without the
indexes, such as when bulk-loading.  You'll have to decide which way is
more convenient for you.  I don't know of any other drawback.

Karl





Hi Chris,
> 
> Dear All,
> 
> We are currently experiencing performance problems on our DB servers.
> Our setup is as follows:
> 2 off Alpha Server 1200 5/533 (single CPU) 1GB memory forming a simple
> cluster
> Digital UNIX 4.0d
> OpenIngres 2.0/9712 (running on the node currently providing the cluster
> service)
> Ingres transaction log file, data and indexes each located on a separate
> disk in a RAID array
> 
> The following ingres processes (with their memory usage) are running on
> the server:
> iigcc               7MB
> iigcn               4MB
> dmfacp            155MB
> iidbms dbms       343MB
> iidbms recovery   161MB
> 
> There are in general about 200 connections to the DB at all times.
> 
> If an extra processor was installed, would Ingres be able to make use
> of it, i.e. another DBMS process?

    Yes. It would also help with system-related activity, e.g. swapping,
    paging, and I/O.

> 
> If we determine that we are disk bound, would an extra DBMS process (on
> the single processor) help ? Is this even possible to configure ?

    I can't see how an extra server would help if you were disk I/O bound.
    More slaves may help, but seeing as how this is OI 2.0 I'm not really
    sure about that.

> 
> Does anyone know of any SW upgrade requirements if we upgrade to a 6th
> generation processor ?

    Ask DEC and CA.
> 
> Is it possible to configure a read only database - ie configure the
> second server as another database server that accesses the same disks,
> but for reading only ?

    Umm, I'm not quite sure what you mean here. 

    If it's a simple read-only database then that's easy: you would do it by 
    issuing select-only grants on the tables in the database.  But then 
    the database is readable from any server that allows access to it.

    If, on the other hand, you want a database to be read-only in one server
    and updateable in another, then that's not possible.

> 
> If we were to install a lot more memory, is it possible to load all DB
> (or specific tables) into memory (DB size including indexes approx 2GB)
> ?

    No. (Well, possibly not!)

    In fact the limit is/was set by the DMF cache. Under 6.4 the size of the
    cache was limited to 64000 pages. I'm not sure about any limit in OI.
    However, for every page in the cache you must have a lock available, so
    there is no point configuring 64000+ pages if you don't configure your
    locking system with a similar number.

    The DMF cache will hold whatever is read into it until there is some need
    to get rid of it. Hence if a table is heavily used, it will most likely be
    in the cache anyway. Whether or not other sessions can use it is subject to
    locking, and that's outside the cache's area of responsibility.

    I wouldn't expand the cache size until you have done some cache
    analysis with trace point dm420 first, i.e. the HITS/FIX ratio. However,
    having a similarly sized database with a similar number of users on my
    DEC box, I'd guess you could handle a 30000-page cache. 
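    The arithmetic behind that advice can be sketched as below. The hit
    ratio is simply hits divided by fixes; the doubling heuristic and the
    0.9 target are my illustration, not a rule from the post, and the
    64000-page cap is the 6.4-era limit mentioned above.

```python
def cache_hit_ratio(hits, fixes):
    """DMF cache hit ratio: the fraction of page fixes satisfied from
    cache (the HITS/FIX figure reported by trace point dm420)."""
    if fixes == 0:
        return 0.0
    return hits / fixes

def suggest_cache_pages(current_pages, ratio, target=0.9, max_pages=64000):
    """Crude illustrative heuristic: if the hit ratio is below target,
    suggest doubling the cache, capped at the 6.4-era 64000-page limit.
    Remember the locking system must be sized to match the cache."""
    if ratio >= target:
        return current_pages
    return min(current_pages * 2, max_pages)
```

    A low ratio is only a hint: a workload dominated by large scans will
    show poor hit ratios no matter how big the cache is.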

    The extra memory could really help in other areas, though. I'm
    specifically thinking here of any swapping required by the OS in handling
    the 200 users you mention. Have you any idea how much memory your users
    require to connect to the applications? As an example, I know that each
    ABF application my users run needs approximately 10MB of memory. If that
    translates to your situation then you would require 2GB just for the
    users alone to minimize swapping.

    As a separate point: the default settings of wbend and wbstart are way
    too high. 
> 
> We are also investigating the SQL, but the identification of poor
> queries is proving difficult. To aid this we are considering purchasing
> iWatch and iAnalyst from Quest, has anyone had any experience of these
> tools, what are your views ?

    I haven't used these myself. I've had a lot of success just by using a
    fairly simple 'Monte Carlo' approach: sample the servers every 5
    minutes, then focus the investigation on the queries that turn up most
    often in the samples.

    The errlog is a great benefit as well. Lock timeouts, deadlocks and even
    simple escalations can be very useful in tracking down inappropriate
    locking strategies, poor table structures, or badly managed,
    system-intensive queries.

    Check for people waiting on locks. That process can be automated so that
    you don't need to go wading through IPM.
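    Automating the errlog scan could look something like this. The
    substrings below are assumptions for illustration; check the actual
    message wording in your own errlog.log before relying on them.

```python
import collections

# Substrings to tally.  These are illustrative, NOT the exact Ingres
# errlog.log message texts -- adjust them to match your installation.
PATTERNS = ("deadlock", "timeout", "escalat")

def tally_lock_trouble(lines):
    """Count errlog lines mentioning deadlocks, lock timeouts, or lock
    escalations, so that recurring trouble spots stand out without
    wading through the log (or IPM) by hand."""
    counts = collections.Counter()
    for line in lines:
        low = line.lower()
        for pat in PATTERNS:
            if pat in low:
                counts[pat] += 1
    return counts
```

    Run nightly over the day's log, a rising count for one pattern points
    straight at the locking strategy or query to investigate first.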

> 
> Thanks in advance.
> 
> Chris
> 
    Sounds to me like you aren't yet sure exactly what event (or what
    combination of events) is causing your performance problem. As a general
    rule I'd focus on the applications and table structures more than on
    server configuration and hardware. I've always found that application
    tuning can yield literally up to 100-fold improvements for very little
    effort.

    Martin Bowes

Random Duckman Quote #25:
Cornfed - It was good to have a real client again. Not the pretend kind you
          build out of Leggo.




The Legend  wrote in article
<7hd5hd$20d$1@news.spacelab.net>...
> Hello all,
> 
>    I have a couple SQL questions:
> 
>   (1)  Does running a query against a view table perform better than a
> query against the same tables that the view table is created from?

The view definition (within the Relation Descriptor Facility) is resolved
early in the query parsing process, and the query basically performs the same
as if it were run against the base tables *unless* there is something like an
aggregate function embedded in the view definition.

> 
>   (2) Wouldn't it run better if I set readlock=nolock before any select
> query?

This affects concurrency, not performance.

> 
>  (3) Is there any increase in performance if I have multiple DBMS servers
> running on a single CPU machine?

It depends.  I have used multiple DBMS servers on a single-processor
machine when I wanted to segregate users (by class, such as online
vs. batch; by DB; by user class; etc.).  This is more a trial and error
(er, I mean benchmarking) effort based on your specific environment.

> 
>     Your comment is truly appreciated.
> 
>     Regards,
> 
>     Benny P.

-- 
Chip Nickolett              ChipN@Comp-Soln.com
Comprehensive Consulting Solutions, Inc.  (www.Comp-Soln.com)
Phone:  414-544-9954  Fax: 414-544-1236