At 1:22 PM +0000 5/22/99, smailam@aramco.com.sa wrote:

>What is the best way to measure the system resource that is consumed by
>my routines or SQL statements.

I don't know if there is a "best" way. It depends on what you want, and how
much you are willing to impact the system.

To measure specific query response for the purposes of query tuning, using
dbmsinfo with _et_sec, _cpu_ms, _dio_cnt, and maybe _bio_cnt is usually
best. You don't *need* an idle system, but you'll get more predictable
elapsed times on an idle machine. Don't forget to flush the caches with
set trace point dm421 before each measurement.

To measure real-life, under-load numbers, analysis of an iiprtqry.log file
is a good way. Or you can embed permanent calls to a routine that does
dbmsinfo deltas in your application code. The latter will let you write the
data to a table or file, but it will likely impact overall performance more.

You can also look into using IMA. I've used IMA to measure overall system
activity, but I haven't tried to use it to watch specific sessions. I don't
think you could use IMA to target individual queries.

>[snip]
>I want to log the consumption of the system resource for each routine
>or SQL statement into a table so that I can monitor the performance of my
>system as time goes on.

In that case, permanently installing suitably chosen calls to dbmsinfo
sounds like the best idea to me. You could wrap it all up in a routine and
use some kind of environment variable or global variable to turn it off
when you don't want the overhead.

Karl R. Schendel, Jr.
K/B Computer Associates
schendel@kbcomputer.com
Ingres and Unix Expertise
President, North American Ingres Users Association
president@naiua.org

From the client side you can use dbmsinfo and printqry to calculate query
resource utilization but, as you and Karl point out, this requires manual
effort or a lot of coding.
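[The client-side dbmsinfo-delta approach Karl describes might look roughly
like this in an interactive SQL session. This is a sketch only: the
dbmsinfo request names and the dm421 trace point come from the posts above,
but the column aliases and session layout are illustrative.]

```sql
/* Flush the DMF cache first, so buffered pages don't skew the I/O counts */
set trace point dm421;

/* Snapshot the session counters before the query */
select dbmsinfo('_cpu_ms')  as cpu_before,
       dbmsinfo('_dio_cnt') as dio_before,
       dbmsinfo('_bio_cnt') as bio_before;

/* ... run the query being measured here ... */

/* Snapshot again afterwards */
select dbmsinfo('_cpu_ms')  as cpu_after,
       dbmsinfo('_dio_cnt') as dio_after,
       dbmsinfo('_bio_cnt') as bio_after;

/* The cost of the measured query is the after-minus-before delta
   of each counter */
```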
If your goal is to find and tune the most resource-intensive queries, it is
easier to monitor them from the server side. Using DMT's DBAnalyzer
Collector, you can take a snapshot of the system at regular intervals, load
the data into a database, and identify long-running or resource-intensive
queries using standard SQL. Monitoring from the server side allows you to
analyze all sessions simultaneously, giving you the total system impact of
a query (e.g., total CPU utilization of all sessions executing this query
in a given period).

The disadvantage of sampling is that you will not see ALL the queries
executed by the application. But depending on the sample interval, there is
a high probability of uncovering resource-intensive queries, since these
queries have a higher frequency and/or duration. We've used this method
successfully at many customer sites to identify the queries that need
tuning.

Best regards,
Alan Sargent
DMT, Inc.
www.dmt.com

Harzewski, Erica STP wrote:
> Hi,
> We are running Ingres 6.4/06 with patch 5527 on Solaris 2.6.
> One of our teams has a C program that does a repeated query. It initially
> starts out fine and then slows down to nearly a halt once it gets through
> a few iterations. I've included one iteration of the repeated query, as
> well as the table structures. Are there trace points that we can set in
> the C code to see what's happening? I've run a qep on this one iteration
> and it looks OK, but I want to see what happens after a number of
> iterations. Should we consider modifying the tables to different
> structures, or using temporary tables instead of unions? There's probably
> something glaring that I'm missing here.
> TIA,
>
> Erica Harzewski
> Database Administrator
> Guidant Corp.
> St. Paul, MN

From: Chip Nickolett
Subject: Re: repeated query performance problems

Hello Erica,

You should look at trace points QS501 - QS505.
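[Trace points like these are set per session. A sketch, assuming the usual
SET TRACE POINT syntax; what each QS point prints varies by release, so
check the diagnostics for your version:]

```sql
/* Enable QSF diagnostics for this session only */
set trace point qs501;

/* ... run the repeated query a few iterations here ... */

/* Switch the diagnostics off again */
set notrace point qs501;
```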
If the QSF pool is too small then repeated queries will perform worse than
non-repeated queries, because objects are being flushed from the QSF pool.

Also, for clarification: does the query just slow down, or does it actually
fail (as stated just prior to the example)?

Chip Nickolett
ChipN@Comp-Soln.com
Comprehensive Consulting Solutions, Inc. (www.Comp-Soln.com)
Phone: 414-544-9954    Fax: 414-544-1236
President - North American Ingres Users Association (NAIUA)

From: Roy Hann
Subject: Re: repeated query performance problems

>Thanks to everyone for their suggestions on my problem with the repeated
>query that would grind to a near halt during execution. The application
>developer actually figured this one out (and I'm kicking myself for not
>seeing it sooner).
>
>Originally the query had:
>    ...AND occ.occ_date_time >= '01-jun-1989'
>       AND occ.occ_date_time <= 'today'
>       etc.
>All he did was add the date function:
>       AND occ.occ_date_time >= date('01-jun-1989')
>       AND occ.occ_date_time <= date('today')
>
>It now runs along merrily.

If this fixed the problem then it was exactly what I suggested: the
arguments supplied to the first instance of the query caused the optimizer
to select a plan that suited that first instance but would be bad in
general. Because you are now using a function, the optimizer has ignored
the actual dates, used its default heuristics, and come up with a
one-size-fits-all plan that works well in general.

Is this the explanation you had in mind that makes you kick yourself? It's
a pretty subtle point, so hats off to you!

But I still think I would just ditch the REPEATED modifier. IMO it is a
waste of effort 9 times out of 10, and once in a while it is a booby-trap.
It is only very rarely worth using.

Roy Hann
BKB Client/Server (UK) Ltd.