<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Performance Issue? in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/performance-issue/m-p/3798723#M266191</link>
    <description>It looks like a database load issue. It is a good idea for the DBA to run perfstat and see what kind of execution plan improvements might be made for the top I/O statements. Sometimes adding just one index can solve the problem.</description>
    <pubDate>Fri, 02 Jun 2006 11:48:51 GMT</pubDate>
    <dc:creator>Alexey_12</dc:creator>
    <dc:date>2006-06-02T11:48:51Z</dc:date>
    <item>
      <title>Performance Issue?</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/performance-issue/m-p/3798702#M266170</link>
      <description>Having a bit of a performance issue with a system.&lt;BR /&gt;&lt;BR /&gt;System is an RP4440 with 12 GB of RAM and 2 dual-core 1 GHz CPUs, running 11iv2.&lt;BR /&gt;&lt;BR /&gt;Kernel has been tuned for Oracle 10g as per Oracle specs.&lt;BR /&gt;&lt;BR /&gt;From the DBAs:&lt;BR /&gt;&lt;BR /&gt;"The average read and write I/O response was around 1.5 msec or less, with most datafiles getting under 1 msec response. I did not see any spikes anywhere near 1 sec, a few up to 5 msec at most. &lt;BR /&gt;Some of the high I/O service time numbers in Oracle 10g Grid Control are for root disks and not database disks."&lt;BR /&gt;&lt;BR /&gt;And yet - the disks spike at 100%!!!&lt;BR /&gt;&lt;BR /&gt;Background on disks:&lt;BR /&gt;&lt;BR /&gt;The "disk" that the OS sees (/dev/dsk/c4t10d1 for example) is not really a disk - it is a LUN. The LUN is a 32 GB meta made up of 8 MB chunks of several disks. So, even if I were to "present" say a 2 GB LUN (disk to the OS), it will for all intents and purposes be on the same "physical" disks as the rest. The frames are RAID 10 - that is, striped and mirrored across as many disks as possible.&lt;BR /&gt;&lt;BR /&gt;On another system, the I/O response times were higher than on this server - in the 5-10 msec range - and yet that system never spikes at 100% on the disks.&lt;BR /&gt;&lt;BR /&gt;What I think we have is an application that is constantly hitting the database - and not doing it very elegantly (i.e. expensive SQL).&lt;BR /&gt;&lt;BR /&gt;I attached a lengthy text file with sar/glance data...&lt;BR /&gt;&lt;BR /&gt;Looking for comments...&lt;BR /&gt;&lt;BR /&gt;Thanks...Geoff&lt;BR /&gt;</description>
      <pubDate>Thu, 01 Jun 2006 11:48:27 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/performance-issue/m-p/3798702#M266170</guid>
      <dc:creator>Geoff Wild</dc:creator>
      <dc:date>2006-06-01T11:48:27Z</dc:date>
    </item>
    <item>
      <title>Re: Performance Issue?</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/performance-issue/m-p/3798703#M266171</link>
      <description>Did you try to track down the process that is using the disks? It can be done in glance.&lt;BR /&gt;&lt;BR /&gt;If it's Oracle's, then your DBA can analyse Oracle with their tools to see which session is "killing" the disks.&lt;BR /&gt;&lt;BR /&gt;Alex.
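&lt;BR /&gt;&lt;BR /&gt;P.S. On the Oracle side, something like this works (a rough sketch - assumes you can connect as sysdba from the oracle account; adjust to taste). It lists the sessions doing the most physical reads:&lt;BR /&gt;&lt;BR /&gt;sqlplus -s "/ as sysdba" &lt;&lt;'EOF'&lt;BR /&gt;-- top 10 sessions by physical disk reads (v$sess_io joined to v$session)&lt;BR /&gt;SELECT *&lt;BR /&gt;  FROM (SELECT s.sid, s.username, s.program, i.physical_reads, i.block_changes&lt;BR /&gt;          FROM v$session s, v$sess_io i&lt;BR /&gt;         WHERE s.sid = i.sid&lt;BR /&gt;         ORDER BY i.physical_reads DESC)&lt;BR /&gt; WHERE ROWNUM &lt;= 10;&lt;BR /&gt;EOF&lt;BR /&gt;</description>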
      <pubDate>Thu, 01 Jun 2006 11:58:00 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/performance-issue/m-p/3798703#M266171</guid>
      <dc:creator>Alex Lavrov.</dc:creator>
      <dc:date>2006-06-01T11:58:00Z</dc:date>
    </item>
    <item>
      <title>Re: Performance Issue?</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/performance-issue/m-p/3798704#M266172</link>
      <description>Here's some of the processes:&lt;BR /&gt;&lt;BR /&gt;B3692A GlancePlus C.03.86.00    11:00:51   svr104 9000/800                                           Current  Avg  High&lt;BR /&gt;-----------------------------------------------------------------------------------------------------------------------&lt;BR /&gt;CPU  Util   S     SN                 NU               U                                               | 48%   66%   84%&lt;BR /&gt;Disk Util   F                                                                                       F |100%  100%  100%&lt;BR /&gt;Mem  Util   S           SU                                            UB    B                         | 73%   73%   74%&lt;BR /&gt;Swap Util   U                UR                           R                                           | 53%   53%   53%&lt;BR /&gt;-----------------------------------------------------------------------------------------------------------------------&lt;BR /&gt;                                                     PROCESS LIST                                          Users=    4&lt;BR /&gt;                              User      CPU Util   Cum     Disk             Thd&lt;BR /&gt;Process Name  PID   PPID  Pri Name    ( 400% max)  CPU    IO Rate    RSS    Cnt&lt;BR /&gt;--------------------------------------------------------------------------------&lt;BR /&gt;gzip          3825   3406 245 oracle   86.9/72.1    35.9  166/ 137   644kb    1&lt;BR /&gt;oracleipris  19200      1 154 oracle   12.2/ 2.0    72.1 72.0/ 6.0  13.9mb    1&lt;BR /&gt;oracleipris  26732      1 148 oracle   10.2/ 0.8   851.9 10.4/ 2.5  24.5mb    1&lt;BR /&gt;oracleipris  24218      1 200 oracle   10.2/ 0.4   555.8  0.0/ 1.8  15.4mb    1&lt;BR /&gt;oracleipris   4573      1 154 oracle    8.2/ 0.3   235.9  0.0/ 0.8  19.5mb    1&lt;BR /&gt;oracleipris  25217      1 154 oracle    8.2/ 0.4   186.9 46.3/ 1.1  13.8mb    1&lt;BR /&gt;oracleipris  24594      1 148 oracle    4.9/ 1.1  1353.1 66.8/ 3.1  17.7mb    1&lt;BR /&gt;oracleipris  18426      1 154 oracle    4.7/ 1.1    40.3 57.0/ 4.7  16.9mb    1&lt;BR /&gt;oracleipris   6616      1 148 oracle    4.5/ 0.6   812.2 39.0/ 1.5  15.5mb    1&lt;BR /&gt;oracleipris   3380      1 154 oracle    3.1/ 3.7     2.4  0.0/ 0.7  10.5mb    1&lt;BR /&gt;oracleipris  19206      1 148 oracle    2.9/ 1.7    60.6 87.5/ 3.6  13.7mb    1&lt;BR /&gt;oracleipris   8205      1 154 oracle    2.2/ 1.0   793.4  2.5/ 2.4  15.5mb    1&lt;BR /&gt;oracleipris  28225      1 154 oracle    2.0/ 1.4  1666.8  0.9/ 3.1  18.0mb    1&lt;BR /&gt;emagent       2805   2795 154 oracle    2.0/ 0.5   480.0  0.9/ 0.1  59.7mb    6&lt;BR /&gt;oracleipris   2108      1 154 oracle    1.6/ 0.9   875.3  3.8/ 2.3  17.0mb    1&lt;BR /&gt;oracleipris  28355      1 154 oracle    1.6/ 1.3  1609.0  0.0/ 3.3  18.5mb    1&lt;BR /&gt;oracleipris  16548      1 154 oracle    1.6/ 1.1  1168.8  9.3/ 3.3  17.2mb    1&lt;BR /&gt;nmupm         3872   2805 179 oracle    1.3/ 1.3     0.1  0.4/ 0.4  51.5mb    1&lt;BR /&gt;oracleipris  21022      1 148 oracle    1.3/ 0.6   431.7 88.8/ 2.5  16.2mb    1&lt;BR /&gt;oracleipris  24395      1 154 oracle    1.1/ 0.6   743.6  5.9/ 1.4  18.5mb    1&lt;BR /&gt;nmupm         3874   2805 179 oracle    1.1/ 1.1     0.1  0.0/ 0.0  51.5mb    1&lt;BR /&gt;nmupm         3875   3874 179 oracle    1.1/ 1.1     0.1  0.0/ 0.0   8.0mb    1&lt;BR /&gt;midaemon      1529      1 -16 root      1.1/ 1.7  2393.1  0.0/ 0.0  22.8mb    2&lt;BR /&gt;oracleipris  13479      1 154 oracle    1.1/ 0.9   854.6  1.1/ 2.9  19.9mb    1&lt;BR /&gt;oracleipris  15244      1 148 oracle    0.9/ 0.1   111.3  102/ 0.5  18.3mb    1&lt;BR /&gt;oracleipris    775      1 154 oracle    0.9/ 0.3   300.4  1.3/ 1.1  18.1mb    1&lt;BR /&gt;ora_lgwr_ip   6003      1 156 oracle    0.4/ 1.1  1448.9  129/ 160  19.9mb    1&lt;BR /&gt;dbsnmp        5800   5797 154 oracle    0.4/ 0.3   422.6  0.0/ 0.0  21.4mb   16&lt;BR /&gt;oracleipris  25578      1 154 oracle    0.4/ 0.3   270.6  2.7/ 1.0  17.1mb    1&lt;BR /&gt;oracleipris  17471      1 154 oracle    0.2/ 0.4   192.5  7.2/ 2.0  16.2mb    1&lt;BR /&gt;ora_dbw3_ip   5993      1 156 oracle    0.2/ 0.1   183.5 70.4/21.5  11.9mb    1&lt;BR /&gt;ora_dbw1_ip   5989      1 156 oracle    0.2/ 0.1   183.6 48.1/21.5  11.9mb    1&lt;BR /&gt;oracleipris  25223      1 154 oracle    0.2/ 0.3   180.8  1.1/ 1.3  14.0mb    1&lt;BR /&gt;oracleipris  13994      1 154 oracle    0.2/ 0.3   172.2  1.8/ 1.1  14.3mb    1&lt;BR /&gt;ora_dbw4_ip   5995      1 156 oracle    0.2/ 0.1   183.1 44.0/21.5  11.9mb    1&lt;BR /&gt;&lt;BR /&gt;The gzip is the archive logs - it lasts about 45 seconds.  Even after it's done, the disks are still hammered.&lt;BR /&gt;&lt;BR /&gt;Rgds...Geoff</description>
      <pubDate>Thu, 01 Jun 2006 12:02:05 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/performance-issue/m-p/3798704#M266172</guid>
      <dc:creator>Geoff Wild</dc:creator>
      <dc:date>2006-06-01T12:02:05Z</dc:date>
    </item>
    <item>
      <title>Re: Performance Issue?</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/performance-issue/m-p/3798705#M266173</link>
      <description>Well Geoff, first...I'm no expert but we run a variety of Oracle versions, and if there is one thing I do know it's that the Oracle tuning document is just plain wrong on a few parms.  But parms isn't what you say is your problem...disk I/O is what you mention.&lt;BR /&gt;So I might look at the basics, because everything is just basics in the end.&lt;BR /&gt;&lt;BR /&gt;Oracle gzip for the archives...how often does this happen?  Based on how they set things, the DBAs should be able to set this so it doesn't become a 'too often' process.&lt;BR /&gt;Next I'd look at the disks and what is writing on them...do you have lvols on the same disk that might cause contention problems?  Here the storage vendor said you don't have to map it out...the array can handle the load.  Wrong.  I/O performance issues all over the place till we manually moved (pvmove) things around to clear up the I/O contentions.  Like I said, just doing the basics.&lt;BR /&gt;I'd especially look at where Oracle is writing its logfiles...for everything.  I had one DBA write everything to one disk (no longer with us, thankfully) - and he didn't see why I had an issue with it.&lt;BR /&gt;&lt;BR /&gt;On the application thing...that is something you should have the DBAs looking into, using some Oracle tools to tune lousy queries.  Developers don't always really test the code; they put it in production and wonder what happens when bad code tries to do lookups on big tables.  Too many DBAs don't like to do this.&lt;BR /&gt;&lt;BR /&gt;Hope I didn't ramble too much,&lt;BR /&gt;Rgrds,&lt;BR /&gt;Rita
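&lt;BR /&gt;&lt;BR /&gt;PS - by 'map it out' I mean legwork like this (just a sketch - the device names here are made up, substitute your own):&lt;BR /&gt;&lt;BR /&gt;# see which physical disks each lvol actually sits on&lt;BR /&gt;vgdisplay -v /dev/vg01&lt;BR /&gt;lvdisplay -v /dev/vg01/lvol4 | more&lt;BR /&gt;&lt;BR /&gt;# then move extents off a hot disk onto a quieter one&lt;BR /&gt;pvmove /dev/dsk/c4t10d1 /dev/dsk/c4t12d0&lt;BR /&gt;</description>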
      <pubDate>Thu, 01 Jun 2006 14:26:43 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/performance-issue/m-p/3798705#M266173</guid>
      <dc:creator>Rita C Workman</dc:creator>
      <dc:date>2006-06-01T14:26:43Z</dc:date>
    </item>
    <item>
      <title>Re: Performance Issue?</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/performance-issue/m-p/3798706#M266174</link>
      <description>Good news!  You don't need to tune your system's disk architecture!  :-) (sarcasmistic funny) :-) (new word for free too) :-)&lt;BR /&gt;&lt;BR /&gt;It's then obvious that your disk I/O isn't taking up lots of time per call - you're just doing lots and lots of I/O.&lt;BR /&gt;&lt;BR /&gt;Well, there are three things you can do, in the following order of cost:&lt;BR /&gt;&lt;BR /&gt;1) Tune the queries to run better.  This is the most obvious, but in deference to the leafy hat you're wearing - I'm going to assume that this is done already.&lt;BR /&gt;&lt;BR /&gt;2) Quit using disk bandwidth and use more memory bandwidth (which is muuuuch faster) by increasing the size of the buffer_cache.  Keep in mind that if your current hit ratio is now at 96%, *then the remaining 4% represents practically ALL of your I/O*.  So, if you can move the hit ratio 1% better, you've reduced total I/O by 25% (roughly)!&lt;BR /&gt;This is a case, not of any percentages or waits being "wrong", but just a matter of how much I/O you're trying to push.  If you can push less I/O you'll perform much better.&lt;BR /&gt;A note about this.  You'll find folks here who'd be opposed to this approach (as I used to be), but it's NOT TRUE that the system will actually run slower managing a huge cache b/c you increased the buffer cache.  Managing these areas is what Oracle does quite well, and you're not going to hurt it by doing so.  It does increase the amount of time to start the database and, to a lesser degree, stop it, but that's about all you could measure in a throughput test.&lt;BR /&gt;&lt;BR /&gt;2a) I'm adding this in here because I've seen measurable differences in performance due to the following: multiblock_read_count.&lt;BR /&gt;Change that from 8 (default in 9i, don't know about 10g) to 16 and see if your measured system disk I/O from the Unix side (glance or perfview) drops.  It might.  It will throw you off a bit however, because the cost optimizer will be biased more often to do full table scans (or full index scans) instead of index range scans (or similar).  This looks at first glance to be less efficient (because the estimated costs of the large queries you're running are higher).  But you may notice (and you may not) that total system I/O is less (ours was).  The parameter IS DYNAMIC, so if you don't like the way the day is running with the new setting, you can flip it back down to the old level quickly, with no harm done.&lt;BR /&gt;&lt;BR /&gt;3) You can put more cache in your storage server, and if that doesn't work, you can buy faster storage systems/servers.  This is almost always effective (unless you're "there" already). &lt;BR /&gt;&lt;BR /&gt;My guess is that you've already done suggestion 1), but you've not ventured into 2).  Try it - it can be a pretty inexpensive (all things considered) fix to an I/O problem.  Like I said though, provided that suggestion 1 is done already, and there is room for improvement in the cache hit ratio, even a few percent.
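&lt;BR /&gt;&lt;BR /&gt;To make 2) and 2a) concrete, here's the sort of thing I mean (a sketch - assumes sysdba access; the hit ratio formula is the standard v$sysstat one):&lt;BR /&gt;&lt;BR /&gt;sqlplus -s "/ as sysdba" &lt;&lt;'EOF'&lt;BR /&gt;-- buffer cache hit %: 1 - physical reads / (db block gets + consistent gets)&lt;BR /&gt;SELECT ROUND(100 * (1 - phy.value / (db.value + con.value)), 2) AS hit_pct&lt;BR /&gt;  FROM v$sysstat phy, v$sysstat db, v$sysstat con&lt;BR /&gt; WHERE phy.name = 'physical reads'&lt;BR /&gt;   AND db.name  = 'db block gets'&lt;BR /&gt;   AND con.name = 'consistent gets';&lt;BR /&gt;-- 2a) the read count is dynamic - easy to try, easy to flip back&lt;BR /&gt;ALTER SYSTEM SET db_file_multiblock_read_count = 16;&lt;BR /&gt;EOF&lt;BR /&gt;&lt;BR /&gt;If the new setting helps and you're running off an spfile, add SCOPE=BOTH to the ALTER to make it stick across restarts.&lt;BR /&gt;</description>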
      <pubDate>Thu, 01 Jun 2006 15:09:14 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/performance-issue/m-p/3798706#M266174</guid>
      <dc:creator>TwoProc</dc:creator>
      <dc:date>2006-06-01T15:09:14Z</dc:date>
    </item>
    <item>
      <title>Re: Performance Issue?</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/performance-issue/m-p/3798707#M266175</link>
      <description>Have the DBAs check for longops and see if you have frequent full table scans.  If you have multiple CPUs running parallel table scans on tables which are not cached, you will get high read I/O.&lt;BR /&gt;&lt;BR /&gt;Check the frequency of archive log switches.  If this is the case you would see high write I/O.&lt;BR /&gt;&lt;BR /&gt;The DBAs should run the database monitor for a while and check cache hits.  If the buffer cache is too small you can get frequent re-reads of data which should be kept in memory.  Aim for 98% cache hits.&lt;BR /&gt;&lt;BR /&gt;The DBAs can also check the v$sql view to see if there are any nasty statements being run.  These include flushing the SGA, updates without WHERE clauses on large tables, and creating large tables with SELECT clauses.&lt;BR /&gt;&lt;BR /&gt;The read-to-write ratio on the LUN may give a hint as to the type of problem which is occurring.
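&lt;BR /&gt;&lt;BR /&gt;For the v$sql and longops checks, something along these lines (a sketch, run as sysdba; adjust to taste):&lt;BR /&gt;&lt;BR /&gt;sqlplus -s "/ as sysdba" &lt;&lt;'EOF'&lt;BR /&gt;-- statements doing the most disk reads&lt;BR /&gt;SELECT *&lt;BR /&gt;  FROM (SELECT disk_reads, executions, sql_text&lt;BR /&gt;          FROM v$sql&lt;BR /&gt;         ORDER BY disk_reads DESC)&lt;BR /&gt; WHERE ROWNUM &lt;= 10;&lt;BR /&gt;-- long-running operations (full scans of big tables show up here)&lt;BR /&gt;SELECT sid, opname, target, sofar, totalwork&lt;BR /&gt;  FROM v$session_longops&lt;BR /&gt; WHERE sofar &lt; totalwork;&lt;BR /&gt;EOF&lt;BR /&gt;</description>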
      <pubDate>Thu, 01 Jun 2006 15:19:00 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/performance-issue/m-p/3798707#M266175</guid>
      <dc:creator>Bill Thorsteinson</dc:creator>
      <dc:date>2006-06-01T15:19:00Z</dc:date>
    </item>
    <item>
      <title>Re: Performance Issue?</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/performance-issue/m-p/3798708#M266176</link>
      <description>Very interesting comments so far - thanks.&lt;BR /&gt;&lt;BR /&gt;As far as the leafy hat - I'm Unix all the way - so the queries are in the hands of the developers - not me  :)&lt;BR /&gt;&lt;BR /&gt;Buffer cache - you of course mean Oracle's buffer cache, as Oracle bypasses the OS's - again - not my job man.  :)&lt;BR /&gt;&lt;BR /&gt;Currently, the OS buffer cache is: 860MB&lt;BR /&gt;&lt;BR /&gt;gzip runs every 5 minutes.&lt;BR /&gt;&lt;BR /&gt;Adding more cache to the frame - not needed - as this is the only system having the issue - and frame stats show the frame / FAs / disks are behaving fine (BTW - 2GB SAN).&lt;BR /&gt;&lt;BR /&gt;I'll check the others.&lt;BR /&gt;&lt;BR /&gt;Thanks...Geoff</description>
      <pubDate>Thu, 01 Jun 2006 15:42:57 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/performance-issue/m-p/3798708#M266176</guid>
      <dc:creator>Geoff Wild</dc:creator>
      <dc:date>2006-06-01T15:42:57Z</dc:date>
    </item>
    <item>
      <title>Re: Performance Issue?</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/performance-issue/m-p/3798709#M266177</link>
      <description>From the DBAs:&lt;BR /&gt;&lt;BR /&gt;These have all been looked at, are under continual review and/or have been set already. The only one outstanding which we may still do is adjusting the online log sizes to reduce the number of switches. Usually an online log is filling up every 5 minutes or less, even during prime time. Each online log is 250 Meg in size, so this gives you an idea of the amount of update activity going on during prime time.&lt;BR /&gt;&lt;BR /&gt;Rgds...Geoff</description>
      <pubDate>Thu, 01 Jun 2006 15:55:53 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/performance-issue/m-p/3798709#M266177</guid>
      <dc:creator>Geoff Wild</dc:creator>
      <dc:date>2006-06-01T15:55:53Z</dc:date>
    </item>
    <item>
      <title>Re: Performance Issue?</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/performance-issue/m-p/3798710#M266178</link>
      <description>I'm pretty certain that switching log files every 5 minutes at peak time isn't hurting you, providing that the alert log is telling you that you are completing the log switch (start and finish times are indicated in the log file) well within that 5 minute time frame.  However, the "log_buffer" (buffer size for redo logs) may not be large enough, and this is something that we've increased recently.  We went from 52,428,800 to 73,400,320.  I know that when we first went live with the newer, larger modules, the values for this were much lower way back when.&lt;BR /&gt;&lt;BR /&gt;Also, are your redo logs interleaved across two different file systems and disk sets?  I've found that in order to keep up with heavy I/O I've had to keep two mount points for redos and one for archive logs.  Odd-numbered log files go in one redo log mount point, and even-numbered ones go in the other.  This way, during the switch from, say, redolog1 to redolog2, you'd be reading from redolog1 and writing to the next archive log (which hopefully resides on yet another file system and set of disks) and at the same time writing to the next redolog2 on the second redolog filesystem (and set of disks).&lt;BR /&gt;&lt;BR /&gt;I always have LOTS of wasted space on the two interleaved sets of redolog disks, as my goal is a rather large set of disk mechs striped together across a nice chunk of controllers to get the job done quickly.  The goal here is to ignore "gigs" of space and consider how much hardware is answering the call to make the writes happen quickly.  And, all of this is RAID 0/1 naturally (never, ever R5 for this type of data).&lt;BR /&gt;&lt;BR /&gt;What is the size of the db buffer cache and what is the hit ratio?  What data rates (read/write) are you seeing on the busiest pvs?  And, from your previous post, can we infer that these busiest pvs are supporting the redo log areas?&lt;BR /&gt;&lt;BR /&gt;Oh yeah, these redo log mount points should be mounted "convosync=direct" - even if your multiblock_read_count is less than 32 in the Oracle database, thereby excluding the use of the system's buffer cache.  Everything else needs to allow the system to buffer-cache if the multiblock_read_count is less than 32 (that is, NOT to use convosync=direct).  The multiblock_read_count parameter is normally below 32 unless you've got a data warehouse.  If you've got an XP series, the optimal setting that I've found from heavy performance testing is 16 if your block size is 8k.  This is because 8k x 16 = 128k, which is the reported (from HP) max single data transfer size for a storage server of this type, and our testing validated this point for total max processing throughput (by user models of various types throughout the company).  The weird datapoint of whether or not to use the direct mount option at a multiblock_read_count of less than 32 came from the HP tuning team (actually the sub-team dedicated to tuning Oracle on the HP platforms), and was verified by Oracle itself.  This data point comes out b/c of something that is done differently (that is, a code branch) when the multiblock_read_count is less than 32, which requires a set of I/O at the file system level, and not just within the file itself.  Of course, if you're using raw files (and according to your last posting, I don't think you are) this doesn't apply.  Caveat emptor: this information is gathered from Oracle 9i, not 10g.&lt;BR /&gt;
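&lt;BR /&gt;To illustrate the odd/even interleaving (a sketch - the group numbers, file names and the /redo_a, /redo_b mount points are made up; the size matches the 250M your DBAs mentioned):&lt;BR /&gt;&lt;BR /&gt;sqlplus -s "/ as sysdba" &lt;&lt;'EOF'&lt;BR /&gt;-- odd groups on one mount point/disk set, even groups on the other&lt;BR /&gt;ALTER DATABASE ADD LOGFILE GROUP 1 ('/redo_a/redo01.log') SIZE 250M;&lt;BR /&gt;ALTER DATABASE ADD LOGFILE GROUP 2 ('/redo_b/redo02.log') SIZE 250M;&lt;BR /&gt;ALTER DATABASE ADD LOGFILE GROUP 3 ('/redo_a/redo03.log') SIZE 250M;&lt;BR /&gt;ALTER DATABASE ADD LOGFILE GROUP 4 ('/redo_b/redo04.log') SIZE 250M;&lt;BR /&gt;EOF&lt;BR /&gt;</description>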
      <pubDate>Thu, 01 Jun 2006 17:35:55 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/performance-issue/m-p/3798710#M266178</guid>
      <dc:creator>TwoProc</dc:creator>
      <dc:date>2006-06-01T17:35:55Z</dc:date>
    </item>
    <item>
      <title>Re: Performance Issue?</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/performance-issue/m-p/3798711#M266179</link>
      <description>A bit more on redologs...&lt;BR /&gt;&lt;BR /&gt;Re: log file sizes - we've got them at 1G each, and at peak we hit a log switch rate of 3 files per minute (hence we're looking at making them even larger for a little elbow room), but we're not really in any trouble b/c each switch completes well within the 20-second window that a 3-per-minute rate allows (which gives you an idea of the amount of peak update activity we've managed to address these types of problems for).
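&lt;BR /&gt;&lt;BR /&gt;If you want to watch your own switch rate, the timestamps are in v$log_history (a sketch, run as sysdba):&lt;BR /&gt;&lt;BR /&gt;sqlplus -s "/ as sysdba" &lt;&lt;'EOF'&lt;BR /&gt;-- log switches per hour over the last day&lt;BR /&gt;SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hr, COUNT(*) AS switches&lt;BR /&gt;  FROM v$log_history&lt;BR /&gt; WHERE first_time &gt; SYSDATE - 1&lt;BR /&gt; GROUP BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')&lt;BR /&gt; ORDER BY 1;&lt;BR /&gt;EOF&lt;BR /&gt;</description>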
      <pubDate>Thu, 01 Jun 2006 17:54:07 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/performance-issue/m-p/3798711#M266179</guid>
      <dc:creator>TwoProc</dc:creator>
      <dc:date>2006-06-01T17:54:07Z</dc:date>
    </item>
    <item>
      <title>Re: Performance Issue?</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/performance-issue/m-p/3798712#M266180</link>
      <description>Geoff,&lt;BR /&gt;&lt;BR /&gt;Is performance really a problem, or is it just the 100% shown for the disk in glance?&lt;BR /&gt;What I observed is that the disk/LUN c4t10d1 is very busy till around 1:20 and then the disk c4t10d2 is busy. The avserv times are around 17 ms for them when they are busy, but there is no significant avwait or avque. So what I feel is that there may be a slight performance issue, but not as much as the 100% disk activity figure suggests. As you are probably aware, glance reports the highest disk activity percent - so even if one particular disk is very busy it will show 100% busy in glance, and if the next moment some other disk is very busy you will again see a 100% busy disk in glance - even though it's not the same disk each time.&lt;BR /&gt;I don't think it's very bad to have a disk very busy at times.&lt;BR /&gt;The next thing would be to move some of the most used datafiles from these disks to some other disks/LUNs, if you feel that there is a perf problem - this will help scatter requests across disks, probably keeping each disk less busy.&lt;BR /&gt;But as I can see, the data seems to be more or less distributed - the c4 disks are almost equally busy except for those few instances of the d1 and d2 disks being busier.&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;Ninad
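&lt;BR /&gt;&lt;BR /&gt;PS: one way to confirm it is not the same disk pegged every interval is to watch the per-device numbers instead of glance's single bar, e.g. (a sketch - device names taken from your post):&lt;BR /&gt;&lt;BR /&gt;# %busy, avque, avwait, avserv per device, 5-second samples&lt;BR /&gt;sar -d 5 12 | grep -e c4t10d1 -e c4t10d2&lt;BR /&gt;</description>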
      <pubDate>Fri, 02 Jun 2006 03:50:03 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/performance-issue/m-p/3798712#M266180</guid>
      <dc:creator>Ninad_1</dc:creator>
      <dc:date>2006-06-02T03:50:03Z</dc:date>
    </item>
    <item>
      <title>Re: Performance Issue?</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/performance-issue/m-p/3798713#M266181</link>
      <description>Geoff,&lt;BR /&gt;&lt;BR /&gt;Is the other system at the same RAID level?&lt;BR /&gt;&lt;BR /&gt;What I do when experiencing the same issue you have is ask the DBA to check what the session is really doing.&lt;BR /&gt;The 100% is mostly caused by a full table scan (query) or by indexes which need to be re-analyzed (update statistics).&lt;BR /&gt;&lt;BR /&gt;Good luck&lt;BR /&gt;&lt;BR /&gt;Darrel</description>
      <pubDate>Fri, 02 Jun 2006 04:40:16 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/performance-issue/m-p/3798713#M266181</guid>
      <dc:creator>Darrel Louis</dc:creator>
      <dc:date>2006-06-02T04:40:16Z</dc:date>
    </item>
    <item>
      <title>Re: Performance Issue?</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/performance-issue/m-p/3798714#M266182</link>
      <description>The performance issue is that occasionally they lose transactions (3 or 4 during peak time).&lt;BR /&gt;&lt;BR /&gt;RAID is the same on all our systems.&lt;BR /&gt;&lt;BR /&gt;I tried mounting with convosync=direct and it appeared to make no difference - though not for redo.&lt;BR /&gt;&lt;BR /&gt;It is my understanding that Oracle 10g bypasses the OS buffer cache anyway.&lt;BR /&gt;&lt;BR /&gt;Thanks all!&lt;BR /&gt;&lt;BR /&gt;Rgds...Geoff&lt;BR /&gt;</description>
      <pubDate>Fri, 02 Jun 2006 07:54:01 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/performance-issue/m-p/3798714#M266182</guid>
      <dc:creator>Geoff Wild</dc:creator>
      <dc:date>2006-06-02T07:54:01Z</dc:date>
    </item>
    <item>
      <title>Re: Performance Issue?</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/performance-issue/m-p/3798715#M266183</link>
      <description>Geoff, IMHO - I wouldn't call the loss of 3 or 4 transactions (depending on how they are lost I guess, that is, error codes) at peak periods a performance issue.  Sounds like a bug in Oracle, or possibly a resource constraint (in Oracle or in HP-UX) to me.  What's the error code that Oracle provides when the failure occurs?</description>
      <pubDate>Fri, 02 Jun 2006 09:20:48 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/performance-issue/m-p/3798715#M266183</guid>
      <dc:creator>TwoProc</dc:creator>
      <dc:date>2006-06-02T09:20:48Z</dc:date>
    </item>
    <item>
      <title>Re: Performance Issue?</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/performance-issue/m-p/3798716#M266184</link>
      <description>Greetings Geoff!&lt;BR /&gt;&lt;BR /&gt;"I tried mounting with convosync=direct and it appeared to make no difference - though not for redo.&lt;BR /&gt;&lt;BR /&gt;It is my understanding that Oracle 10g bypasses the OS buffer cache anyway."&lt;BR /&gt;&lt;BR /&gt;Geoff, the rule of thumb for cooked filesystems, even for Oracle 10g, is *still* to use forced DirectIO on filesystem mounts. So that means:&lt;BR /&gt;&lt;BR /&gt;mincache=direct,delaylog,convosync=direct&lt;BR /&gt;&lt;BR /&gt;and with your buffer cache between 800MB and 1.6GB (depending on other non-Oracle activity).&lt;BR /&gt;&lt;BR /&gt;But of course RAW is always better...&lt;BR /&gt;&lt;BR /&gt;Messr Joubert (Mon Favourite Oracle ITRC Person) - Geoff, pardon the hijack...:&lt;BR /&gt;&lt;BR /&gt;We actually have somewhat of a similar issue on a large cooked environment (9i). The system is hooked up to a massive XP12000, all best practices, forced DirectIO - and my application people are always complaining of slow response and raising throughput questions. Yet I see Oracle is not able to exact more storage performance, and the XP12000 is dying to give more of what it can give. SAN and FC channels are healthy. I do notice "db_file_multiblock_read_count" is simply at 8. And my DBAs are "afraid" to tweak it upwards due to possible "repercussions" which they would not explain anyway.  What do you think? Is it dangerous to increase this parameter, and what will be the consequences?&lt;BR /&gt;</description>
      <pubDate>Fri, 02 Jun 2006 09:59:03 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/performance-issue/m-p/3798716#M266184</guid>
      <dc:creator>Alzhy</dc:creator>
      <dc:date>2006-06-02T09:59:03Z</dc:date>
    </item>
    <item>
      <title>Re: Performance Issue?</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/performance-issue/m-p/3798717#M266185</link>
      <description>As far as convosync=direct,mincache=direct goes, we had to remove that from my QA SG cluster as we saw this in the startup log after patching our system:&lt;BR /&gt;&lt;BR /&gt;vxfs mount: option not supported on this version of vxfs.&lt;BR /&gt;&lt;BR /&gt;Running 11.11.&lt;BR /&gt;&lt;BR /&gt;# fstyp -v /dev/vg50/sapdata12&lt;BR /&gt;vxfs&lt;BR /&gt;version: 4&lt;BR /&gt;f_bsize: 8192&lt;BR /&gt;f_frsize: 8192&lt;BR /&gt;f_blocks: 17670144&lt;BR /&gt;f_bfree: 43635&lt;BR /&gt;f_bavail: 43551&lt;BR /&gt;f_files: 2976&lt;BR /&gt;f_ffree: 10880&lt;BR /&gt;f_favail: 10880&lt;BR /&gt;f_fsid: 1077018636&lt;BR /&gt;f_basetype: vxfs&lt;BR /&gt;f_namemax: 254&lt;BR /&gt;f_magic: a501fcf5&lt;BR /&gt;f_featurebits: 0&lt;BR /&gt;f_flag: 16&lt;BR /&gt;f_fsindex: 7&lt;BR /&gt;f_size: 17670144&lt;BR /&gt;&lt;BR /&gt;Version 4 is the highest on 11.11 - so I'm not too sure what happened there...</description>
      <pubDate>Fri, 02 Jun 2006 10:17:11 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/performance-issue/m-p/3798717#M266185</guid>
      <dc:creator>Geoff Wild</dc:creator>
      <dc:date>2006-06-02T10:17:11Z</dc:date>
    </item>
    <item>
      <title>Re: Performance Issue?</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/performance-issue/m-p/3798718#M266186</link>
      <description>VxFS version 4.0 layout should be able to allow your Forced DirectIO mounts. What was the mount directive used?</description>
      <pubDate>Fri, 02 Jun 2006 10:21:21 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/performance-issue/m-p/3798718#M266186</guid>
      <dc:creator>Alzhy</dc:creator>
      <dc:date>2006-06-02T10:21:21Z</dc:date>
    </item>
    <item>
      <title>Re: Performance Issue?</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/performance-issue/m-p/3798719#M266187</link>
      <description>Here's a test - I created a 512MB filesystem:&lt;BR /&gt;&lt;BR /&gt;mount -o mincache=direct /dev/vg01/lvol4 /zmnt&lt;BR /&gt;vxfs mount: option not supported on this version of vxfs.&lt;BR /&gt;&lt;BR /&gt;# fstyp -v /dev/vg01/lvol4&lt;BR /&gt;vxfs&lt;BR /&gt;version: 4&lt;BR /&gt;f_bsize: 8192&lt;BR /&gt;f_frsize: 1024&lt;BR /&gt;f_blocks: 524288&lt;BR /&gt;f_bfree: 523059&lt;BR /&gt;f_bavail: 490368&lt;BR /&gt;f_files: 130796&lt;BR /&gt;f_ffree: 130764&lt;BR /&gt;f_favail: 130764&lt;BR /&gt;f_fsid: 1073807364&lt;BR /&gt;f_basetype: vxfs&lt;BR /&gt;f_namemax: 254&lt;BR /&gt;f_magic: a501fcf5&lt;BR /&gt;f_featurebits: 0&lt;BR /&gt;f_flag: 0&lt;BR /&gt;f_fsindex: 7&lt;BR /&gt;f_size: 524288&lt;BR /&gt;</description>
      <pubDate>Fri, 02 Jun 2006 10:22:37 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/performance-issue/m-p/3798719#M266187</guid>
      <dc:creator>Geoff Wild</dc:creator>
      <dc:date>2006-06-02T10:22:37Z</dc:date>
    </item>
    <item>
      <title>Re: Performance Issue?</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/performance-issue/m-p/3798720#M266188</link>
      <description>Hmm, my /etc/fstab entry is:&lt;BR /&gt;&lt;BR /&gt;/dev/vx/dsk/sapdg1/ora001  /oradata/ora001  vxfs rw,suid,largefiles,convosync=direct,mincache=direct,datainlog 0 2&lt;BR /&gt;&lt;BR /&gt;I am at 11.11 but using the VxFS/OJFS 3.5 version, which should be your default version at 11i v2.&lt;BR /&gt;&lt;BR /&gt;I think you may have a bad patch, or something is different on 11i v2.&lt;BR /&gt;&lt;BR /&gt;Can you try testing your mount via an fstab entry?&lt;BR /&gt;
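&lt;BR /&gt;PS: one more thing worth ruling out (an assumption on my part - I cannot tell from your output): mincache=direct and convosync=direct are OnlineJFS (advanced VxFS) features, and a base JFS install will refuse them with exactly that "option not supported" message, even on a version 4 layout. A quick check:&lt;BR /&gt;&lt;BR /&gt;# is the OnlineJFS product actually installed?&lt;BR /&gt;swlist -l product | grep -i jfs&lt;BR /&gt;</description>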
      <pubDate>Fri, 02 Jun 2006 10:27:15 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/performance-issue/m-p/3798720#M266188</guid>
      <dc:creator>Alzhy</dc:creator>
      <dc:date>2006-06-02T10:27:15Z</dc:date>
    </item>
    <item>
      <title>Re: Performance Issue?</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/performance-issue/m-p/3798721#M266189</link>
      <description>Some more feedback from our DBAs:&lt;BR /&gt;&lt;BR /&gt;Here are some stats from a recent statspack report:&lt;BR /&gt;&lt;BR /&gt;Instance Efficiency Percentages (Target 100%)&lt;BR /&gt;~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~&lt;BR /&gt;            Buffer Nowait %:  100.00       Redo NoWait %:    100.00&lt;BR /&gt;            Buffer  Hit   %:   97.18    In-memory Sort %:     99.98&lt;BR /&gt;            Library Hit   %:   98.81        Soft Parse %:     94.14&lt;BR /&gt;         Execute to Parse %:   89.18         Latch Hit %:     99.67&lt;BR /&gt;Parse CPU to Parse Elapsd %:   56.61     % Non-Parse CPU:     83.70&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Overall the database is running very efficiently - what's hurting is the transactional load against the database, and this is coming from the application.&lt;BR /&gt;&lt;BR /&gt;For example - the application was causing one SQL select statement to run over 1373 times a second. A change in how the application uses its own caching dropped the execution rate on this SQL statement to 149 times a second.&lt;BR /&gt;&lt;BR /&gt;There are a number of other similar candidates, with at least 5 other SQL statements being executed between 121 and 649 times a second.&lt;BR /&gt;&lt;BR /&gt;Rgds...Geoff&lt;BR /&gt;&lt;BR /&gt;As far as the OS buffer cache goes, this is how I was mounting:&lt;BR /&gt;&lt;BR /&gt;/dev/vg20iprismp/data01 /data/oracle/iprismp/data01 vxfs rw,suid,largefiles,mincache=direct,convosync=direct,delaylog,nodatainlog 0 2&lt;BR /&gt;/dev/vg20iprismp/data02 /data/oracle/iprismp/data02 vxfs rw,suid,largefiles,mincache=direct,convosync=direct,delaylog,nodatainlog 0 2&lt;BR /&gt;/dev/vg20iprismp/data03 /data/oracle/iprismp/data03 vxfs rw,suid,largefiles,mincache=direct,convosync=direct,delaylog,nodatainlog 0 2&lt;BR /&gt;/dev/vg20iprismp/data04 /data/oracle/iprismp/data04 vxfs rw,suid,largefiles,mincache=direct,convosync=direct,delaylog,nodatainlog 0 2&lt;BR /&gt;/dev/vg20iprismp/indx01 /data/oracle/iprismp/indx01 vxfs rw,suid,largefiles,mincache=direct,convosync=direct,delaylog,nodatainlog 0 2&lt;BR /&gt;/dev/vg20iprismp/exports /data/oracle/iprismp/exports vxfs rw,suid,largefiles,delaylog,datainlog 0 2&lt;BR /&gt;/dev/vg20iprismp/arch /data/oracle/iprismp/arch vxfs rw,suid,largefiles,delaylog,datainlog 0 2&lt;BR /&gt;/dev/vg20iprismp/redo01a /data/oracle/iprismp/redo01a vxfs rw,suid,largefiles,delaylog,nodatainlog 0 2&lt;BR /&gt;/dev/vg20iprismp/redo01b /data/oracle/iprismp/redo01b vxfs rw,suid,largefiles,delaylog,nodatainlog 0 2&lt;BR /&gt;/dev/vg20iprismp/redo02a /data/oracle/iprismp/redo02a vxfs rw,suid,largefiles,delaylog,nodatainlog 0 2&lt;BR /&gt;/dev/vg20iprismp/redo02b /data/oracle/iprismp/redo02b vxfs rw,suid,largefiles,delaylog,nodatainlog 0 2&lt;BR /&gt;&lt;BR /&gt;How should I be mounting these?
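&lt;BR /&gt;&lt;BR /&gt;(My guess from the advice above is that the redo entries should get the direct options too - i.e., each one would become something like the following, with only the option list changed - but corrections welcome:)&lt;BR /&gt;&lt;BR /&gt;/dev/vg20iprismp/redo01a /data/oracle/iprismp/redo01a vxfs rw,suid,largefiles,delaylog,nodatainlog,mincache=direct,convosync=direct 0 2&lt;BR /&gt;</description>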
      <pubDate>Fri, 02 Jun 2006 10:33:12 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/performance-issue/m-p/3798721#M266189</guid>
      <dc:creator>Geoff Wild</dc:creator>
      <dc:date>2006-06-02T10:33:12Z</dc:date>
    </item>
  </channel>
</rss>

