<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: Understanding System Performance in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/understanding-system-performance/m-p/3724051#M253447</link>
    <description>Interesting - I have checked the network - have compared different days of the week, and there isn't any noticeable difference.&lt;BR /&gt;&lt;BR /&gt;The login issue is on the SAP side - i.e. - can't get into the SAPGUI.&lt;BR /&gt;&lt;BR /&gt;Server logins are no issue.&lt;BR /&gt;&lt;BR /&gt;There are hundreds (anywhere from 50 to almost 500!) of ftp jobs every hour - mainly incoming - never a single failure.&lt;BR /&gt;&lt;BR /&gt;We also do anywhere from a dozen to over 200 print jobs an hour - no failures there either.&lt;BR /&gt;&lt;BR /&gt;I too am thinking it is more Oracle related - just want to have all my ducks in a row, so to speak.  For example, last week, Oracle changed the number and size of the redo log files - and we didn't have the issue last Friday.&lt;BR /&gt;&lt;BR /&gt;Great suggestions, keep them coming.&lt;BR /&gt;&lt;BR /&gt;Thanks...Geoff&lt;BR /&gt;</description>
    <pubDate>Mon, 06 Feb 2006 12:51:28 GMT</pubDate>
    <dc:creator>Geoff Wild</dc:creator>
    <dc:date>2006-02-06T12:51:28Z</dc:date>
    <item>
      <title>Understanding System Performance</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/understanding-system-performance/m-p/3724032#M253428</link>
      <description>I'm pretty good when it comes to tuning and understanding system metrics.&lt;BR /&gt;&lt;BR /&gt;I've read things like:&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://docs.hp.com/en/1219/tuningwp.html" target="_blank"&gt;http://docs.hp.com/en/1219/tuningwp.html&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://h21007.www2.hp.com/dspp/files/unprotected/devresource/Docs/TechPapers/UXPerfCookBook.pdf" target="_blank"&gt;http://h21007.www2.hp.com/dspp/files/unprotected/devresource/Docs/TechPapers/UXPerfCookBook.pdf&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;I'm struggling with understanding/explaining %wio from sar data and comparing it to metrics in MWA.&lt;BR /&gt;&lt;BR /&gt;I also run sarcheck - and it states "no I/O bottleneck".&lt;BR /&gt;&lt;BR /&gt;We've been having a minor performance issue Friday mornings between 1 and 3 AM (when international sites are accessing our fairly large SAP/Oracle system).&lt;BR /&gt;&lt;BR /&gt;System is: RP7410, 14 GB RAM, 5 active CPUs, about 1.5 TB DB on DMX 1000 in MC/SG. From an EMC point of view, the system is barely breaking a sweat.&lt;BR /&gt;&lt;BR /&gt;DBAs have noticed what appear to be I/O issues from the Oracle side.&lt;BR /&gt;&lt;BR /&gt;From the system side, I see nothing out of the ordinary.&lt;BR /&gt;&lt;BR /&gt;I've attached a fairly long txt file of sar/MWA data.&lt;BR /&gt;&lt;BR /&gt;What I don't understand is why %wio is fairly high (&amp;gt;50%) sometimes, and yet in the MWA data there is hardly any queueing, interrupt CPU is low, etc.&lt;BR /&gt;&lt;BR /&gt;I know from the man page that %wio is idle time with some process waiting for I/O (only block I/O, raw I/O, or VM pageins/swapins indicated).&lt;BR /&gt;&lt;BR /&gt;The MWA data in the txt file is quite wide - best to paste into Excel, then "DATA" -&amp;gt; "Text to columns" with | as the delimiter...&lt;BR /&gt;&lt;BR /&gt;Rgds...Geoff&lt;BR /&gt;</description>
      <pubDate>Fri, 03 Feb 2006 12:38:25 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/understanding-system-performance/m-p/3724032#M253428</guid>
      <dc:creator>Geoff Wild</dc:creator>
      <dc:date>2006-02-03T12:38:25Z</dc:date>
    </item>
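A sketch of one way to spot the high-%wio intervals Geoff describes. The threshold (50), file names, and the sample lines (which only mimic the time/%usr/%sys/%wio/%idle columns of "sar -u" output) are illustrative, not taken from the attached data:

```shell
# Flag sampling intervals where %wio exceeds 50 (column 4 of sar -u output).
# The sample file below is invented to stand in for real sar output.
cat <<'EOF' > /tmp/sar_u.sample
01:00:00    %usr    %sys    %wio   %idle
01:05:00      12       5      55      28
01:10:00      10       4      20      66
EOF
awk 'NR > 1 && $4 > 50 { print $1, "high %wio:", $4 }' /tmp/sar_u.sample
```

On a live HP-UX box the same awk filter would be fed from `sar -u <interval> <count>` instead of a sample file.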
    <item>
      <title>Re: Understanding System Performance</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/understanding-system-performance/m-p/3724033#M253429</link>
      <description>Your sar stats appear healthy to me. Those %wio values are actually normal, as they are on ours -- we consistently have &amp;gt;20% %wio on our environments (Oracle/DB CRM apps - &amp;gt;10 CPUs, &amp;gt;32GB memory and 3+TB instances). During backups we consistently get even higher %wio.&lt;BR /&gt;&lt;BR /&gt;Your best gauge of whether you have an I/O problem is "sar -d" - check for disks that have a queue length in excess of 0.&lt;BR /&gt;&lt;BR /&gt;Do you have vmstat output as well from sarcheck?&lt;BR /&gt;&lt;BR /&gt;What you are possibly facing is an Oracle tuning issue. Possibly SGA sizing needs to be re-studied.&lt;BR /&gt;</description>
      <pubDate>Fri, 03 Feb 2006 14:00:14 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/understanding-system-performance/m-p/3724033#M253429</guid>
      <dc:creator>Alzhy</dc:creator>
      <dc:date>2006-02-03T14:00:14Z</dc:date>
    </item>
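The "sar -d" check suggested above can be sketched as a filter on the avque column. The sample lines are invented (though the avque figures 2.50 and 1.90 echo the sarcheck numbers quoted later in the thread), and 0.50 is used as the baseline because HP-UX sar -d reports 0.50 for an idle queue, as Geoff observes further down:

```shell
# List devices whose average queue (avque, column 3 of sar -d output) exceeds
# the idle baseline of 0.50. Sample columns mimic:
#   device %busy avque r+w/s blks/s avwait avserv
cat <<'EOF' > /tmp/sar_d.sample
c28t5d0   8.79   2.50   45   720   12.3   8.1
c0t6d0   11.54   1.90   52   830    9.8   6.1
c4t0d1    2.10   0.50   11   176    0.0   5.4
EOF
awk '$3 > 0.5 { printf "%s queuing: avque=%.2f\n", $1, $3 }' /tmp/sar_d.sample
```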
    <item>
      <title>Re: Understanding System Performance</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/understanding-system-performance/m-p/3724034#M253430</link>
      <description>I just want to add that iowait can also be network related or an application problem (maybe deadlock).</description>
      <pubDate>Fri, 03 Feb 2006 14:08:35 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/understanding-system-performance/m-p/3724034#M253430</guid>
      <dc:creator>Ivan Ferreira</dc:creator>
      <dc:date>2006-02-03T14:08:35Z</dc:date>
    </item>
    <item>
      <title>Re: Understanding System Performance</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/understanding-system-performance/m-p/3724035#M253431</link>
      <description>Queue length for all disks (except root) seems to be fixed at 0.50 no matter what... once in a while, one or two hit 0.54.&lt;BR /&gt;&lt;BR /&gt;My local disks, on average over that 2.5 hour period, were nil/0.5 and 6.42/4.95.&lt;BR /&gt;&lt;BR /&gt;The local disks are 15K rpm and are mirrored across the controllers.&lt;BR /&gt;&lt;BR /&gt;Strange that vg00's disks, c28t5d0 and c0t6d0, have different queues?&lt;BR /&gt;&lt;BR /&gt;vg01 contains additional swap and /var/adm/crash - which really aren't used... so that explains their lack of stats...&lt;BR /&gt;&lt;BR /&gt;Rgds...Geoff&lt;BR /&gt;</description>
      <pubDate>Fri, 03 Feb 2006 14:42:21 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/understanding-system-performance/m-p/3724035#M253431</guid>
      <dc:creator>Geoff Wild</dc:creator>
      <dc:date>2006-02-03T14:42:21Z</dc:date>
    </item>
    <item>
      <title>Re: Understanding System Performance</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/understanding-system-performance/m-p/3724036#M253432</link>
      <description>&amp;gt;&amp;gt;&amp;gt; What I don't understand is why %wio is fairly high (&amp;gt;50%) sometimes and yet in MWA data there is hardly any queueing&lt;BR /&gt;&lt;BR /&gt;This would be a typical picture when a small number (one?) of processes is reading through a lot of not-recently-used data, for example for a report.&lt;BR /&gt;Basically the process will be doing read-compute-read-compute for a long time. The compute is likely small compared to the I/O completion time. There is no chance for an I/O queue to build because the process will only issue the next read after the compute for the prior one is done.&lt;BR /&gt;Only parallel queries would change this.&lt;BR /&gt;&lt;BR /&gt;Now in the daytime, when the system gets busier, such a process's wait time is filled up by compute cycles for other processes and thus on a macro/system level will be labeled 'cpu busy', but on a micro/process level the wait is still happening. And those other processes can also issue more independent/concurrent I/Os, generating the I/O queues.&lt;BR /&gt;&lt;BR /&gt;So I concur with the other observations that the system may simply be doing what it is supposed to be doing. I would however make sure to run an Oracle Statspack with snaps bracketing the 1am - 3am window to double-check that it is simply busy. Specifically I would verify that the average I/O time is similar to the daytime average, suggesting that the wait is normal but just more visible. And glance over the top queries, of course.&lt;BR /&gt;&lt;BR /&gt;I recently helped with a system with a similar complaint of high waits. It turned out that the SAN device was shared and other systems created excessive (backup) load on it. This caused the I/O response time for the system we were looking at to degrade to a point where it impacted performance.&lt;BR /&gt;&lt;BR /&gt;Actually... we were kinda lucky catching that.&lt;BR /&gt;For us it was a 1/2-hour glitch in a 2-hour run that was done to validate a supposed performance boost.&lt;BR /&gt;Had the situation been reversed - the glitch been 2 hours for a 1/2-hour test - then we might have falsely concluded that the performance improvement was broken. As it was, we saw the improvement in general and just needed to explain the glitch.&lt;BR /&gt;</description>
      <pubDate>Fri, 03 Feb 2006 14:44:44 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/understanding-system-performance/m-p/3724036#M253432</guid>
      <dc:creator>Hein van den Heuvel</dc:creator>
      <dc:date>2006-02-03T14:44:44Z</dc:date>
    </item>
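The serial read-compute picture above can be put into rough numbers. The 8 ms I/O and 1 ms compute times below are assumed purely for illustration:

```shell
# One process alternating an 8 ms read with 1 ms of compute leaves the CPU
# idle-waiting ~89% of the time, yet the disk queue never exceeds one request:
# exactly the "high %wio, no queueing" combination under discussion.
awk 'BEGIN { io = 8; cpu = 1; printf "%%wio ~= %.0f%%, queue <= 1\n", 100 * io / (io + cpu) }'
```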
    <item>
      <title>Re: Understanding System Performance</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/understanding-system-performance/m-p/3724037#M253433</link>
      <description>Hmm...&lt;BR /&gt;On which disk are you getting queueing of 6.42/4.95 - the c28t5d0 one, or the c0t6d0 one, which is the internal one? Is c28t5d0 possibly on an external SCSI enclosure?&lt;BR /&gt;&lt;BR /&gt;If so, what HBA is it connected to? Is it a combo U320/GigE one?&lt;BR /&gt;</description>
      <pubDate>Fri, 03 Feb 2006 15:04:57 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/understanding-system-performance/m-p/3724037#M253433</guid>
      <dc:creator>Alzhy</dc:creator>
      <dc:date>2006-02-03T15:04:57Z</dc:date>
    </item>
    <item>
      <title>Re: Understanding System Performance</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/understanding-system-performance/m-p/3724038#M253434</link>
      <description>Both are internal SCSI - on separate controllers...</description>
      <pubDate>Fri, 03 Feb 2006 16:31:31 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/understanding-system-performance/m-p/3724038#M253434</guid>
      <dc:creator>Geoff Wild</dc:creator>
      <dc:date>2006-02-03T16:31:31Z</dc:date>
    </item>
    <item>
      <title>Re: Understanding System Performance</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/understanding-system-performance/m-p/3724039#M253435</link>
      <description>But which one is experiencing the queuing? Would it be possible that an Ignite backup to a local DDS3/4 tape is going on at those times? If the tape is chained to the same SCSI bus, that would explain the queueing.&lt;BR /&gt;</description>
      <pubDate>Fri, 03 Feb 2006 16:41:26 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/understanding-system-performance/m-p/3724039#M253435</guid>
      <dc:creator>Alzhy</dc:creator>
      <dc:date>2006-02-03T16:41:26Z</dc:date>
    </item>
    <item>
      <title>Re: Understanding System Performance</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/understanding-system-performance/m-p/3724040#M253436</link>
      <description>Nope - we Ignite over the net - and not at that time... both have queueing...</description>
      <pubDate>Fri, 03 Feb 2006 16:52:55 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/understanding-system-performance/m-p/3724040#M253436</guid>
      <dc:creator>Geoff Wild</dc:creator>
      <dc:date>2006-02-03T16:52:55Z</dc:date>
    </item>
    <item>
      <title>Re: Understanding System Performance</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/understanding-system-performance/m-p/3724041#M253437</link>
      <description>Then get stats on which lvol is busy or is possibly introducing the queuing -- "glance -i" or from MeasureWare archives...&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Fri, 03 Feb 2006 16:59:02 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/understanding-system-performance/m-p/3724041#M253437</guid>
      <dc:creator>Alzhy</dc:creator>
      <dc:date>2006-02-03T16:59:02Z</dc:date>
    </item>
    <item>
      <title>Re: Understanding System Performance</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/understanding-system-performance/m-p/3724042#M253438</link>
      <description>Well...looks like /usr and sometimes /opt&lt;BR /&gt;&lt;BR /&gt;According to sarcheck:&lt;BR /&gt;&lt;BR /&gt;The disk device c28t5d0 was busy an average of 8.79 percent of the time and had an average queue depth of 2.5 (when occupied). This indicates that the device is not a performance bottleneck. The average service time reported for this device and its accompanying disk subsystem was 8.1 milliseconds. This is relatively fast. Service time is the delay between the time a request was sent to a device and the time that the device signaled completion of the request. The disk device c28t5d0 was reported by pvdisplay as being a 33.91 gigabyte disk. 14224 megabytes of space was reported as being free and 20496 megabytes have been allocated. This disk device was a part of volume group /dev/vg00 and contained 15 logical volumes. At least one logical volume occupied noncontiguous physical extents on the disk. Performance will suffer when logical volumes are busy and not mirrored because the disk's read/write heads are likely to travel back and forth in an inefficient manner. &lt;BR /&gt;&lt;BR /&gt;Logical volume /dev/vg00/lvol6, 949 block gap &lt;BR /&gt;Logical volume /dev/vg00/lvol6, 1359 block gap &lt;BR /&gt;Logical volume /dev/vg00/lvol7, 1245 block gap &lt;BR /&gt;Logical volume /dev/vg00/lvol9, 724 block gap &lt;BR /&gt;Logical volume /dev/vg00/lvol6, 1669 block gap &lt;BR /&gt;Logical volume /dev/vg00/lvol6, 353 block gap &lt;BR /&gt;Logical volume /dev/vg00/lvol6, 663 block gap &lt;BR /&gt;Logical volume /dev/vg00/lvol6, 247 block gap &lt;BR /&gt;&lt;BR /&gt;The disk device c0t6d0 was busy an average of 11.54 percent of the time and had an average queue depth of 1.9 (when occupied). This indicates that the device is not a performance bottleneck. The average service time reported for this device and its accompanying disk subsystem was 6.1 milliseconds. This is relatively fast. 
The disk device c0t6d0 was reported by pvdisplay as being a 33.91 gigabyte disk. 14224 megabytes of space was reported as being free and 20496 megabytes have been allocated. This disk device was a part of volume group /dev/vg00 and contained 15 logical volumes. At least one logical volume occupied noncontiguous physical extents on the disk. &lt;BR /&gt;&lt;BR /&gt;Logical volume /dev/vg00/lvol6, 1547 block gap&lt;BR /&gt;&lt;BR /&gt;So, /opt has some gaps - because it has been extended a few times...&lt;BR /&gt;&lt;BR /&gt;Thanks for the info so far - points will be assigned at a later date...&lt;BR /&gt;&lt;BR /&gt;Rgds...Geoff</description>
      <pubDate>Fri, 03 Feb 2006 17:58:11 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/understanding-system-performance/m-p/3724042#M253438</guid>
      <dc:creator>Geoff Wild</dc:creator>
      <dc:date>2006-02-03T17:58:11Z</dc:date>
    </item>
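As a quick sanity check on the sarcheck figures quoted above, the free and allocated space should add up to the pvdisplay disk size (numbers taken directly from the post):

```shell
# 14224 MB free + 20496 MB allocated, converted to GB, should match the
# 33.91 GB that pvdisplay reported for c28t5d0 and c0t6d0.
awk 'BEGIN { printf "%.2f GB\n", (14224 + 20496) / 1024 }'
```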
    <item>
      <title>Re: Understanding System Performance</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/understanding-system-performance/m-p/3724043#M253439</link>
      <description>I'm thinking of changing the mount options to:&lt;BR /&gt;&lt;BR /&gt;delaylog, nodatainlog, mincache=direct, convosync=direct &lt;BR /&gt;&lt;BR /&gt;for Oracle redo and data files.&lt;BR /&gt;&lt;BR /&gt;What do you think?&lt;BR /&gt;&lt;BR /&gt;Rgds...Geoff&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Mon, 06 Feb 2006 10:47:02 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/understanding-system-performance/m-p/3724043#M253439</guid>
      <dc:creator>Geoff Wild</dc:creator>
      <dc:date>2006-02-06T10:47:02Z</dc:date>
    </item>
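For reference, a hypothetical sketch of how the proposed options would be applied; the volume group, lvol, and mount point names are invented, and the command is only echoed rather than run:

```shell
# The VxFS options Geoff proposes, assembled as they would appear in a mount
# command (or /etc/fstab). mincache=direct/convosync=direct bypass the buffer
# cache, so Oracle data is cached once in the SGA rather than twice;
# delaylog,nodatainlog reduce intent-log overhead.
OPTS="delaylog,nodatainlog,mincache=direct,convosync=direct"
echo "mount -F vxfs -o $OPTS /dev/vgora/lvdata /oracle/data"
```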
    <item>
      <title>Re: Understanding System Performance</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/understanding-system-performance/m-p/3724044#M253440</link>
      <description>Hmm... are you saying your Oracle mount points have never been on direct I/O and all this time were doubly cached? Then I would think that may very well be contributing to the problem you're experiencing. Best practice for Oracle storage on cooked filesystems has always been to enable direct I/O on the Oracle datafile filesystems (OJFS/VxFS)...&lt;BR /&gt;</description>
      <pubDate>Mon, 06 Feb 2006 10:56:43 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/understanding-system-performance/m-p/3724044#M253440</guid>
      <dc:creator>Alzhy</dc:creator>
      <dc:date>2006-02-06T10:56:43Z</dc:date>
    </item>
    <item>
      <title>Re: Understanding System Performance</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/understanding-system-performance/m-p/3724045#M253441</link>
      <description>Hi Geoff,&lt;BR /&gt;&lt;BR /&gt;Like Nelson, I agree with your suggested mount option change, but I do not expect a positive impact on the problem described.&lt;BR /&gt;&lt;BR /&gt;Those options will avoid double buffering and with that reduce CPU and memory pressure, but they will not reduce I/Os - they just make the (CPU) path for the I/Os shorter.&lt;BR /&gt;There is even a risk of increased I/O load, if it turns out that your SGA(s) were under-allocated and the buffer cache was actively helping to avoid I/Os.&lt;BR /&gt;&lt;BR /&gt;Maybe I am a little slow here, but please help me understand why you think this is a problem in the first place.&lt;BR /&gt;Sure, you have some %wio time. So what? The system is busy waiting for an I/O to come through and has nothing else to do. Great! No problem. The only way to make that better is to teach your system to look into the future and pre-fetch the data which the application is going to need next. Not a minor task.&lt;BR /&gt;OK, I am obviously a little sarcastic here, but seriously: is there, for example, user feedback that the end-user performance is not where it is expected to be? Has that been qualified and quantified?&lt;BR /&gt;&lt;BR /&gt;Cheers,&lt;BR /&gt;Hein.</description>
      <pubDate>Mon, 06 Feb 2006 11:49:45 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/understanding-system-performance/m-p/3724045#M253441</guid>
      <dc:creator>Hein van den Heuvel</dc:creator>
      <dc:date>2006-02-06T11:49:45Z</dc:date>
    </item>
    <item>
      <title>Re: Understanding System Performance</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/understanding-system-performance/m-p/3724046#M253442</link>
      <description>Yes, the performance has been quantified - during the night we have several batch jobs... but because we have users in multiple time zones, users who sign in at 1:00 AM MST are seeing a degradation in service - sometimes up to a few minutes after hitting Enter. Sometimes they can't even log in... Statspack has been run...&lt;BR /&gt;&lt;BR /&gt;Strange thing is, it only happens Fridays, and yet Friday is no different from any other day as far as the number and type of batch jobs...&lt;BR /&gt;&lt;BR /&gt;And yes, right now we have no mount options (a carry-over from the original ServiceGuard setup).&lt;BR /&gt;&lt;BR /&gt;Rgds...Geoff&lt;BR /&gt;</description>
      <pubDate>Mon, 06 Feb 2006 11:56:36 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/understanding-system-performance/m-p/3724046#M253442</guid>
      <dc:creator>Geoff Wild</dc:creator>
      <dc:date>2006-02-06T11:56:36Z</dc:date>
    </item>
    <item>
      <title>Re: Understanding System Performance</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/understanding-system-performance/m-p/3724047#M253443</link>
      <description>Geoff,&lt;BR /&gt;&lt;BR /&gt;Please allow me to go off the map a little (OK, a lot) -&lt;BR /&gt;&lt;BR /&gt;I recently had this too, and it was a problem with... (believe it or not) entries in a rarp table! Is there maintenance (switching from primary to alternate or vice versa, backups, bouncing services or servers, etc.) on your rarp (DNS) machines/servers/services at this time on Friday nights, maybe for backups or regularly scheduled maintenance?&lt;BR /&gt;&lt;BR /&gt;I know that's a long shot (so much so that I'm reluctant to mention it), but it may be worth checking out...</description>
      <pubDate>Mon, 06 Feb 2006 12:04:15 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/understanding-system-performance/m-p/3724047#M253443</guid>
      <dc:creator>TwoProc</dc:creator>
      <dc:date>2006-02-06T12:04:15Z</dc:date>
    </item>
    <item>
      <title>Re: Understanding System Performance</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/understanding-system-performance/m-p/3724048#M253444</link>
      <description>John - Interesting - but no. The regular maintenance window (though not always entirely used) is Saturday 7 PM - Sunday 5 AM.&lt;BR /&gt;&lt;BR /&gt;DNS is fine - no switching...&lt;BR /&gt;&lt;BR /&gt;Rgds...Geoff</description>
      <pubDate>Mon, 06 Feb 2006 12:07:14 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/understanding-system-performance/m-p/3724048#M253444</guid>
      <dc:creator>Geoff Wild</dc:creator>
      <dc:date>2006-02-06T12:07:14Z</dc:date>
    </item>
    <item>
      <title>Re: Understanding System Performance</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/understanding-system-performance/m-p/3724049#M253445</link>
      <description>Nothing is really jumping out at me in all of this, but a seat-of-the-pants feeling has me suspecting the network. I wouldn't expect changing mount options to have a huge impact, and certainly not on something that appears to be so time related.&lt;BR /&gt;&lt;BR /&gt;Since you have MeasureWare, I suggest that you get the output from a "good" time interval and a "bad" one and compare them. I would use a fairly short sampling period.&lt;BR /&gt;&lt;BR /&gt;The one thing I would look very closely at is the batch jobs that are being run at this time. Is there a unique batch job that is being run? Perhaps one that ran great a few months ago until someone deleted a "useless" index? Is there database maintenance during this time? Possibly deleting and recreating an index so that queries might go sequential during this interval? Are there any VxFS snapshots at this time?&lt;BR /&gt;&lt;BR /&gt;Oh, and don't overlook something that could cause this kind of problem as the machine loads up -- a bad timeslice setting.</description>
      <pubDate>Mon, 06 Feb 2006 12:23:16 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/understanding-system-performance/m-p/3724049#M253445</guid>
      <dc:creator>A. Clay Stephenson</dc:creator>
      <dc:date>2006-02-06T12:23:16Z</dc:date>
    </item>
    <item>
      <title>Re: Understanding System Performance</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/understanding-system-performance/m-p/3724050#M253446</link>
      <description>John may be on to something.&lt;BR /&gt;The problem might not be on the box, and for sure not on the Oracle side of the box. The network would be the suspect. Could it be a case of the old 'cleaning service' joke?! (Every evening at 1am a new guard/cleaning-service shift begins, and they unplug a router to plug in a coffee maker, removing all traces of that activity by 3am as they go. :-)&lt;BR /&gt;&lt;BR /&gt;Specifically when you mention that "Sometimes, they can't even log in...".&lt;BR /&gt;&lt;BR /&gt;Is that logging in to HP-UX, or maybe making an Oracle Listener connection? Do you see anything at the HP-UX level (memory, swap space, process count) which might slow down process creation?&lt;BR /&gt;&lt;BR /&gt;Maybe you can come up with a silly benchmark process where you have two streams of logins every 5 or 10 minutes: one originating locally on the box, not requiring any physical network, just logical, and the other from a select international site.&lt;BR /&gt;Each locally measures and records the response time of every attempt, highlighting the UTC time of day when the response time is substandard.&lt;BR /&gt;After a few days you compare the results.&lt;BR /&gt;&lt;BR /&gt;fwiw,&lt;BR /&gt;Hein.&lt;BR /&gt;</description>
      <pubDate>Mon, 06 Feb 2006 12:26:10 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/understanding-system-performance/m-p/3724050#M253446</guid>
      <dc:creator>Hein van den Heuvel</dc:creator>
      <dc:date>2006-02-06T12:26:10Z</dc:date>
    </item>
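Hein's two-stream benchmark could be sketched roughly as below. PROBE, the log path, and the timing approach are all invented; the thread names no specific probe command (in practice it might be an sqlplus or SAPGUI login attempt):

```shell
# Time one "login-like" probe and append a UTC-stamped elapsed time to a log.
# Run this from cron every 5-10 minutes, once locally on the box and once from
# a remote site, then compare the two logs for substandard intervals.
PROBE="true"                       # stand-in for a real login probe command
LOG=/tmp/login_probe.log
start=$(date -u +%s)
$PROBE
end=$(date -u +%s)
printf '%s elapsed=%ss\n' "$(date -u '+%Y-%m-%dT%H:%M:%SZ')" "$((end - start))" >> "$LOG"
```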
    <item>
      <title>Re: Understanding System Performance</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/understanding-system-performance/m-p/3724051#M253447</link>
      <description>Interesting - I have checked the network - have compared different days of the week, and there isn't any noticeable difference.&lt;BR /&gt;&lt;BR /&gt;The login issue is on the SAP side - i.e. - can't get into the SAPGUI.&lt;BR /&gt;&lt;BR /&gt;Server logins are no issue.&lt;BR /&gt;&lt;BR /&gt;There are hundreds (anywhere from 50 to almost 500!) of ftp jobs every hour - mainly incoming - never a single failure.&lt;BR /&gt;&lt;BR /&gt;We also do anywhere from a dozen to over 200 print jobs an hour - no failures there either.&lt;BR /&gt;&lt;BR /&gt;I too am thinking it is more Oracle related - just want to have all my ducks in a row, so to speak.  For example, last week, Oracle changed the number and size of the redo log files - and we didn't have the issue last Friday.&lt;BR /&gt;&lt;BR /&gt;Great suggestions, keep them coming.&lt;BR /&gt;&lt;BR /&gt;Thanks...Geoff&lt;BR /&gt;</description>
      <pubDate>Mon, 06 Feb 2006 12:51:28 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/understanding-system-performance/m-p/3724051#M253447</guid>
      <dc:creator>Geoff Wild</dc:creator>
      <dc:date>2006-02-06T12:51:28Z</dc:date>
    </item>
  </channel>
</rss>

