<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: sar -d heavy load on root disk in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/sar-d-heavy-load-on-root-disk/m-p/2559403#M918218</link>
    <description>Hi Nancy:&lt;BR /&gt;&lt;BR /&gt;Sometimes memory is the critical factor for performance; however, from sar -d alone we cannot say for certain that it is a memory problem.&lt;BR /&gt;&lt;BR /&gt;Try sar -du 5 5 and look at 'avque' and '%wio'.&lt;BR /&gt;&lt;BR /&gt;avque: the average number of requests waiting to access the device. A disk queue length greater than three often means I/O requests spend more time waiting in the queue than actually being serviced.&lt;BR /&gt;&lt;BR /&gt;%wio: idle time with some process waiting for I/O (only block I/O, raw I/O, or VM page-ins/swap-ins are counted). Performance will suffer if this number is over 30; that points to an I/O bottleneck.&lt;BR /&gt;&lt;BR /&gt;vmstat&lt;BR /&gt;if 'po' &amp;gt; 0 and free memory ('free' * 4 KB) is below 2 MB, that indicates a memory bottleneck.</description>
    <pubDate>Tue, 31 Jul 2001 17:53:27 GMT</pubDate>
    <dc:creator>Victor_5</dc:creator>
    <dc:date>2001-07-31T17:53:27Z</dc:date>
    <item>
      <title>sar -d heavy load on root disk</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/sar-d-heavy-load-on-root-disk/m-p/2559402#M918217</link>
      <description>sar -d is showing heavy load on my root disk.  I am wondering what would cause this.  The load is higher when xpath is pulling data from the database and transferring it to a mainframe.  Could memory be an issue?  Output from sar -d:&lt;BR /&gt;Average    c0t5d0    2.63   86.62       4      32  249.27   16.71     &lt;BR /&gt;&lt;BR /&gt;Any help would be appreciated.</description>
      <pubDate>Tue, 31 Jul 2001 17:33:54 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/sar-d-heavy-load-on-root-disk/m-p/2559402#M918217</guid>
      <dc:creator>nancy rippey</dc:creator>
      <dc:date>2001-07-31T17:33:54Z</dc:date>
    </item>
    <item>
      <title>Re: sar -d heavy load on root disk</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/sar-d-heavy-load-on-root-disk/m-p/2559403#M918218</link>
      <description>Hi Nancy:&lt;BR /&gt;&lt;BR /&gt;Sometimes memory is the critical factor for performance; however, from sar -d alone we cannot say for certain that it is a memory problem.&lt;BR /&gt;&lt;BR /&gt;Try sar -du 5 5 and look at 'avque' and '%wio'.&lt;BR /&gt;&lt;BR /&gt;avque: the average number of requests waiting to access the device. A disk queue length greater than three often means I/O requests spend more time waiting in the queue than actually being serviced.&lt;BR /&gt;&lt;BR /&gt;%wio: idle time with some process waiting for I/O (only block I/O, raw I/O, or VM page-ins/swap-ins are counted). Performance will suffer if this number is over 30; that points to an I/O bottleneck.&lt;BR /&gt;&lt;BR /&gt;vmstat&lt;BR /&gt;if 'po' &amp;gt; 0 and free memory ('free' * 4 KB) is below 2 MB, that indicates a memory bottleneck.</description>
      <pubDate>Tue, 31 Jul 2001 17:53:27 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/sar-d-heavy-load-on-root-disk/m-p/2559403#M918218</guid>
      <dc:creator>Victor_5</dc:creator>
      <dc:date>2001-07-31T17:53:27Z</dc:date>
    </item>
    <item>
      <title>Re: sar -d heavy load on root disk</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/sar-d-heavy-load-on-root-disk/m-p/2559404#M918219</link>
      <description>Shawn,&lt;BR /&gt;Thanks for the info.&lt;BR /&gt;The sar -du output is&lt;BR /&gt;Average    c0t5d0    2.02   64.63       3      26  187.06   17.05  &lt;BR /&gt;The avque of 64.63 looks pretty bad.&lt;BR /&gt;From vmstat, po=0 and free=109560.&lt;BR /&gt;From the sar -du output I guess I can figure on a disk bottleneck, but with this being my root drive I am at a loss as to how to redistribute resources.&lt;BR /&gt;</description>
      <pubDate>Tue, 31 Jul 2001 18:11:25 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/sar-d-heavy-load-on-root-disk/m-p/2559404#M918219</guid>
      <dc:creator>nancy rippey</dc:creator>
      <dc:date>2001-07-31T18:11:25Z</dc:date>
    </item>
    <item>
      <title>Re: sar -d heavy load on root disk</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/sar-d-heavy-load-on-root-disk/m-p/2559405#M918220</link>
      <description>Hi Nancy:&lt;BR /&gt;&lt;BR /&gt;It looks like an I/O problem. Yes, regarding performance tuning, it is hard to fix an I/O bottleneck on a running system, especially for 24x7 shops. The idea is to balance the workload across different channels and different disks; however, that is a lot of work, and those changes will definitely affect your business.&lt;BR /&gt;&lt;BR /&gt;Two ideas:&lt;BR /&gt;1. Add memory.&lt;BR /&gt;Generally this is the easiest way; although memory is not your main bottleneck, you will see a big performance improvement afterwards.&lt;BR /&gt;&lt;BR /&gt;2. Balance your workload.&lt;BR /&gt;I mean, try to balance the I/O on your root disk by moving non-critical tasks to another disk. The details depend on your working environment.&lt;BR /&gt;&lt;BR /&gt;Hope it helps.</description>
      <pubDate>Tue, 31 Jul 2001 18:40:03 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/sar-d-heavy-load-on-root-disk/m-p/2559405#M918220</guid>
      <dc:creator>Victor_5</dc:creator>
      <dc:date>2001-07-31T18:40:03Z</dc:date>
    </item>
    <item>
      <title>Re: sar -d heavy load on root disk</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/sar-d-heavy-load-on-root-disk/m-p/2559406#M918221</link>
      <description>Hi Nancy,&lt;BR /&gt;&lt;BR /&gt;I found a few documents on HP's Knowledge Base that were quite helpful in determining where bottlenecks were occurring on a system.  If you search the knowledge base, look for the following document IDs.&lt;BR /&gt;&lt;BR /&gt;S3100002312A&lt;BR /&gt;S3100002312B&lt;BR /&gt;S3100002312C&lt;BR /&gt;&lt;BR /&gt;The document is titled "Sys Adm: determining the cause of system performance problems".&lt;BR /&gt;&lt;BR /&gt;If you can, try moving non-root-specific data or applications off your root disks.  This could include users' home directories.  If you have any secondary swap devices configured, you may want to move them to another disk.&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;Dave</description>
      <pubDate>Wed, 01 Aug 2001 00:57:25 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/sar-d-heavy-load-on-root-disk/m-p/2559406#M918221</guid>
      <dc:creator>David Allen</dc:creator>
      <dc:date>2001-08-01T00:57:25Z</dc:date>
    </item>
    <item>
      <title>Re: sar -d heavy load on root disk</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/sar-d-heavy-load-on-root-disk/m-p/2559407#M918222</link>
      <description>Hi&lt;BR /&gt;&lt;BR /&gt;Just a couple of things.&lt;BR /&gt;&lt;BR /&gt;1/ Do you have Glance or MeasureWare?  If so, you can look at the I/Os per filesystem.  In Glance it is&lt;BR /&gt;&lt;BR /&gt;glance -i&lt;BR /&gt;&lt;BR /&gt;For MeasureWare there are metrics beginning with LV_ (say LV_READ_RATE, LV_WRITE_RATE, LV_BYTE_RATE).&lt;BR /&gt;&lt;BR /&gt;Also, what is the average time taken for each I/O?  I assume your disk utilisation is 100%, so all you need to do is find out the I/O rate for the root disk.&lt;BR /&gt;&lt;BR /&gt;IOtime[ms] = disk_util[%] * 10 / IO_Rt[IO/s]&lt;BR /&gt;&lt;BR /&gt;So, say you have an I/O rate of 100 IO/s; then the average time per I/O would be 10 ms/IO (which is not too good).  This would suggest thrashing.&lt;BR /&gt;&lt;BR /&gt;2/ Are there only the standard filesystems on vg00?  If there are any others, or any database raw LVs, look at them.&lt;BR /&gt;&lt;BR /&gt;3/ Do you have more than one swap area in vg00?  If so, and the priorities are wrong, you could also get thrashing.&lt;BR /&gt;&lt;BR /&gt;Phew.&lt;BR /&gt;&lt;BR /&gt;Good hunting&lt;BR /&gt;&lt;BR /&gt;Tim</description>
      <pubDate>Wed, 01 Aug 2001 09:44:12 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/sar-d-heavy-load-on-root-disk/m-p/2559407#M918222</guid>
      <dc:creator>Tim D Fulford</dc:creator>
      <dc:date>2001-08-01T09:44:12Z</dc:date>
    </item>
    <item>
      <title>Re: sar -d heavy load on root disk</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/sar-d-heavy-load-on-root-disk/m-p/2559408#M918223</link>
      <description>You may want to see if you're swapping much: swapinfo.&lt;BR /&gt;&lt;BR /&gt;You might also consider rearranging the distribution of your vg00 lvols to balance the data across more than one disk, ideally on more than one SCSI bus.&lt;BR /&gt;&lt;BR /&gt;Make sure your root disk has a high SCSI ID relative to the other devices on the SCSI bus it's sharing.&lt;BR /&gt;&lt;BR /&gt;Analyse your&lt;BR /&gt;ioscan -fnk&lt;BR /&gt;strings /etc/lvmtab&lt;BR /&gt;and vgdisplay -v vg00&lt;BR /&gt;&lt;BR /&gt;Later,&lt;BR /&gt;Bill</description>
      <pubDate>Wed, 01 Aug 2001 09:54:06 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/sar-d-heavy-load-on-root-disk/m-p/2559408#M918223</guid>
      <dc:creator>Bill McNAMARA_1</dc:creator>
      <dc:date>2001-08-01T09:54:06Z</dc:date>
    </item>
    <item>
      <title>Re: sar -d heavy load on root disk</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/sar-d-heavy-load-on-root-disk/m-p/2559409#M918224</link>
      <description>Nancy,&lt;BR /&gt;&lt;BR /&gt;As Lt. Columbo says, "Just one more thing!"&lt;BR /&gt;&lt;BR /&gt;You supplied a sar -du 5 5 output but left off the bottom (average) line.&lt;BR /&gt;&lt;BR /&gt;My suspicion is that the bottom line will show a low %usr, a high %sys, and a %idle of zero (I expect %wio will also be high).  If so, you have an I/O bottleneck.&lt;BR /&gt;&lt;BR /&gt;But according to your blk/s figures (32 or 26), you have an average time per I/O of about 30+ ms/IO (depending on which sar -du you take, and assuming %idle is 0).  If that is the case, your disk IS THRASHING.&lt;BR /&gt;&lt;BR /&gt;Just thought you might like to know.&lt;BR /&gt;&lt;BR /&gt;Tim&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Wed, 01 Aug 2001 10:17:59 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/sar-d-heavy-load-on-root-disk/m-p/2559409#M918224</guid>
      <dc:creator>Tim D Fulford</dc:creator>
      <dc:date>2001-08-01T10:17:59Z</dc:date>
    </item>
    <item>
      <title>Re: sar -d heavy load on root disk</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/sar-d-heavy-load-on-root-disk/m-p/2559410#M918225</link>
      <description>Thanks for all the input.  My root volume contains only the standard filesystems, thrashing is occurring occasionally, I have only my primary swap, and I have not had any problems with swapping.  My current plan is to have additional memory installed next weekend.  I will let you know how performance improves.  These performance issues are always so fun to figure out!</description>
      <pubDate>Wed, 01 Aug 2001 13:54:23 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/sar-d-heavy-load-on-root-disk/m-p/2559410#M918225</guid>
      <dc:creator>nancy rippey</dc:creator>
      <dc:date>2001-08-01T13:54:23Z</dc:date>
    </item>
  </channel>
</rss>

