<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: F/S Buffer cache &amp; CPU &amp; WIO in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/f-s-buffer-cache-amp-cpu-amp-wio/m-p/2604030#M929680</link>
    <description>Hi Andy:&lt;BR /&gt;&lt;BR /&gt;If you keep less in the file buffer cache, then I'd expect some "penalty" for having to wait to do an I/O.  I think the important factor is overall performance.  If it's better, then you have moved in the right direction.&lt;BR /&gt;&lt;BR /&gt;Regards!&lt;BR /&gt;&lt;BR /&gt;...JRF...</description>
    <pubDate>Tue, 30 Oct 2001 14:54:32 GMT</pubDate>
    <dc:creator>James R. Ferguson</dc:creator>
    <dc:date>2001-10-30T14:54:32Z</dc:date>
    <item>
      <title>F/S Buffer cache &amp; CPU &amp; WIO</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/f-s-buffer-cache-amp-cpu-amp-wio/m-p/2604029#M929679</link>
      <description>The filesystem buffer cache &amp;amp; kernel param dbc_max_pct have been mentioned numerous times on this forum as being, by default, set too high.&lt;BR /&gt;&lt;BR /&gt;My question concerns an N4000/44 4xCPU 4GB system where dbc_max_pct was reduced from 50% to 15% because of &amp;gt; 95% memory usage, pageouts &amp;amp; deactivations/reactivations, and reclaiming of memory pages from the f/s cache.&lt;BR /&gt;&lt;BR /&gt;The subsequent result of this change was a vast improvement in memory usage (&amp;lt; 95%), f/s caching levels still in the high 90%'s, and a reduction in CPU % system, but there was also an additional 15% CPU WIO (from 14% to 31% avg).&lt;BR /&gt;&lt;BR /&gt;Can anyone explain why this should happen, and should I be concerned about it?</description>
      <pubDate>Tue, 30 Oct 2001 14:39:08 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/f-s-buffer-cache-amp-cpu-amp-wio/m-p/2604029#M929679</guid>
      <dc:creator>Andy Zybert</dc:creator>
      <dc:date>2001-10-30T14:39:08Z</dc:date>
    </item>
    <item>
      <title>Re: F/S Buffer cache &amp; CPU &amp; WIO</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/f-s-buffer-cache-amp-cpu-amp-wio/m-p/2604030#M929680</link>
      <description>Hi Andy:&lt;BR /&gt;&lt;BR /&gt;If you keep less in the file buffer cache, then I'd expect some "penalty" for having to wait to do an I/O.  I think the important factor is overall performance.  If it's better, then you have moved in the right direction.&lt;BR /&gt;&lt;BR /&gt;Regards!&lt;BR /&gt;&lt;BR /&gt;...JRF...</description>
      <pubDate>Tue, 30 Oct 2001 14:54:32 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/f-s-buffer-cache-amp-cpu-amp-wio/m-p/2604030#M929680</guid>
      <dc:creator>James R. Ferguson</dc:creator>
      <dc:date>2001-10-30T14:54:32Z</dc:date>
    </item>
    <item>
      <title>Re: F/S Buffer cache &amp; CPU &amp; WIO</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/f-s-buffer-cache-amp-cpu-amp-wio/m-p/2604031#M929681</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;Your description should be more precise, because the memory statistics depend largely on what software you run on the N4000 and what kind of filesystems you create. For example:&lt;BR /&gt;&lt;BR /&gt;The buffer cache is not used for raw devices, such as an Oracle database on raw volumes.&lt;BR /&gt;&lt;BR /&gt;VxFS does not use the buffer cache in the same way as HFS filesystems.&lt;BR /&gt;&lt;BR /&gt;It can also be normal behavior for 90% of the buffer cache to be in use: it means that the pages read from the FS are loaded into the buffer cache and served from it afterwards, without direct access to the disk and FS. A utilization near 100% is better than one near 5%, which would mean the system has to access the disk/fs directly without using memory.&lt;BR /&gt;&lt;BR /&gt;PJA</description>
      <pubDate>Tue, 30 Oct 2001 14:54:51 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/f-s-buffer-cache-amp-cpu-amp-wio/m-p/2604031#M929681</guid>
      <dc:creator>JACQUET</dc:creator>
      <dc:date>2001-10-30T14:54:51Z</dc:date>
    </item>
    <item>
      <title>Re: F/S Buffer cache &amp; CPU &amp; WIO</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/f-s-buffer-cache-amp-cpu-amp-wio/m-p/2604032#M929682</link>
      <description>Andy,&lt;BR /&gt;&lt;BR /&gt;1. You do need to pay attention to %wio, but this metric alone doesn't mean you need to increase your buffer cache size. At 15% of 4GB, roughly 600MB, you have correctly configured the buffer cache. If you are still seeing a lot of %wio with this buffer cache, it means your I/O subsystem is not adequate and response times may not be good. So it's time to concentrate on improving the disk subsystem: arrange the logical volumes better, find the hot disks, or consider striping.&lt;BR /&gt;&lt;BR /&gt;2. What about your application? Has it improved since you reduced the buffer cache size?&lt;BR /&gt;&lt;BR /&gt;3. When your %wio is more than 15%, try running sar -d and check for disks that are more than 50% busy and have high avserv times. You need to move the data off those disks onto the least-used disks.&lt;BR /&gt;&lt;BR /&gt;-Sri</description>
      <pubDate>Tue, 30 Oct 2001 15:12:27 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/f-s-buffer-cache-amp-cpu-amp-wio/m-p/2604032#M929682</guid>
      <dc:creator>Sridhar Bhaskarla</dc:creator>
      <dc:date>2001-10-30T15:12:27Z</dc:date>
    </item>
    <item>
      <title>Re: F/S Buffer cache &amp; CPU &amp; WIO</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/f-s-buffer-cache-amp-cpu-amp-wio/m-p/2604033#M929683</link>
      <description>Sridhar,&lt;BR /&gt;&lt;BR /&gt;I agree with all your comments, but in the absence of any evidence from glance, gpm, sar, or the other tools to suggest an I/O bottleneck on the disks, why the sudden increase in CPU %WIO?&lt;BR /&gt;&lt;BR /&gt;The only other option is that the application needs tuning to run more efficiently.&lt;BR /&gt;</description>
      <pubDate>Tue, 30 Oct 2001 15:58:40 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/f-s-buffer-cache-amp-cpu-amp-wio/m-p/2604033#M929683</guid>
      <dc:creator>Andy Zybert</dc:creator>
      <dc:date>2001-10-30T15:58:40Z</dc:date>
    </item>
    <item>
      <title>Re: F/S Buffer cache &amp; CPU &amp; WIO</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/f-s-buffer-cache-amp-cpu-amp-wio/m-p/2604034#M929684</link>
      <description>Andy,&lt;BR /&gt;&lt;BR /&gt;&amp;gt;&amp;gt; The subsequent result of this change was a vast improvement in memory usage &amp;lt; 95%&lt;BR /&gt;&lt;BR /&gt;A good sign.&lt;BR /&gt;&lt;BR /&gt;&amp;gt;&amp;gt; f/s caching levels were still in the high 90%'s&lt;BR /&gt;&lt;BR /&gt;Which is good, since it means the cache is serving the data rather than the disk. The higher the %, the better.&lt;BR /&gt;&lt;BR /&gt;&amp;gt;&amp;gt; and a reduction in cpu % system, but there was also an additional 15% cpu WIO (from 14% to 31% avg)&lt;BR /&gt;&lt;BR /&gt;It shows that your system is running I/O-intensive applications which are doing a lot of I/O. If the performance is not acceptable to the users, you can look at the disk configuration: how the LVs are set up, which disks are busy (sar -d helps), the type of disks being used, the connection to the disks (Fibre? SCSI?), and the application itself.&lt;BR /&gt;&lt;BR /&gt;I think the buffer cache configuration is fine. With 600MB configured, you can leave it as it is and look at the I/O piece.&lt;BR /&gt;&lt;BR /&gt;HTH&lt;BR /&gt;raj</description>
      <pubDate>Tue, 30 Oct 2001 16:08:22 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/f-s-buffer-cache-amp-cpu-amp-wio/m-p/2604034#M929684</guid>
      <dc:creator>Roger Baptiste</dc:creator>
      <dc:date>2001-10-30T16:08:22Z</dc:date>
    </item>
    <item>
      <title>Re: F/S Buffer cache &amp; CPU &amp; WIO</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/f-s-buffer-cache-amp-cpu-amp-wio/m-p/2604035#M929685</link>
      <description>Hi Andy:&lt;BR /&gt;&lt;BR /&gt;To answer your specific question: you are comparing apples to oranges. Yes, %WIO did increase, but because you removed a significantly bigger bottleneck (memory), you increased the I/O subsystem's ability to become a bottleneck. Its role as a bottleneck has grown, BUT overall system throughput has gone up.&lt;BR /&gt;&lt;BR /&gt;In essence, you have removed a small pipe, and now the next-smaller pipe plays a bigger role in impeding the flow of water to the faucet.&lt;BR /&gt;&lt;BR /&gt;Clay</description>
      <pubDate>Tue, 30 Oct 2001 16:18:55 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/f-s-buffer-cache-amp-cpu-amp-wio/m-p/2604035#M929685</guid>
      <dc:creator>A. Clay Stephenson</dc:creator>
      <dc:date>2001-10-30T16:18:55Z</dc:date>
    </item>
    <item>
      <title>Re: F/S Buffer cache &amp; CPU &amp; WIO</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/f-s-buffer-cache-amp-cpu-amp-wio/m-p/2604036#M929686</link>
      <description>Andy,&lt;BR /&gt;&lt;BR /&gt;The application is always a default concern. It needs tuning.&lt;BR /&gt;&lt;BR /&gt;The reason you are seeing the %wio is that the CPU is waiting to get rid of the I/O part of the processes. It has successfully completed the processing except for the I/O portion and is waiting for it to get into the buffer. If the traffic between the buffer cache and the disk subsystem is fast enough, you don't see this sign. If there is plenty of buffer cache, the CPU can simply dump the I/O into the buffer cache and get rid of the process. But this consumes memory, and the kernel will spend more time flushing the I/O from the buffer to the disks.&lt;BR /&gt;&lt;BR /&gt;The most important thing to look at is the avserv time in sar -d. This is the time in ms it took that particular LUN/disk to process a request. If it is high, there is a problem, though not necessarily a bottleneck, depending on your application. But if %busy is more than 70 and avserv is also high, it is a bottleneck.&lt;BR /&gt;&lt;BR /&gt;-Sri</description>
      <pubDate>Tue, 30 Oct 2001 16:23:33 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/f-s-buffer-cache-amp-cpu-amp-wio/m-p/2604036#M929686</guid>
      <dc:creator>Sridhar Bhaskarla</dc:creator>
      <dc:date>2001-10-30T16:23:33Z</dc:date>
    </item>
    <item>
      <title>Re: F/S Buffer cache &amp; CPU &amp; WIO</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/f-s-buffer-cache-amp-cpu-amp-wio/m-p/2604037#M929687</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;I use Informix, and when I see %wio high it is generally because there are not enough I/O threads. I can increase these threads in the application. I can also analyse their throughput (in a gross sense, not like MeasureWare). I do not know, but you might like to look at increasing the I/O throughput from the app side.&lt;BR /&gt;&lt;BR /&gt;Tim</description>
      <pubDate>Mon, 11 Feb 2002 15:45:32 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/f-s-buffer-cache-amp-cpu-amp-wio/m-p/2604037#M929687</guid>
      <dc:creator>Tim D Fulford</dc:creator>
      <dc:date>2002-02-11T15:45:32Z</dc:date>
    </item>
  </channel>
</rss>