<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>VxFSD Physical IO Rate in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/vxfsd-physical-io-rate/m-p/4242356#M330158</link>
    <description>Discussion thread: sustained high physical IO from the vxfsd process on an HP-UX server running an SAP Central Instance with an Oracle database.</description>
    <pubDate>Wed, 30 Jul 2008 17:44:05 GMT</pubDate>
    <dc:creator>Tim Nelson</dc:creator>
    <dc:date>2008-07-30T17:44:05Z</dc:date>
    <item>
      <title>VxFSD Physical IO Rate</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/vxfsd-physical-io-rate/m-p/4242351#M330153</link>
      <description>Hi everyone,&lt;BR /&gt;&lt;BR /&gt;We have an issue with the vxfsd process on one HP-UX box.  It is running an SAP Central Instance with an Oracle database.  Basically we have a bottleneck on our internal disks, with average wait (avwait) around 60 and avserv around 30:&lt;BR /&gt;&lt;BR /&gt;Average    c1t1d0  100.00   16.54     271    5684   58.69   26.57&lt;BR /&gt;Average    c1t0d0   68.53   12.88     260    5640   30.94   17.28&lt;BR /&gt;&lt;BR /&gt;Our Oracle files are not located on those disks; instead, the server is attached to an EVA8000.  The internal disks hold only OS filesystems (/home, /opt, /stand, etc.) and swap.&lt;BR /&gt;&lt;BR /&gt;We are not seeing paging out on this system, but the swapinfo output looks like this:&lt;BR /&gt;&lt;BR /&gt;             Mb      Mb      Mb   PCT  START/      Mb&lt;BR /&gt;TYPE      AVAIL    USED    FREE  USED   LIMIT RESERVE  PRI  NAME&lt;BR /&gt;dev        4096    3957     139   97%       0       -    1  /dev/vg00/lvol2&lt;BR /&gt;dev       40960    4192   36768   10%       0       -    1  /dev/vg00/swap2&lt;BR /&gt;reserve       -   13593  -13593&lt;BR /&gt;memory    16353    3045   13308   19%&lt;BR /&gt;total     61409   24787   36622   40%       -       0    -&lt;BR /&gt;&lt;BR /&gt;vmstat sample:&lt;BR /&gt;r  b  w      avm    free  re  at  pi  po  fr  de  sr    in    sy    cs  us sy id&lt;BR /&gt;1  5  0  2371675   73705  19   4   4   0   0   0  12  2407 38196  1080   3  4 93&lt;BR /&gt;3  2  0  1822742   73628  34   2   2   0   0   0   0  1820 16363   501  16  2 82&lt;BR /&gt;3  2  0  1822742   73628  27   1   1   0   0   0   0  2020 17412   677   3 11 86&lt;BR /&gt;3  2  0  1822742   73628  25   2   0   0   0   0   0  1964 33100   607   0  1 99&lt;BR /&gt;3  2  0  1822742   73628  20   1   0   0   0   0   0  1967 27591   588   1  2 97&lt;BR /&gt;3  2  0  1822742   73628  16   0   0   0   0   0   0  1917 23275   544   0  1 99&lt;BR /&gt;3  2  0  1822742   73628  12   0   0   0   0   0   0  1852 19620   488   0  1 99&lt;BR /&gt;3  2  0  1822742   73628   9   0   0   0   0   0   0  1862 17392   507   3  1 96&lt;BR /&gt;3  2  0  1822742   73628   7   0   0   0   0   0   0  1959 17825   697   2  2 96&lt;BR /&gt;3  2  0  1822742   73628   5   0   0   0   0   0   0  1883 14771   609   6  1 93&lt;BR /&gt;&lt;BR /&gt;Given all this, we would like to know why the vxfsd process has a sustained IO rate around 500 (I'm attaching the GlancePlus screenshot)...&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;     Luis Angulo</description>
      <pubDate>Tue, 29 Jul 2008 17:15:35 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/vxfsd-physical-io-rate/m-p/4242351#M330153</guid>
      <dc:creator>Luis Ernesto Angulo</dc:creator>
      <dc:date>2008-07-29T17:15:35Z</dc:date>
    </item>
    <item>
      <title>Re: VxFSD Physical IO Rate</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/vxfsd-physical-io-rate/m-p/4242352#M330154</link>
      <description>Two things, as you have two separate questions here.&lt;BR /&gt;&lt;BR /&gt;1)  Internal OS disks are typically slow, but maybe not that slow.  There is certainly something on your OS disks generating a lot of IO.  What else is on there besides just the OS, what log files?  I assume your sample was taken over a period of time?  Review your per-process IO stats (other than vxfsd), then review the open files for those processes.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;2)  My first guess, and this is only a guess: sustained vxfsd IO, flushing buffers?&lt;BR /&gt;What does kmtune or kctune | grep dbc_max show?&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Tue, 29 Jul 2008 19:50:52 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/vxfsd-physical-io-rate/m-p/4242352#M330154</guid>
      <dc:creator>Tim Nelson</dc:creator>
      <dc:date>2008-07-29T19:50:52Z</dc:date>
    </item>
    <item>
      <title>Re: VxFSD Physical IO Rate</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/vxfsd-physical-io-rate/m-p/4242353#M330155</link>
      <description>Thanks for your reply Tim.&lt;BR /&gt;&lt;BR /&gt;1.  The internal disks have, as I said, only OS-related files, nothing out of the ordinary.  SAP binaries, Oracle binaries, datafiles, controlfiles and log files are located outside that volume group (on the EVA array).  I've been monitoring this environment for the last two weeks (I was hired by a new company three weeks ago) and this behavior has been consistent.&lt;BR /&gt;&lt;BR /&gt;2.  Here are the requested parameters:&lt;BR /&gt;dbc_max_pct                          8  8            Immed&lt;BR /&gt;dbc_min_pct                          5  Default      Immed&lt;BR /&gt;These settings are OK according to the SAP installation guide.  Given that Oracle has its own buffers, and SAP as well, the OS buffer cache is capped at 8%.&lt;BR /&gt;&lt;BR /&gt;What I was wondering is: why, if our server isn't paging out to disk, does the swapinfo output show about 4GB used in the dev type?</description>
      <pubDate>Tue, 29 Jul 2008 21:14:57 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/vxfsd-physical-io-rate/m-p/4242353#M330155</guid>
      <dc:creator>Luis Ernesto Angulo</dc:creator>
      <dc:date>2008-07-29T21:14:57Z</dc:date>
    </item>
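    <!--
    The buffer cache and swap questions above can be verified directly on the box; a minimal sketch, assuming HP-UX 11i with kctune available (older releases use kmtune instead):

    ```shell
    # Show the dynamic buffer cache limits (dbc_max_pct / dbc_min_pct)
    kctune -q dbc_max_pct
    kctune -q dbc_min_pct

    # Swap usage in MB, including the "reserve" line that explains
    # space showing as used even when no page-outs are occurring
    swapinfo -tam

    # Per-disk activity, 12 samples at 5-second intervals,
    # to confirm whether the c1t0d0/c1t1d0 load is sustained
    sar -d 5 12
    ```
    -->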
    <item>
      <title>Re: VxFSD Physical IO Rate</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/vxfsd-physical-io-rate/m-p/4242354#M330156</link>
      <description>You have 2 swap partitions on vg00, and they are probably using the same disks.  You could create swap on a different disk that is not on the EVA.&lt;BR /&gt;&lt;BR /&gt;Evidently you do not seem to have enough memory, and you are doing a significant amount of paging in and paging out.</description>
      <pubDate>Wed, 30 Jul 2008 11:09:00 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/vxfsd-physical-io-rate/m-p/4242354#M330156</guid>
      <dc:creator>Emil Velez</dc:creator>
      <dc:date>2008-07-30T11:09:00Z</dc:date>
    </item>
    <item>
      <title>Re: VxFSD Physical IO Rate</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/vxfsd-physical-io-rate/m-p/4242355#M330157</link>
      <description>That's strange; I've been monitoring paging out over the same period of time and I haven't seen it happening.  If I run vmstat at long intervals the "po" column is consistently 0, but the internal disks are experiencing higher average wait times over the same period...&lt;BR /&gt;&lt;BR /&gt;The logical volumes assigned to swap are located in vg00 (internal disks), which are the ones experiencing contention.&lt;BR /&gt;&lt;BR /&gt;Again, if I were experiencing issues with swap, I think the processes with a lot of IO would be "swapper" or "vhand" or something like that, and I would see paging out at some point... Since my issues are related to the IO caused by the vxfsd process, I've been looking at the kernel parameters related to JFS, but I don't know if a change here would benefit us:&lt;BR /&gt;&lt;BR /&gt;vx_ninode 131072&lt;BR /&gt;vxfs_ifree_timelag -1&lt;BR /&gt;&lt;BR /&gt;I tried changing the vx_ninode parameter to 40000 but it didn't help.  This behavior is consistent in the PRD and QAS environments (same configuration).  We are using JFS 3.5, and the kernel parameters are configured according to the SAP installation guide (ECC 6.0).&lt;BR /&gt;&lt;BR /&gt;I don't know what else I could check to avoid this bottleneck in our environments...&lt;BR /&gt;&lt;BR /&gt;Regards,</description>
      <pubDate>Wed, 30 Jul 2008 17:21:12 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/vxfsd-physical-io-rate/m-p/4242355#M330157</guid>
      <dc:creator>Luis Ernesto Angulo</dc:creator>
      <dc:date>2008-07-30T17:21:12Z</dc:date>
    </item>
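    <!--
    The JFS inode cache tunables mentioned above can be inspected like this; a sketch, assuming HP-UX 11i v2 or later, where vx_ninode and vxfs_ifree_timelag are exposed through kctune:

    ```shell
    # Current values of the VxFS inode cache tunables
    kctune -q vx_ninode
    kctune -q vxfs_ifree_timelag

    # Per-filesystem VxFS tunables for the mount point vxfsd is writing to
    vxtunefs /usr
    ```
    -->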
    <item>
      <title>Re: VxFSD Physical IO Rate</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/vxfsd-physical-io-rate/m-p/4242356#M330158</link>
      <description>Looks like you are already investigating some of this.  This doc may help:&lt;BR /&gt;&lt;A href="http://docs.hp.com/en/5992-0732/5992-0732.pdf" target="_blank"&gt;http://docs.hp.com/en/5992-0732/5992-0732.pdf&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;One thought.&lt;BR /&gt;&lt;BR /&gt;vxfsd manages all vxfs filesystems.  Just because its IO is high and your root disks are busy, it may not be accurate to blame all of the OS IO on vxfsd.&lt;BR /&gt;&lt;BR /&gt;Something else still needs to be reading/writing to the OS disks to force vxfsd to be active, unless there is some huge bug that needs to be patched?  What?&lt;BR /&gt;&lt;BR /&gt;I see no deactivations taking place in your vmstat, unless I am misaligning the columns.  Yes, sometime in the past 4GB was placed in your second device swap area, but without po activity it is just sitting there.&lt;BR /&gt;&lt;BR /&gt;8% dbc_max is OK; that is still 1.2GB.  For a database server I would reduce it to 600-800 MB max and give the memory to the application.&lt;BR /&gt;&lt;BR /&gt;A queue of 16+ on the OS disk and 200+ io/s?  Something has to be reading/writing at a pretty good clip, especially if this is sustained.  Maybe we need some more sar stats over a longer period?  Even on my busiest server I typically see only 10-20 IOs on my root disk, if that.&lt;BR /&gt;&lt;BR /&gt;Got some log files that you have noticed growing like crazy?&lt;BR /&gt;&lt;BR /&gt;Hate to ask this...  When is the last time you rebooted?</description>
      <pubDate>Wed, 30 Jul 2008 17:44:05 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/vxfsd-physical-io-rate/m-p/4242356#M330158</guid>
      <dc:creator>Tim Nelson</dc:creator>
      <dc:date>2008-07-30T17:44:05Z</dc:date>
    </item>
    <item>
      <title>Re: VxFSD Physical IO Rate</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/vxfsd-physical-io-rate/m-p/4242357#M330159</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;     The server has been up for 73 days, which isn't much time to me.  I'm blaming the vxfsd process because I've been watching the physical IO rate per process closely (through GlancePlus), and it is the only one that maintains a sustained rate above 450.  The IO activity on the volume group where the DB resides is very low.  So, besides that process, there's no other one with that much IO... Also, as I said, all of the application log files are outside the internal disks, so I really don't understand why that process is stressing the internal disks all day... &lt;BR /&gt;&lt;BR /&gt;     I'm going to search the bug database to see if I'm hitting a known VxFS issue, but honestly I've been researching these last few days and I haven't found a similar situation described by anybody else...&lt;BR /&gt;&lt;BR /&gt;     Any other guess or idea is welcome...&lt;BR /&gt;&lt;BR /&gt;Thanks,</description>
      <pubDate>Wed, 30 Jul 2008 19:27:45 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/vxfsd-physical-io-rate/m-p/4242357#M330159</guid>
      <dc:creator>Luis Ernesto Angulo</dc:creator>
      <dc:date>2008-07-30T19:27:45Z</dc:date>
    </item>
    <item>
      <title>Re: VxFSD Physical IO Rate</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/vxfsd-physical-io-rate/m-p/4242358#M330160</link>
      <description>Please post your solution, as I have a similar issue with high vxfsd IO.  My sar output shows:&lt;BR /&gt;Average    c1t1d0   16.48    9.15      41     570   27.81   20.06&lt;BR /&gt;Average    c1t0d0   16.88    9.56      37     551   25.78   21.72&lt;BR /&gt;</description>
      <pubDate>Wed, 30 Jul 2008 20:10:15 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/vxfsd-physical-io-rate/m-p/4242358#M330160</guid>
      <dc:creator>Tingli</dc:creator>
      <dc:date>2008-07-30T20:10:15Z</dc:date>
    </item>
    <item>
      <title>Re: VxFSD Physical IO Rate</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/vxfsd-physical-io-rate/m-p/4242359#M330161</link>
      <description>I haven't found the solution yet...&lt;BR /&gt;&lt;BR /&gt;At least now I know that the vxfsd process is only writing to our /usr logical volume (raw writes), but I'm still not able to find what's triggering this behavior...&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Wed, 30 Jul 2008 21:20:35 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/vxfsd-physical-io-rate/m-p/4242359#M330161</guid>
      <dc:creator>Luis Ernesto Angulo</dc:creator>
      <dc:date>2008-07-30T21:20:35Z</dc:date>
    </item>
  </channel>
</rss>

