<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Hard VS. Soft Page Faults in Operating System - OpenVMS</title>
    <link>https://community.hpe.com/t5/operating-system-openvms/hard-vs-soft-page-faults/m-p/3427301#M65668</link>
    <description>Is there a way, using MONITOR, to differentiate between hard and soft page faults?&lt;BR /&gt;&lt;BR /&gt;Chaim</description>
    <pubDate>Mon, 22 Nov 2004 04:57:57 GMT</pubDate>
    <dc:creator>Chaim Budnick</dc:creator>
    <dc:date>2004-11-22T04:57:57Z</dc:date>
    <item>
      <title>Hard VS. Soft Page Faults</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/hard-vs-soft-page-faults/m-p/3427301#M65668</link>
      <description>Is there a way, using MONITOR, to differentiate between hard and soft page faults?&lt;BR /&gt;&lt;BR /&gt;Chaim</description>
      <pubDate>Mon, 22 Nov 2004 04:57:57 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/hard-vs-soft-page-faults/m-p/3427301#M65668</guid>
      <dc:creator>Chaim Budnick</dc:creator>
      <dc:date>2004-11-22T04:57:57Z</dc:date>
    </item>
    <item>
      <title>Re: Hard VS. Soft Page Faults</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/hard-vs-soft-page-faults/m-p/3427302#M65669</link>
      <description>Yes.&lt;BR /&gt;&lt;A href="http://h71000.www7.hp.com/doc/72final/6491/6491pro_006.html#index_x_227" target="_blank"&gt;http://h71000.www7.hp.com/doc/72final/6491/6491pro_006.html#index_x_227&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;And when using "MONITOR SYSTEM", there is a vertical bar (|) that separates the hard and soft faults.</description>
      <pubDate>Mon, 22 Nov 2004 05:09:16 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/hard-vs-soft-page-faults/m-p/3427302#M65669</guid>
      <dc:creator>Uwe Zessin</dc:creator>
      <dc:date>2004-11-22T05:09:16Z</dc:date>
    </item>
    <item>
      <title>Re: Hard VS. Soft Page Faults</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/hard-vs-soft-page-faults/m-p/3427303#M65670</link>
      <description>Which side is hard and which is soft?&lt;BR /&gt;&lt;BR /&gt;Chaim</description>
      <pubDate>Mon, 22 Nov 2004 05:12:06 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/hard-vs-soft-page-faults/m-p/3427303#M65670</guid>
      <dc:creator>Chaim Budnick</dc:creator>
      <dc:date>2004-11-22T05:12:06Z</dc:date>
    </item>
    <item>
      <title>Re: Hard VS. Soft Page Faults</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/hard-vs-soft-page-faults/m-p/3427304#M65671</link>
      <description>&lt;A href="http://h71000.www7.hp.com/doc/732FINAL/6048/6048pro_004.html#index_x_1494" target="_blank"&gt;http://h71000.www7.hp.com/doc/732FINAL/6048/6048pro_004.html#index_x_1494&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;""In the Page Fault segment, the page read I/O rate is indicated by a vertical bar. The bar provides a visual estimate of the proportion of the total page fault rate that caused read I/O operations (the hard fault rate). The hard fault rate appears to the left of the bar.""</description>
      <pubDate>Mon, 22 Nov 2004 05:20:08 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/hard-vs-soft-page-faults/m-p/3427304#M65671</guid>
      <dc:creator>Uwe Zessin</dc:creator>
      <dc:date>2004-11-22T05:20:08Z</dc:date>
    </item>
    <item>
      <title>Re: Hard VS. Soft Page Faults</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/hard-vs-soft-page-faults/m-p/3427305#M65672</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;- on the MONITOR PAGE screen, the 2nd to 5th lines are hard faults;&lt;BR /&gt; the next 5 lines are soft faults&lt;BR /&gt;(Free List to Wrt In Progress)&lt;BR /&gt;&lt;BR /&gt;- on the MONITOR SYSTEM screen,&lt;BR /&gt;  under Page Fault Rate, a vertical bar separates hard faults (left) from soft faults.&lt;BR /&gt;&lt;BR /&gt;hth,&lt;BR /&gt;HF</description>
      <pubDate>Mon, 22 Nov 2004 05:28:57 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/hard-vs-soft-page-faults/m-p/3427305#M65672</guid>
      <dc:creator>faris_3</dc:creator>
      <dc:date>2004-11-22T05:28:57Z</dc:date>
    </item>
    <item>
      <title>Re: Hard VS. Soft Page Faults</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/hard-vs-soft-page-faults/m-p/3427306#M65673</link>
      <description>Thanks for all the replies !!&lt;BR /&gt;&lt;BR /&gt;There are a large number of page faults (between 300 and 1000), most of which (probably around 90%) are SOFT.&lt;BR /&gt;&lt;BR /&gt;Can this magnitude of soft PFs seriously affect performance?&lt;BR /&gt;&lt;BR /&gt;Chaim&lt;BR /&gt;&lt;BR /&gt;P.S. This is a DSM application</description>
      <pubDate>Mon, 22 Nov 2004 05:35:28 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/hard-vs-soft-page-faults/m-p/3427306#M65673</guid>
      <dc:creator>Chaim Budnick</dc:creator>
      <dc:date>2004-11-22T05:35:28Z</dc:date>
    </item>
    <item>
      <title>Re: Hard VS. Soft Page Faults</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/hard-vs-soft-page-faults/m-p/3427307#M65674</link>
      <description>It depends on whether the hardware has enough horsepower - 300-1000 does not sound like much to me. A soft fault means just shuffling a few pointers around, so it is not as 'bad' as a hard fault. Remember that you will never be able to eliminate all faults - VMS uses the page fault handler to get images into memory - there is no separate program loader.</description>
      <pubDate>Mon, 22 Nov 2004 05:39:31 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/hard-vs-soft-page-faults/m-p/3427307#M65674</guid>
      <dc:creator>Uwe Zessin</dc:creator>
      <dc:date>2004-11-22T05:39:31Z</dc:date>
    </item>
    <item>
      <title>Re: Hard VS. Soft Page Faults</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/hard-vs-soft-page-faults/m-p/3427308#M65675</link>
      <description>Chaim,&lt;BR /&gt;&lt;BR /&gt;Try to find out which process is doing it (MON PROC/TOPF). It might be that the working set is too small for what the process is doing, or that lots of image activations are done. WS: increase quotas; image activations: try installing the images.&lt;BR /&gt;&lt;BR /&gt;Database servers with bad WS quotas can pagefault like crazy.&lt;BR /&gt;&lt;BR /&gt;Wim&lt;BR /&gt;</description>
      <pubDate>Mon, 22 Nov 2004 07:01:26 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/hard-vs-soft-page-faults/m-p/3427308#M65675</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2004-11-22T07:01:26Z</dc:date>
    </item>
    <item>
      <title>Re: Hard VS. Soft Page Faults</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/hard-vs-soft-page-faults/m-p/3427309#M65676</link>
      <description>Chaim,&lt;BR /&gt;&lt;BR /&gt;If you need further help, then please attach some sample monitor output in a text file. Concrete numbers help in discussing this.&lt;BR /&gt;&lt;BR /&gt;What Wim refers to is notably Global Valid pagefaults. Those mean that a shared-library/shared-buffer page was in memory, but the process could not 'look' at it yet due to restricted WSQUOTA (or because it had never tried to look at it so far). Increasing WS will not directly increase physical memory usage for this, while it will reduce this soft pagefault overhead. (Indirectly, memory usage may creep up if, for example, SORT sees the extra WS and uses it.)&lt;BR /&gt;A similar reasoning applies to free/modified page list soft faults. Just increase WS!?&lt;BR /&gt;&lt;BR /&gt;If this is an Oracle DB application, you may want to consider RESERVED MEMORY, allowing the SGA to be mapped with huge pages and with no WS charge/usage.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Hope this helps some,&lt;BR /&gt;Hein.&lt;BR /&gt;</description>
      <pubDate>Mon, 22 Nov 2004 08:26:21 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/hard-vs-soft-page-faults/m-p/3427309#M65676</guid>
      <dc:creator>Hein van den Heuvel</dc:creator>
      <dc:date>2004-11-22T08:26:21Z</dc:date>
    </item>
    <item>
      <title>Re: Hard VS. Soft Page Faults</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/hard-vs-soft-page-faults/m-p/3427310#M65677</link>
      <description>If a program initializes (fills) a (large) array with values the wrong way, it can introduce a huge page fault rate as well - soft, mostly, but nevertheless choking the system for some time. "The wrong way" depends on the language. IIRC, FORTRAN stores arrays column by column, so filling one is best done column by column; doing it row by row may introduce a huge page fault rate (if the array is large enough, it may be one page fault for each cell).</description>
      <pubDate>Mon, 22 Nov 2004 09:14:03 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/hard-vs-soft-page-faults/m-p/3427310#M65677</guid>
      <dc:creator>Willem Grooters</dc:creator>
      <dc:date>2004-11-22T09:14:03Z</dc:date>
    </item>
    <item>
      <title>Re: Hard VS. Soft Page Faults</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/hard-vs-soft-page-faults/m-p/3427311#M65678</link>
      <description>Willem,&lt;BR /&gt;&lt;BR /&gt;just exactly YOUR example demonstrates where Working Set plays its game.&lt;BR /&gt;In the modern times of abundant cheap memory, whenever you have an application with your type of needs there is really only ONE good solution: give the process enough Working Set!&lt;BR /&gt;.. and if it is REAL big, mind SYSGEN WSMAX to be compliant.&lt;BR /&gt;And yes, during the initial fill of your array you will incur a lot of pagefaults.&lt;BR /&gt;They are DZRO faults (Demand ZeROed page), which are soft faults.&lt;BR /&gt;Any application that will still not fit probably warrants the old "Buy More Memory", more than it has ever been true in the past!&lt;BR /&gt;&lt;BR /&gt;HTH&lt;BR /&gt;&lt;BR /&gt;Cheers.&lt;BR /&gt;&lt;BR /&gt;Have one on me.&lt;BR /&gt;&lt;BR /&gt;Jan</description>
      <pubDate>Mon, 22 Nov 2004 10:06:34 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/hard-vs-soft-page-faults/m-p/3427311#M65678</guid>
      <dc:creator>Jan van den Ende</dc:creator>
      <dc:date>2004-11-22T10:06:34Z</dc:date>
    </item>
    <item>
      <title>Re: Hard VS. Soft Page Faults</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/hard-vs-soft-page-faults/m-p/3427312#M65679</link>
      <description>Jan,&lt;BR /&gt;&lt;BR /&gt;&amp;gt; .. and if it is REAL big, mind SYSGEN WSMAX to be compliant.&lt;BR /&gt;&lt;BR /&gt;may I remind you that WSMAX is usually MAXed on today's systems (since VMS V6.0) anyway...&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://forums1.itrc.hp.com/service/forums/questionanswer.do?threadId=658938" target="_blank"&gt;http://forums1.itrc.hp.com/service/forums/questionanswer.do?threadId=658938&lt;/A&gt;&lt;BR /&gt;starting:  Aug 8, 2004 04:43:14</description>
      <pubDate>Mon, 22 Nov 2004 10:47:27 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/hard-vs-soft-page-faults/m-p/3427312#M65679</guid>
      <dc:creator>Uwe Zessin</dc:creator>
      <dc:date>2004-11-22T10:47:27Z</dc:date>
    </item>
    <item>
      <title>Re: Hard VS. Soft Page Faults</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/hard-vs-soft-page-faults/m-p/3427313#M65680</link>
      <description>Uwe,&lt;BR /&gt;&lt;BR /&gt;that indeed is __USUALLY__, but NOT guaranteed...&lt;BR /&gt;e.g., if AUTOGEN is NOT used to set params, or the WSMAX calculation is overruled by MODPARAMS (maybe because of an old, not-removed setting)&lt;BR /&gt;&lt;BR /&gt;Cheers.&lt;BR /&gt;&lt;BR /&gt;Have one on me.&lt;BR /&gt;&lt;BR /&gt;Jan&lt;BR /&gt;</description>
      <pubDate>Mon, 22 Nov 2004 12:28:13 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/hard-vs-soft-page-faults/m-p/3427313#M65680</guid>
      <dc:creator>Jan van den Ende</dc:creator>
      <dc:date>2004-11-22T12:28:13Z</dc:date>
    </item>
    <item>
      <title>Re: Hard VS. Soft Page Faults</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/hard-vs-soft-page-faults/m-p/3427314#M65681</link>
      <description>Chaim,&lt;BR /&gt;&lt;BR /&gt;  For detailed page fault stats, use MONITOR PAGE. As Homi pointed out, the ones of interest ("soft faults") are the 4 lines after the first 5:&lt;BR /&gt;&lt;BR /&gt;Free List Fault Rate&lt;BR /&gt;Modified List Fault Rate&lt;BR /&gt;Demand Zero Fault Rate&lt;BR /&gt;Global Valid Fault Rate&lt;BR /&gt;&lt;BR /&gt;  Generally speaking, if you're seeing significant Free List, Modified List or Global Valid faults, you can probably reduce (or eliminate) them by increasing working set sizes, WITHOUT any effect on real memory consumption. That's because they represent page pointers being moved around in memory (the pages themselves don't move). Unless your numbers are very high, you're unlikely to see a huge difference in performance, because these faults are very "cheap" - they're really just flipping a few pointers (but then, avoiding a fault costs ZERO, so there is a benefit in tuning them away).&lt;BR /&gt;&lt;BR /&gt;  That leaves Demand Zero faults. A high DZRO fault rate is usually an indication of a high image activation rate. If you can find the cause and fix it, that may give you a performance boost. For example, calling a program in a DCL loop, feeding it one file name at a time. You may be able to modify the program to accept a list of file names - in other words, move the loop to inside the program.&lt;BR /&gt;</description>
      <pubDate>Mon, 22 Nov 2004 16:22:19 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/hard-vs-soft-page-faults/m-p/3427314#M65681</guid>
      <dc:creator>John Gillings</dc:creator>
      <dc:date>2004-11-22T16:22:19Z</dc:date>
    </item>
    <item>
      <title>Re: Hard VS. Soft Page Faults</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/hard-vs-soft-page-faults/m-p/3427315#M65682</link>
      <description>Homi, on MON PAGE, line 3, Page Read I/O Rate is the hard page faults.&lt;BR /&gt;Chaim,&lt;BR /&gt;1. If a process has a lot of page faults and the pages in the working set are more than WSDEF and WSQUOTA, the process will benefit from larger WSDEF and WSQUOTA. I use a modified version of DEC's WORKSET.COM (attached) to measure page faults before and after the change. I start by increasing WSDEF and WSQUOTA for the processes of this type that are page faulting the most.&lt;BR /&gt;NOTE: If the process has a working set smaller than WSQUOTA, increasing WSQUOTA won't help.&lt;BR /&gt;2. If you have a high demand zero fault rate, on Alphas, you can increase the SYSGEN parameter MIN_ZERO_LIST_HI. "On AXP systems, ZERO_LIST_HI is the maximum number of pages zeroed and put on the zeroed page list. This list is used as a cache of pages containing all zeros, which improves the performance of allocating such pages." This can be helpful if you have a lot of image activations, since the system can zero the pages before it needs them. I assume it does this when the processor would otherwise be idle.&lt;BR /&gt;NOTE: I've found that increasing MIN_ZERO_LIST_HI reduces demand zero soft faults.&lt;BR /&gt;Lawrence&lt;BR /&gt;</description>
      <pubDate>Mon, 22 Nov 2004 18:26:40 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/hard-vs-soft-page-faults/m-p/3427315#M65682</guid>
      <dc:creator>Lawrence Czlapinski</dc:creator>
      <dc:date>2004-11-22T18:26:40Z</dc:date>
    </item>
    <item>
      <title>Re: Hard VS. Soft Page Faults</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/hard-vs-soft-page-faults/m-p/3427316#M65683</link>
      <description>Lawrence,&lt;BR /&gt;&lt;BR /&gt;&amp;gt;NOTE: I've found that increasing MIN_ZERO_LIST_HI reduces demand zero soft faults.&lt;BR /&gt;&lt;BR /&gt; ?!? That sounds very unlikely. For a given workload, the number of DZRO faults should be invariant. Consider activating an image with an array in DZRO pages. The program then traverses the array and exits. The number of faults may vary, but the number of DZRO faults must ALWAYS be the same, by definition.&lt;BR /&gt;&lt;BR /&gt;  What *might* change is the net impact of the DZRO faults. Since you keep more pages on the zero list, there are likely to be fewer cases where a process needs to wait for a zero page to be created. That may result in the MON PAGE stats being reported differently. The only way to reduce demand zero faults is to not request them! They can't be "tuned" away.&lt;BR /&gt;&lt;BR /&gt;  The downside of increasing ZERO_LIST_HI is that the pages being zeroed need to come from the free list. Once a page has been zeroed, it's no longer available to be faulted back in from the free list. If the page is requested again, the data needs to be read from somewhere, requiring I/O. If that's happening, the negative performance impact would be FAR higher than keeping ZERO_LIST_HI low.&lt;BR /&gt;&lt;BR /&gt;  I'd recommend increasing working sets to reduce or eliminate Free List and Modified Page List faults BEFORE adjusting ZERO_LIST_HI.</description>
      <pubDate>Mon, 22 Nov 2004 23:04:14 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/hard-vs-soft-page-faults/m-p/3427316#M65683</guid>
      <dc:creator>John Gillings</dc:creator>
      <dc:date>2004-11-22T23:04:14Z</dc:date>
    </item>
    <item>
      <title>Re: Hard VS. Soft Page Faults</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/hard-vs-soft-page-faults/m-p/3427317#M65684</link>
      <description>Lawrence,&lt;BR /&gt;&lt;BR /&gt;I think the Page Read I/O Rate is the rate of I/Os resulting from hard faults, because more than one page (a cluster) is read on a hard page fault occurrence.&lt;BR /&gt;&lt;BR /&gt;The ratio between the Page Read Rate and the Page Read I/O Rate depends on how many pages are read per I/O. This depends on the SYSGEN parameter PFCDEFAULT and/or the page fault cluster size of the image/file section.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Tue, 23 Nov 2004 03:52:19 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/hard-vs-soft-page-faults/m-p/3427317#M65684</guid>
      <dc:creator>faris_3</dc:creator>
      <dc:date>2004-11-23T03:52:19Z</dc:date>
    </item>
  </channel>
</rss>

