<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: Excessive Hard Faulting in Operating System - OpenVMS</title>
    <link>https://community.hpe.com/t5/operating-system-openvms/excessive-hard-faulting/m-p/3635058#M71589</link>
    <description>Autogen?</description>
    <pubDate>Mon, 26 Sep 2005 16:26:20 GMT</pubDate>
    <dc:creator>Peter Quodling</dc:creator>
    <dc:date>2005-09-26T16:26:20Z</dc:date>
    <item>
      <title>Excessive Hard Faulting</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/excessive-hard-faulting/m-p/3635057#M71588</link>
      <description>Hello&lt;BR /&gt;&lt;BR /&gt;My AlphaServer has excessive hard faulting, probably caused by a page cache that is too small.&lt;BR /&gt;Last time, I increased the secondary page cache by raising the values of MPW_HILIMIT, MPW_THRESH and MPW_WAITLIMIT.&lt;BR /&gt;&lt;BR /&gt;See the attachment.&lt;BR /&gt;&lt;BR /&gt;But I still have the same problem: excessive hard faulting.&lt;BR /&gt;&lt;BR /&gt;A rough guideline is to provide between 4 and 12 percent of the memory usable by processes in the page &lt;BR /&gt;cache, the smaller figure being for large memory configurations.&lt;BR /&gt;How can I obtain that value, or the best value for my system (AlphaServer 8400 with OpenVMS 7.3-2, 6 CPUs and 12 GB RAM)?&lt;BR /&gt;&lt;BR /&gt;Thanks&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Mon, 26 Sep 2005 16:14:02 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/excessive-hard-faulting/m-p/3635057#M71588</guid>
      <dc:creator>sartur</dc:creator>
      <dc:date>2005-09-26T16:14:02Z</dc:date>
    </item>
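The parameters the original poster tuned can be inspected directly. A minimal DCL sketch, read-only throughout (SYSGEN SHOW and SHOW MEMORY change nothing), for checking the current modified-page-writer settings and physical memory state:

```dcl
$! Show the current values of the modified-page-writer parameters
$! mentioned in the post above (SYSGEN SHOW is read-only)
$ MCR SYSGEN
SYSGEN> SHOW MPW_HILIMIT
SYSGEN> SHOW MPW_THRESH
SYSGEN> SHOW MPW_WAITLIMIT
SYSGEN> EXIT
$! Free-list and modified-list sizes, plus total physical memory
$ SHOW MEMORY /PHYSICAL
```

Against the 4-12 percent guideline quoted in the post, a 12 GB machine is a large-memory configuration, so the target would sit toward the low end of that range.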
    <item>
      <title>Re: Excessive Hard Faulting</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/excessive-hard-faulting/m-p/3635058#M71589</link>
      <description>Autogen?</description>
      <pubDate>Mon, 26 Sep 2005 16:26:20 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/excessive-hard-faulting/m-p/3635058#M71589</guid>
      <dc:creator>Peter Quodling</dc:creator>
      <dc:date>2005-09-26T16:26:20Z</dc:date>
    </item>
    <item>
      <title>Re: Excessive Hard Faulting</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/excessive-hard-faulting/m-p/3635059#M71590</link>
      <description>Peter,&lt;BR /&gt;&lt;BR /&gt;AUTOGEN was the method I used.&lt;BR /&gt;AUTOGEN adjusted the values, and in addition I increased them by 25%.&lt;BR /&gt;But the problem persisted.&lt;BR /&gt;What I want to know is how to calculate better values for these parameters.</description>
      <pubDate>Mon, 26 Sep 2005 17:01:18 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/excessive-hard-faulting/m-p/3635059#M71590</guid>
      <dc:creator>sartur</dc:creator>
      <dc:date>2005-09-26T17:01:18Z</dc:date>
    </item>
    <item>
      <title>Re: Excessive Hard Faulting</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/excessive-hard-faulting/m-p/3635060#M71591</link>
      <description>How much free memory do you have during these hard faults? How much memory is allocated to XFC? If you don't have a memory shortfall, then hard page faults could indicate that your process working sets are too small. See&lt;BR /&gt;&lt;BR /&gt;@sys$examples:working_set&lt;BR /&gt;&lt;BR /&gt;for one method of monitoring this.&lt;BR /&gt;&lt;BR /&gt;The best values for your system depend on what the users or applications are doing.&lt;BR /&gt;&lt;BR /&gt;Andy&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Mon, 26 Sep 2005 17:33:31 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/excessive-hard-faulting/m-p/3635060#M71591</guid>
      <dc:creator>Andy Bustamante</dc:creator>
      <dc:date>2005-09-26T17:33:31Z</dc:date>
    </item>
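The suggestions above can be combined into one short monitoring pass. A sketch of the stock DCL commands involved, all read-only:

```dcl
$! System-wide fault rates; the "Page Read I/O Rate" line is the
$! hard-fault rate (faults that have to go to disk)
$ MONITOR PAGE
$! Which processes are faulting the most right now
$ MONITOR PROCESSES /TOPFAULT
$! XFC allocation and hit statistics
$ SHOW MEMORY /CACHE
$! The per-process working-set report mentioned in the post
$ @SYS$EXAMPLES:WORKING_SET
```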
    <item>
      <title>Re: Excessive Hard Faulting</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/excessive-hard-faulting/m-p/3635061#M71592</link>
      <description>&lt;BR /&gt;Excessive hard page faults suggest either a severe physical memory shortage, or excessive image activation without the benefit of (installed) shared images.&lt;BR /&gt;&lt;BR /&gt;If one diddles with the MPW values, then more, or fewer, pages get flushed out and end up on the free list... from where they will soft-fault back in. Not hard.&lt;BR /&gt;&lt;BR /&gt;Of course, it could also be application design. For example, if one maps a 10 GB file on a 4 GB system and then walks that file, hard page faults will clearly happen, as requested.&lt;BR /&gt;&lt;BR /&gt;BTW... I failed to see the attachment you mentioned. Try that again?&lt;BR /&gt;&lt;BR /&gt;HTH,&lt;BR /&gt;&lt;BR /&gt;Hein.&lt;BR /&gt;</description>
      <pubDate>Mon, 26 Sep 2005 22:46:13 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/excessive-hard-faulting/m-p/3635061#M71592</guid>
      <dc:creator>Hein van den Heuvel</dc:creator>
      <dc:date>2005-09-26T22:46:13Z</dc:date>
    </item>
    <item>
      <title>Re: Excessive Hard Faulting</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/excessive-hard-faulting/m-p/3635062#M71593</link>
      <description>I would first find out which processes are doing the hard faults and check why they are doing them. E.g.:&lt;BR /&gt;Are they starting big executables every second? Are the executables installed?&lt;BR /&gt;Can't the process stay in the executable?&lt;BR /&gt;Do show mem/cac=file=dev:*&amp;gt;* and check whether the executables are well cached.&lt;BR /&gt;&lt;BR /&gt;Wim</description>
      <pubDate>Tue, 27 Sep 2005 04:01:47 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/excessive-hard-faulting/m-p/3635062#M71593</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2005-09-27T04:01:47Z</dc:date>
    </item>
    <item>
      <title>Re: Excessive Hard Faulting</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/excessive-hard-faulting/m-p/3635063#M71594</link>
      <description>It can of course be a matter of data size, program activation, process creation, file mapping...&lt;BR /&gt;But it can also be a matter of program design or coding. If you're using Java programs, you need LOTS of memory available (for each user), so that might eventually cause heavy hard paging. HP recommends Unix settings (on a VMS system!) in &lt;A href="http://h71000.www7.hp.com/ebusiness/optimizingsdkguide/optimizingsdkguide.html" target="_blank"&gt;http://h71000.www7.hp.com/ebusiness/optimizingsdkguide/optimizingsdkguide.html&lt;/A&gt;</description>
      <pubDate>Tue, 27 Sep 2005 04:11:47 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/excessive-hard-faulting/m-p/3635063#M71594</guid>
      <dc:creator>Willem Grooters</dc:creator>
      <dc:date>2005-09-27T04:11:47Z</dc:date>
    </item>
    <item>
      <title>Re: Excessive Hard Faulting</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/excessive-hard-faulting/m-p/3635064#M71595</link>
      <description>It would be nice to know what kind of applications you are using.&lt;BR /&gt;&lt;BR /&gt;For example, if you are using XFC and Oracle, set the Oracle Databases to /nocache.&lt;BR /&gt;&lt;BR /&gt;Are you sharing as many images as possible?</description>
      <pubDate>Wed, 28 Sep 2005 12:53:09 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/excessive-hard-faulting/m-p/3635064#M71595</guid>
      <dc:creator>comarow</dc:creator>
      <dc:date>2005-09-28T12:53:09Z</dc:date>
    </item>
    <item>
      <title>Re: Excessive Hard Faulting</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/excessive-hard-faulting/m-p/3635065#M71596</link>
      <description>&lt;BR /&gt;comarow,&lt;BR /&gt;&lt;BR /&gt;We use ACMS applications and an Rdb database.&lt;BR /&gt;&lt;BR /&gt;Thanks</description>
      <pubDate>Wed, 28 Sep 2005 13:22:04 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/excessive-hard-faulting/m-p/3635065#M71596</guid>
      <dc:creator>sartur</dc:creator>
      <dc:date>2005-09-28T13:22:04Z</dc:date>
    </item>
    <item>
      <title>Re: Excessive Hard Faulting</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/excessive-hard-faulting/m-p/3635066#M71597</link>
      <description>&lt;BR /&gt;Hi Arturo,&lt;BR /&gt;&lt;BR /&gt;That is a very specific environment. I would of course welcome suggestions from readers here, but expect you will need dedicated support.&lt;BR /&gt;&lt;BR /&gt;It is a fun environment, and potentially a very well-performing one, as a small number of (server) images can process many user requests. In fact, I would expect fewer page-fault issues in that environment than in 'normal' ones.&lt;BR /&gt;&lt;BR /&gt;Those hard faults are (by definition) going to a file on disk. Your most critical mission is to find out which file(s) they are going to. You'll need a 'hot file' monitoring tool, an I/O trace, or something like that. A first drill-down could be MONI CLUS to spot the hot disk(s), and SHOW DEV /FILE for those disks.&lt;BR /&gt;&lt;BR /&gt;Hope this helps a little,&lt;BR /&gt;Hein.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Wed, 28 Sep 2005 19:45:28 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/excessive-hard-faulting/m-p/3635066#M71597</guid>
      <dc:creator>Hein van den Heuvel</dc:creator>
      <dc:date>2005-09-28T19:45:28Z</dc:date>
    </item>
    <item>
      <title>Re: Excessive Hard Faulting</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/excessive-hard-faulting/m-p/3635067#M71598</link>
      <description>Thanks for the input.&lt;BR /&gt;&lt;BR /&gt;Rdb can do row caching. Remember to set the Rdb files /NOCACHE if you use XFC.&lt;BR /&gt;&lt;BR /&gt;In general, it is obvious: to reduce hard faulting, add memory. Working sets grow larger, caches grow larger, and modified pages are flushed less often.&lt;BR /&gt;&lt;BR /&gt;This will ensure hard faults are reduced.&lt;BR /&gt;&lt;BR /&gt;Shared images will reduce hard faults. One way to identify images that should be shared is SHOW DEV/FILES: look for files open by multiple users. If they are not installed shared, each user gets their own copy.&lt;BR /&gt;&lt;BR /&gt;When you do MONITOR PAGE, where are most of your faults coming from?</description>
      <pubDate>Sun, 02 Oct 2005 12:29:55 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/excessive-hard-faulting/m-p/3635067#M71598</guid>
      <dc:creator>comarow</dc:creator>
      <dc:date>2005-10-02T12:29:55Z</dc:date>
    </item>
    <item>
      <title>Re: Excessive Hard Faulting</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/excessive-hard-faulting/m-p/3635068#M71599</link>
      <description>Arturo S.: You need to look at which processes are page faulting a lot and figure out why they are faulting. If you have images that are used by a number of users, it may help to INSTALL the images /SHARE. Image activations cause a lot of hard page faults. As others have stated, your working-set defaults and working-set quotas may be too small for some processes. Attached is workset.txt, which can be renamed to workset.com on the Alpha. It will show the total page faults by process. If you could run it and attach the output, we may be able to give more specific advice.&lt;BR /&gt;Lawrence</description>
      <pubDate>Mon, 03 Oct 2005 13:29:34 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/excessive-hard-faulting/m-p/3635068#M71599</guid>
      <dc:creator>Lawrence Czlapinski</dc:creator>
      <dc:date>2005-10-03T13:29:34Z</dc:date>
    </item>
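Installing a multi-user image shared, as suggested in the posts above, is done with the INSTALL utility. A sketch where the image name APP_MAIN.EXE is purely illustrative:

```dcl
$! Check whether the image is already known to INSTALL
$ INSTALL LIST SYS$SYSTEM:APP_MAIN.EXE
$! Install it shared so all users fault in the same global pages;
$! /HEADER_RESIDENT also keeps the image header in memory
$ INSTALL ADD SYS$SYSTEM:APP_MAIN.EXE /OPEN /SHARED /HEADER_RESIDENT
```

Candidate images are the ones SHOW DEV/FILES reports as open by many processes at once.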
    <item>
      <title>Re: Excessive Hard Faulting</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/excessive-hard-faulting/m-p/3635069#M71600</link>
      <description>&lt;BR /&gt;Hi Lawrence,&lt;BR /&gt;&lt;BR /&gt;Attached is the run log output from the worksets script.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Wed, 05 Oct 2005 08:55:06 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/excessive-hard-faulting/m-p/3635069#M71600</guid>
      <dc:creator>sartur</dc:creator>
      <dc:date>2005-10-05T08:55:06Z</dc:date>
    </item>
    <item>
      <title>Re: Excessive Hard Faulting</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/excessive-hard-faulting/m-p/3635070#M71601</link>
      <description>At first glance, the WSDEF of 26000 pages seems very high.&lt;BR /&gt;&lt;BR /&gt;SQLSRV has 4 RMUEXEC71 processes started with a working-set size of 128000 pages; do you really need 4 of them prestarted (MC SQLSRV_MANAGE71 to remove them)?&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;regards, Kalle</description>
      <pubDate>Wed, 05 Oct 2005 11:42:17 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/excessive-hard-faulting/m-p/3635070#M71601</guid>
      <dc:creator>Karl Rohwedder</dc:creator>
      <dc:date>2005-10-05T11:42:17Z</dc:date>
    </item>
    <item>
      <title>Re: Excessive Hard Faulting</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/excessive-hard-faulting/m-p/3635071#M71602</link>
      <description>Arturo S.:&lt;BR /&gt;1. Measure and save your page faulting rates before and after. Save your WORKSET.COM output. You can then check whether a user's processes have fewer page faults with the new values. This is a crude measurement; it won't mean much for a user with many image activations, but some users may use a single application continuously, or you might have application processes.&lt;BR /&gt;$ MONITOR PAGE&lt;BR /&gt;I would monitor for a while and cut and paste several final screens into a word-processing or mail application, so that you have some idea of what your current page faulting is.&lt;BR /&gt;2. Whether your processes stay around or come and go makes a big difference in page faults. When processes start, you will get a lot of page faults.&lt;BR /&gt;3. $ MONITOR PROC/TOPFAULT may or may not be helpful. It tells you which processes are currently doing the most page faulting.&lt;BR /&gt;4. For processes where the page faults are high and Pages in Working Set is less than WSQUOTA, consider increasing WSDEF and WSQUOTA where possible. This increases the chance that shared global pages are already in memory. It will only help with hard page faults caused by too-small working sets, and it works best for applications that stay around and run the same image continuously.&lt;BR /&gt;For interactive processes, you would modify the UAF (authorization file). For batch processes, you would need to check whether your batch queues have limits. For detached processes, you need to look at PQL_DWSDEF, PQL_MWSDEF, PQL_DWSQUO and PQL_MWSQUO, and at whether WSDEF and WSQUOTA are hard-coded.&lt;BR /&gt;5. If there are applications used by multiple users, you want to have them installed shared, so that it is more likely that the pages are already in memory.&lt;BR /&gt;6. Some images get a lot of page faults that are unrelated to their working-set size. You probably won't be able to do anything about them. DCL procedures often do a lot of paging because of image activations; compiled programs are often more efficient. Java applications often do a lot of paging, and DECwindows does a lot of page faulting too.&lt;BR /&gt;7. Having a high WSDEF and WSQUOTA doesn't usually hurt unless you are tight on memory. Still, you will have to use your judgment as to how much and how quickly you change things. I typically consider how much memory the process is getting now. Also consider the process's priority: if it's a high-priority process, by all means be generous; you want it to have the resources it needs and not be paging a lot. Some of your processes have quite large Pages in Working Set counts and could benefit from increased WSDEF and WSQUOTA.&lt;BR /&gt;8. It may take a while for enough processes that could benefit from the increased quotas to be running with the new values.&lt;BR /&gt;Lawrence</description>
      <pubDate>Wed, 05 Oct 2005 12:59:31 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/excessive-hard-faulting/m-p/3635071#M71602</guid>
      <dc:creator>Lawrence Czlapinski</dc:creator>
      <dc:date>2005-10-05T12:59:31Z</dc:date>
    </item>
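For point 4 in the post above, the UAF change for an interactive user looks like the sketch below; the username APPUSER and the page counts are illustrative only, not recommendations:

```dcl
$! Inspect, then raise, a user's working-set values
$! (the change takes effect at the user's next login)
$ RUN SYS$SYSTEM:AUTHORIZE
UAF> SHOW APPUSER
UAF> MODIFY APPUSER /WSDEFAULT=4096 /WSQUOTA=16384 /WSEXTENT=65536
UAF> EXIT
$! Detached processes are bounded instead by the SYSGEN parameters
$! PQL_DWSDEF, PQL_MWSDEF, PQL_DWSQUO and PQL_MWSQUO
```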
    <item>
      <title>Re: Excessive Hard Faulting</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/excessive-hard-faulting/m-p/3635072#M71603</link>
      <description>I see you are running PSDC. Is that where you saw the report of excessive hard faulting?&lt;BR /&gt;&lt;BR /&gt;For the processes that are doing the faulting, do they run many images or just one?</description>
      <pubDate>Thu, 06 Oct 2005 04:36:42 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/excessive-hard-faulting/m-p/3635072#M71603</guid>
      <dc:creator>Ian Miller.</dc:creator>
      <dc:date>2005-10-06T04:36:42Z</dc:date>
    </item>
    <item>
      <title>Re: Excessive Hard Faulting</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/excessive-hard-faulting/m-p/3635073#M71604</link>
      <description>&lt;BR /&gt;Ian,&lt;BR /&gt;&lt;BR /&gt;Yes, it is in PSDC that I saw the report of excessive hard faulting... and also in Availability Manager.</description>
      <pubDate>Thu, 06 Oct 2005 10:09:36 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/excessive-hard-faulting/m-p/3635073#M71604</guid>
      <dc:creator>sartur</dc:creator>
      <dc:date>2005-10-06T10:09:36Z</dc:date>
    </item>
  </channel>
</rss>

