09-26-2005 09:14 AM
Excessive Hard Faulting
My AlphaServer has excessive hard faulting, probably caused by a page cache that is too small.
Last time, I increased the secondary page cache by raising the values of MPW_HILIMIT, MPW_THRESH and MPW_WAITLIMIT.
See the attachment.
But I still have the same problem: excessive hard faulting.
A rough guideline is to give the page cache between 4 and 12 percent of the memory usable by processes, the smaller figure applying to large memory configurations.
How can I work out the best value for my system (AlphaServer 8400 with OpenVMS 7.3-2, 6 CPUs and 12 GB RAM)?
Thanks
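The 4-to-12-percent guideline can be turned into concrete page counts. A minimal sketch, assuming the Alpha's 8 KB page size and treating the full 12 GB as usable (the real usable figure will be somewhat lower once fixed system memory is subtracted):

```python
# Rough page-cache sizing from the 4-12% guideline.
# Assumption: 8 KB pages (Alpha) and all 12 GB counted as usable,
# which slightly overstates the real figure.
PAGE_BYTES = 8192  # Alpha page size

def page_cache_range(mem_gb, page_bytes=PAGE_BYTES):
    """Return (low, high) page counts for 4% and 12% of memory."""
    total = mem_gb * 1024**3
    low = total * 4 // 100 // page_bytes
    high = total * 12 // 100 // page_bytes
    return low, high

low, high = page_cache_range(12)
print(low, high)  # roughly 63k to 189k pages
```

For a large-memory configuration like this one the guideline points at the low end of that range, so something near the lower figure would be the starting point for MPW_HILIMIT; AUTOGEN feedback should still have the final say.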
09-26-2005 09:26 AM
Re: Excessive Hard Faulting
09-26-2005 10:01 AM
Re: Excessive Hard Faulting
AUTOGEN was the method I used.
AUTOGEN adjusted the values, and I then increased them by a further 25%.
But I still have the same problem.
What I want to know is how to calculate better values for these parameters.
09-26-2005 10:33 AM
Re: Excessive Hard Faulting
Run @SYS$EXAMPLES:WORKING_SET for one method of monitoring this.
The best value for your system depends on what the users or the application are doing.
Andy
09-26-2005 03:46 PM
Re: Excessive Hard Faulting
Excessive hard page faults suggest either a severe physical memory shortage, or excessive image activation without the benefit of (installed) shared images.
If one fiddles with the MPW values, then more, or fewer, pages get flushed out and end up on the free list... from where they will soft-fault back in. Not hard-fault.
Of course it could also be application design. For example, if one maps a 10 GB file on a 4 GB system and then walks that file, clearly hard page faults will happen, as requested.
By the way... I failed to see the attachment you mentioned. Try that again?
hth,
Hein.
09-26-2005 09:01 PM
Re: Excessive Hard Faulting
Are they starting big executables every second? Are the executables installed?
Can't the processes stay in the executable?
Do SHOW MEM/CACHE=FILE=dev:*>* and check whether the executables are well cached.
Wim
09-26-2005 09:11 PM
Re: Excessive Hard Faulting
But it can also be a matter of program design or coding. If you're running Java programs, you need LOTS of memory available (for each user), which might eventually cause heavy hard paging. HP recommends Unix-style settings (on a VMS system!) in http://h71000.www7.hp.com/ebusiness/optimizingsdkguide/optimizingsdkguide.html
OpenVMS Developer & System Manager
09-28-2005 05:53 AM
Re: Excessive Hard Faulting
For example, if you are using XFC and Oracle, set the Oracle databases to /NOCACHE.
Are you sharing as many images as possible?
09-28-2005 06:22 AM
Re: Excessive Hard Faulting
comarow,
We use ACMS applications and an Rdb database.
Thanks
09-28-2005 12:45 PM
Re: Excessive Hard Faulting
Hi Arthuro,
That is a very specific environment. I would of course welcome suggestions from readers here, but I expect you will need dedicated support.
It is a fun environment, and potentially a very well performing one, as a small number of (server) images can process many user requests. In fact, I would expect fewer page-fault issues in that environment than in 'normal' ones.
Those hard faults are (by definition) going to a file on disk. Your most critical mission is to find out which file(s) they are going to. You'll need some 'hot file' monitoring tool, an I/O trace, or something like that. A first drill-down could be MONI CLUS to spot the hot disk(s) and SHOW DEV /FILE for those disks.
Hope this helps a little,
Hein.
10-02-2005 05:29 AM
Re: Excessive Hard Faulting
Rdb can do row caching. Remember to set the Rdb files /NOCACHE if you use XFC.
In general, the obvious way to reduce hard faulting is to add memory: working sets grow larger, caches grow larger, and modified pages are flushed less often.
That will ensure hard faults are reduced.
Shared images also reduce hard faults. One way to identify images that should be shared is SHOW DEVICE /FILES: look for files open by multiple users. If those images are not installed shared, each user gets a private copy.
When you do MONITOR PAGE, where are most of your faults coming from?
10-03-2005 06:29 AM
Re: Excessive Hard Faulting
Lawrence
10-05-2005 01:55 AM
Re: Excessive Hard Faulting
Hi Lawrence,
Attached is the run log output from the WORKING_SET script.
10-05-2005 04:42 AM
Re: Excessive Hard Faulting
SQLSRV has 4 RMUEXEC71 processes started with a working-set size of 128000; do you really need 4 of them prestarted (MC SQLSRV_MANAGE71 to remove them)?
Regards, Kalle
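To put those numbers in perspective, here is a back-of-the-envelope sketch. It assumes the 128000 working-set figure is in 512-byte pagelets, the unit VMS working-set displays normally use; if it is actually in 8 KB pages the result would be 16 times larger:

```python
# Memory tied up by the prestarted RMUEXEC71 servers.
# Assumption: the 128000 working-set figure is in 512-byte pagelets
# (the usual unit in VMS working-set displays), not 8 KB pages.
PAGELET_BYTES = 512

servers = 4
ws_pagelets = 128000

total_mb = servers * ws_pagelets * PAGELET_BYTES / 2**20
print(total_mb)  # 250.0 MB across the four servers
```

Even on a 12 GB machine that is not huge, but if the servers sit idle, the pages they hold are pages the rest of the workload cannot use.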
10-05-2005 05:59 AM
Re: Excessive Hard Faulting
1. Measure and save your page-faulting rates before and after. Save your WORKSET.COM output. You can then check whether a user's processes have fewer page faults with the new values. This is a crude measurement; it won't mean much for a user with many image activations, but some users may use a single application continuously, or you might have application processes.
$MONITOR PAGE
I would monitor for a while and cut and paste several final screens into a word-processing or mail application, so that you have some idea of your current page-faulting levels.
2. Whether your processes stay around or come and go makes a big difference in page faults. When processes start you will get a lot of page faults.
3. $MONITOR PROC/TOPFAULT may or may not be helpful. It would tell you which processes are currently doing the most page faulting.
4. For processes where the page faults are high and Pages in Working Set is less than WSQUOTA, consider increasing WSDEF and WSQUOTA where possible. This increases the chances that shared global pages are already in memory. It will only help with hard page faults caused by too-small working sets, and works best for applications that stay around and run the same image continuously.
For interactive processes, you would modify the UAF (authorization file). For batch processes, you would need to check whether your batch queues have limits. For detached processes, you need to look at PQL_DWSDEF, PQL_MWSDEF, PQL_DWSQUO and PQL_MWSQUO and whether WSDEF and WSQUOTA are hard coded.
5. If there are applications that are used by multiple users, you want them installed shared, so that it is more likely the pages are already in memory.
6. Some images incur a lot of page faults unrelated to their working-set size; you probably won't be able to do anything about them. DCL procedures often page heavily because of image activations; compiled programs are usually more efficient. Java applications often page a lot, and DECwindows does a lot of page faulting too.
7. Having a high WSDEF and WSQUOTA doesn't usually hurt unless you are tight on memory. Still, you will have to use your judgment as to how much and how quickly you change things. I typically consider how much memory the process is getting now. Also consider the process's priority: if it's a high-priority process, by all means be generous; you want it to have the resources it needs and not be paging heavily. Some of your processes already have quite large working sets and could benefit from increased WSDEF and WSQUOTA.
8. It may take a while before enough processes that could benefit from the increased quotas are running with the new quotas.
Lawrence
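The sizing judgment in points 4 and 7 above can be sketched as a small helper. This is a hypothetical illustration, not a VMS utility: the function name and the 25% headroom figure are assumptions, the idea being to grow WSQUOTA modestly above a process's observed peak working set rather than guessing:

```python
# Hypothetical helper: suggest a new WSQUOTA from an observed peak
# working-set size, adding headroom but respecting an optional cap
# (e.g. WSMAX or a memory budget). The name and the 25% default
# headroom are illustrative assumptions, not VMS-defined values.
def suggest_wsquota(peak_pagelets, headroom=0.25, cap=None):
    suggestion = int(peak_pagelets * (1 + headroom))
    if cap is not None:
        suggestion = min(suggestion, cap)
    return suggestion

print(suggest_wsquota(100000))              # 125000
print(suggest_wsquota(100000, cap=120000))  # capped at 120000
```

The cap matters because, per point 7, generous quotas only stay harmless while memory is not tight.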
10-05-2005 09:36 PM
Re: Excessive Hard Faulting
For the processes that are doing the faulting, do they run many images or just one?
Purely Personal Opinion
10-06-2005 03:09 AM
Re: Excessive Hard Faulting
Ian,
Yes, it is in PSDC that I saw the report of excessive hard faulting... and also in Availability Manager.