<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: heavy paging in Operating System - OpenVMS</title>
    <link>https://community.hpe.com/t5/operating-system-openvms/heavy-paging/m-p/3985053#M83285</link>
    <description>&amp;gt;&amp;gt; o would putting up decram (license cost?) and putting a pagefile&lt;BR /&gt;on the ram disk work? (we have 2gig of memory.)&lt;BR /&gt;&lt;BR /&gt;That remark might just qualify for a WTF entry.&lt;BR /&gt;Yeah, WTF suggests 'What The F*&amp;amp;^', but it is spelled as:&lt;BR /&gt;&lt;A href="http://worsethanfailure.com/Default.aspx" target="_blank"&gt;http://worsethanfailure.com/Default.aspx&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;If you have the memory, just allow the process to use it directly! (WS quotas)&lt;BR /&gt;&lt;BR /&gt;Anyway, I understand that through running out of pgflquota you learned that paging might be an issue, but it does NOT appear to be the defining number for the performance here.&lt;BR /&gt;&lt;BR /&gt;The accounting data, which unfortunately did not also contain the elapsed/CPU time, strongly suggests that DIRECT IO defines the performance. 16 million IOs in 33 hours is about 136 IO/sec when evenly spread.&lt;BR /&gt;If those are truly random, with a single-stream driver (1 batch job), then that's about all you will get, no matter how many disks there are behind it. This is only the case if they are truly random, new READ IOs, meaning no IO cache has the data and none of the many disks is any closer to the target than any other.&lt;BR /&gt;Like you, I expect your storage system to perform better, but this may be all there is in the worst case.&lt;BR /&gt;&lt;BR /&gt;So now back to the test system.&lt;BR /&gt;What do the accounting numbers look like there?&lt;BR /&gt;&lt;BR /&gt;Does it have EXACTLY the same database?&lt;BR /&gt;If it is 'very close', does it have exactly the same indexes defined?&lt;BR /&gt;Do both systems have much the same buffer pool defined for RDB?&lt;BR /&gt;&lt;BR /&gt;If you want to make a real impact on the performance of this job then you probably need to look at RDB settings and query tuning, not at OpenVMS tweaks.&lt;BR /&gt;&lt;BR /&gt;Hope this helps some,&lt;BR /&gt;Hein van den Heuvel (at gmail dot com)&lt;BR /&gt;HvdH Performance Consulting&lt;BR /&gt;</description>
    <pubDate>Thu, 19 Apr 2007 13:33:52 GMT</pubDate>
    <dc:creator>Hein van den Heuvel</dc:creator>
    <dc:date>2007-04-19T13:33:52Z</dc:date>
    <item>
      <title>heavy paging</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/heavy-paging/m-p/3985043#M83275</link>
      <description>We have a remote production machine with a large RDB job that&lt;BR /&gt;runs weekends and that ran out of pgflquota. I upped the&lt;BR /&gt;pgflquota and wired in a secondary pagefile and we now&lt;BR /&gt;complete. The problem is the job runs 33 hours and we&lt;BR /&gt;need to shorten it. We have a test system here where I put&lt;BR /&gt;pagefiles on stripesets, which took 9+ hours off the job.&lt;BR /&gt;&lt;BR /&gt;o If one adds more and more pagefiles, does the paging IO for a&lt;BR /&gt;  *single* process get distributed across them?&lt;BR /&gt;&lt;BR /&gt;o would putting up decram (license cost?) and putting a pagefile&lt;BR /&gt;  on the ram disk work? (we have 2gig of memory.)&lt;BR /&gt;&lt;BR /&gt;o Any other ideas on how to reduce the run time?&lt;BR /&gt;&lt;BR /&gt;tx for any help Dean - data below&lt;BR /&gt;&lt;BR /&gt;account quotas..&lt;BR /&gt;&lt;BR /&gt;Maxjobs:         0  Fillm:       350  Bytlm:       300000&lt;BR /&gt;Maxacctjobs:     0  Shrfillm:      0  Pbytlm:           0&lt;BR /&gt;Maxdetach:       0  BIOlm:      1000  JTquota:      40960&lt;BR /&gt;Prclm:          40  DIOlm:      1000  WSdef:        16384&lt;BR /&gt;Prio:            4  ASTlm:      1000  WSquo:        32767&lt;BR /&gt;Queprio:         4  TQElm:      1000  WSextent:     64000&lt;BR /&gt;CPU:        (none)  Enqlm:     18000  Pgflquo:     990000&lt;BR /&gt;&lt;BR /&gt;system pagefiles...&lt;BR /&gt;&lt;BR /&gt;  DISK$ALPHA_V72_1:[SYS10.SYSEXE]PAGEFILE.SYS&lt;BR /&gt;                                             1056768      543712     1056768&lt;BR /&gt;  DISK$DEAN_TESTPG:[ENGDS2_PAGE]PAGEFILE2.SYS;1&lt;BR /&gt;                                             1056640      986144     1056640&lt;BR /&gt;some sysgen params...&lt;BR /&gt;&lt;BR /&gt;WSMAX                      786432       4096      1024    8388608 Pagelets&lt;BR /&gt; internal value             49152        256        64     524288 Pages&lt;BR /&gt;NPAGEDYN                  9494528    1048576    163840         -1 Bytes&lt;BR /&gt;PAGEDYN                   4980736     524288     65536         -1 Bytes&lt;BR /&gt;QUANTUM                        20         20         2      32767 10Ms       D&lt;BR /&gt;FILE_CACHE                      0          0         0        100 Percent&lt;BR /&gt;S2_SIZE                         0          0         0         -1 MBytes&lt;BR /&gt;PFRATL                          0          0         0         -1 Flts/10Sec D&lt;BR /&gt;PFRATH                          8          8         0         -1 Flts/10Sec D&lt;BR /&gt;WSINC                        2400       2400         0         -1 Pagelets   D&lt;BR /&gt; internal value               150        150         0         -1 Pages      D&lt;BR /&gt;WSDEC                        4000       4000         0         -1 Pagelets   D&lt;BR /&gt; internal value               250        250         0         -1 Pages      D&lt;BR /&gt;FREELIM                       473         32        16         -1 Pages&lt;BR /&gt;FREEGOAL                     1812        200        16         -1 Pages      D&lt;BR /&gt;&lt;BR /&gt;from accounting of job quotas...&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Page faults:            36664        Direct IO:           16237388&lt;BR /&gt;Page fault reads:         561        Buffered IO:             2053&lt;BR /&gt;Peak working set:      535072        Volumes mounted:            0&lt;BR /&gt;Peak page file:        728176        Images executed:           31&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Thu, 19 Apr 2007 12:28:23 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/heavy-paging/m-p/3985043#M83275</guid>
      <dc:creator>Dean McGorrill</dc:creator>
      <dc:date>2007-04-19T12:28:23Z</dc:date>
    </item>
    <item>
      <title>Re: heavy paging</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/heavy-paging/m-p/3985044#M83276</link>
      <description>Raise the working sets, the working set default and quota.  Use sys$examples:working_set.com or Availability Manager to monitor these.  &lt;BR /&gt;&lt;BR /&gt;The other question is what versions of VMS and hardware are you using?  Memory and disk configuration?  Other processing?  I'd say the account quotas are set very low for an AlphaServer; you probably have PQL_ parameters overriding these.  What else is happening on the system?  Compare the two systems.  &lt;BR /&gt;&lt;BR /&gt;Performance review really needs many details.  &lt;BR /&gt;&lt;BR /&gt;Andy&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Thu, 19 Apr 2007 12:42:56 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/heavy-paging/m-p/3985044#M83276</guid>
      <dc:creator>Andy Bustamante</dc:creator>
      <dc:date>2007-04-19T12:42:56Z</dc:date>
    </item>
    <item>
      <title>Re: heavy paging</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/heavy-paging/m-p/3985045#M83277</link>
      <description>You report heavy paging but show only 36k page faults from accounting - lots of DIOs though. I'd look at trying to speed up the IOs either with caching (don't know what's available with RDB) or by locating the data on faster disks if possible. You may be able to eliminate most of the page faults by increasing the process WSEXTENT to some value close to the 535072 page peak working set (presuming that you've got a big chunk of that 2GB available).</description>
      <pubDate>Thu, 19 Apr 2007 13:08:16 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/heavy-paging/m-p/3985045#M83277</guid>
      <dc:creator>Jim_McKinney</dc:creator>
      <dc:date>2007-04-19T13:08:16Z</dc:date>
    </item>
    <item>
      <title>Re: heavy paging</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/heavy-paging/m-p/3985046#M83278</link>
      <description>Back up a kilometer or three here, and systematically evaluate the system and system activities based on the guidelines in the performance management manual, together with the tuning information in the documentation set.&lt;BR /&gt;&lt;BR /&gt;Look at the processor performance (we don't know what box and what version), look at the available physical memory usage, look at I/O caching and I/O rates, at details like when the last AUTOGEN was run, and then and only then start to look at specific behaviors such as page faulting.  &lt;BR /&gt;&lt;BR /&gt;Get a baseline configuration and baseline performance. You'll need this information, and you'll need it system-wide.  MONITOR recording or the T4 tools can be useful in gaining and maintaining this baseline.&lt;BR /&gt;&lt;BR /&gt;Once you have a baseline and once AUTOGEN has been turned loose -- sans constraints left in MODPARAMS.DAT -- then start to look at parameter adjustments.   Then you can use the baseline to determine if your changes have had the desired effect.&lt;BR /&gt;&lt;BR /&gt;If this is the working set, then you can look at what is needed to either increase the available working set, decrease competing requirements, add more memory, or replace the box.&lt;BR /&gt;&lt;BR /&gt;If this is I/O-bound, then there can be faster I/O widgets, faster I/O paths, or DECram or such.  (But realize that memory limitations can hammer your I/O rates due to overly-constrained cache sizes -- this stuff is all interconnected.)&lt;BR /&gt;&lt;BR /&gt;My traditional guesstimate is that hand-tuning will generally get you about a 10% improvement beyond what AUTOGEN, sane baseline process quota settings, and identifying and removing any massive bottlenecks will get you, and that tuning effort is often a waste of time.    More than a couple of days of time tuning (in aggregate) usually means it's time to upgrade, or to replace.&lt;BR /&gt;&lt;BR /&gt;Application tuning may or may not pay off.  
It really depends on what the application is doing, and the effort involved in addressing latent design-related bottlenecks.  (And these can be subtle, too, such as contention among the various FC SAN clients or out at the EVA or MSA storage controller.)&lt;BR /&gt;&lt;BR /&gt;Tuning, BTW, has little to do with lists of numbers.  It's all rates and relationships and trends and balances; about looking at the whole, and working toward identifying the limits.  One set of numbers tells little.&lt;BR /&gt;&lt;BR /&gt;Throwing hardware at the problem is usually (always?) cheaper than extended tuning efforts, in my experience.  Barring gross configuration errors and once AUTOGEN has been brought to bear with feedback over time, system tuning tends to be a delaying tactic. At best.&lt;BR /&gt;&lt;BR /&gt;And as for hardware upgrades, my laptop has 2GB.&lt;BR /&gt;&lt;BR /&gt;Stephen Hoffman&lt;BR /&gt;HoffmanLabs LLC&lt;BR /&gt;</description>
      <pubDate>Thu, 19 Apr 2007 13:09:12 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/heavy-paging/m-p/3985046#M83278</guid>
      <dc:creator>Hoff</dc:creator>
      <dc:date>2007-04-19T13:09:12Z</dc:date>
    </item>
    <item>
      <title>Re: heavy paging</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/heavy-paging/m-p/3985047#M83279</link>
      <description>Dean,&lt;BR /&gt;&lt;BR /&gt;First of all: &lt;BR /&gt;Why use disk space if you have physical memory to spare?&lt;BR /&gt;Raise WSMAX to be equal to physical memory.&lt;BR /&gt;And if you did that, there is no reason for a ramdisk: you have allowed the active processes to use the full physical memory, so why employ a detour to use part of that via a "disk" driver?&lt;BR /&gt;If that still does not help enough, you must revert to the old IBM cure:&lt;BR /&gt;"Buy more memory"&lt;BR /&gt;&lt;BR /&gt;as an aside (from your profile)&lt;BR /&gt;&amp;gt;&amp;gt;&amp;gt;&lt;BR /&gt; I have assigned points to   0  of   20  responses to my questions.  &lt;BR /&gt;&amp;lt;&amp;lt;&amp;lt;&lt;BR /&gt;Please review&lt;BR /&gt;&lt;A href="http://forums1.itrc.hp.com/service/forums/helptips.do?#33" target="_blank"&gt;http://forums1.itrc.hp.com/service/forums/helptips.do?#33&lt;/A&gt;&lt;BR /&gt;about the way to say thanks in these forums.&lt;BR /&gt;&lt;BR /&gt;Proost.&lt;BR /&gt;&lt;BR /&gt;Have one on me.&lt;BR /&gt;&lt;BR /&gt;jpe</description>
      <pubDate>Thu, 19 Apr 2007 13:11:56 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/heavy-paging/m-p/3985047#M83279</guid>
      <dc:creator>Jan van den Ende</dc:creator>
      <dc:date>2007-04-19T13:11:56Z</dc:date>
    </item>
    <item>
      <title>Re: heavy paging</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/heavy-paging/m-p/3985048#M83280</link>
      <description>Yes a process can use multiple page files but why not just increase the WSDEF,WSQUOTA,WSEXTENT of that job. If you have the memory free for a DECram then use it for process working sets instead and avoid the paging overhead.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Thu, 19 Apr 2007 13:12:03 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/heavy-paging/m-p/3985048#M83280</guid>
      <dc:creator>Ian Miller.</dc:creator>
      <dc:date>2007-04-19T13:12:03Z</dc:date>
    </item>
    <item>
      <title>Re: heavy paging</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/heavy-paging/m-p/3985049#M83281</link>
      <description>Andy,&lt;BR /&gt;&lt;BR /&gt;the remote and test system are configured &lt;BR /&gt;the same: AlphaServer DS20 500 MHz with&lt;BR /&gt;2gb memory, hsz70 controller. on weekends&lt;BR /&gt;the systems are dedicated to this job. PQLs:&lt;BR /&gt;&lt;BR /&gt;PQL_DWSDEFAULT               5936&lt;BR /&gt;PQL_MWSDEFAULT               5936&lt;BR /&gt;PQL_DWSQUOTA                11872&lt;BR /&gt;PQL_MWSQUOTA                11872&lt;BR /&gt;PQL_DWSEXTENT              786432&lt;BR /&gt;PQL_MWSEXTENT              786432&lt;BR /&gt;&lt;BR /&gt;the disk presented to the alpha is &lt;BR /&gt;mirrored COMPAQ BD009122C6 disks, 10k rpm&lt;BR /&gt;9.1 gig drives.&lt;BR /&gt;</description>
      <pubDate>Thu, 19 Apr 2007 13:13:28 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/heavy-paging/m-p/3985049#M83281</guid>
      <dc:creator>Dean McGorrill</dc:creator>
      <dc:date>2007-04-19T13:13:28Z</dc:date>
    </item>
    <item>
      <title>Re: heavy paging</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/heavy-paging/m-p/3985050#M83282</link>
      <description>What is the RDB job you mention actually doing? What are your settings for database global buffers? Direct I/O looks more of an issue than paging.</description>
      <pubDate>Thu, 19 Apr 2007 13:14:16 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/heavy-paging/m-p/3985050#M83282</guid>
      <dc:creator>Martin Hughes</dc:creator>
      <dc:date>2007-04-19T13:14:16Z</dc:date>
    </item>
    <item>
      <title>Re: heavy paging</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/heavy-paging/m-p/3985051#M83283</link>
      <description>Also, are these hard page faults, and is your process spending a lot of time in PFW state?</description>
      <pubDate>Thu, 19 Apr 2007 13:29:50 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/heavy-paging/m-p/3985051#M83283</guid>
      <dc:creator>Martin Hughes</dc:creator>
      <dc:date>2007-04-19T13:29:50Z</dc:date>
    </item>
    <item>
      <title>Re: heavy paging</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/heavy-paging/m-p/3985052#M83284</link>
      <description>tx all,&lt;BR /&gt;     I will start with the WS quotas (I thought 64k&lt;BR /&gt;was the max). jpe, points granted, tx. An upgrade&lt;BR /&gt;is not possible, but I'm working on mgmt.&lt;BR /&gt;&lt;BR /&gt;Dean&lt;BR /&gt;</description>
      <pubDate>Thu, 19 Apr 2007 13:31:12 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/heavy-paging/m-p/3985052#M83284</guid>
      <dc:creator>Dean McGorrill</dc:creator>
      <dc:date>2007-04-19T13:31:12Z</dc:date>
    </item>
    <item>
      <title>Re: heavy paging</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/heavy-paging/m-p/3985053#M83285</link>
      <description>&amp;gt;&amp;gt; o would putting up decram (license cost?) and putting a pagefile&lt;BR /&gt;on the ram disk work? (we have 2gig of memory.)&lt;BR /&gt;&lt;BR /&gt;That remark might just qualify for a WTF entry.&lt;BR /&gt;Yeah, WTF suggests 'What The F*&amp;amp;^', but it is spelled as:&lt;BR /&gt;&lt;A href="http://worsethanfailure.com/Default.aspx" target="_blank"&gt;http://worsethanfailure.com/Default.aspx&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;If you have the memory, just allow the process to use it directly! (WS quotas)&lt;BR /&gt;&lt;BR /&gt;Anyway, I understand that through running out of pgflquota you learned that paging might be an issue, but it does NOT appear to be the defining number for the performance here.&lt;BR /&gt;&lt;BR /&gt;The accounting data, which unfortunately did not also contain the elapsed/CPU time, strongly suggests that DIRECT IO defines the performance. 16 million IOs in 33 hours is about 136 IO/sec when evenly spread.&lt;BR /&gt;If those are truly random, with a single-stream driver (1 batch job), then that's about all you will get, no matter how many disks there are behind it. This is only the case if they are truly random, new READ IOs, meaning no IO cache has the data and none of the many disks is any closer to the target than any other.&lt;BR /&gt;Like you, I expect your storage system to perform better, but this may be all there is in the worst case.&lt;BR /&gt;&lt;BR /&gt;So now back to the test system.&lt;BR /&gt;What do the accounting numbers look like there?&lt;BR /&gt;&lt;BR /&gt;Does it have EXACTLY the same database?&lt;BR /&gt;If it is 'very close', does it have exactly the same indexes defined?&lt;BR /&gt;Do both systems have much the same buffer pool defined for RDB?&lt;BR /&gt;&lt;BR /&gt;If you want to make a real impact on the performance of this job then you probably need to look at RDB settings and query tuning, not at OpenVMS tweaks.&lt;BR /&gt;&lt;BR /&gt;Hope this helps some,&lt;BR /&gt;Hein van den Heuvel (at gmail dot com)&lt;BR /&gt;HvdH Performance Consulting&lt;BR /&gt;</description>
      <pubDate>Thu, 19 Apr 2007 13:33:52 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/heavy-paging/m-p/3985053#M83285</guid>
      <dc:creator>Hein van den Heuvel</dc:creator>
      <dc:date>2007-04-19T13:33:52Z</dc:date>
    </item>
    <item>
      <title>Re: heavy paging</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/heavy-paging/m-p/3985054#M83286</link>
      <description>Dean,&lt;BR /&gt;&lt;BR /&gt;I concur with Hoff. My standard recommendation to clients is that if paging is perceived to be the problem, something else is actually the problem.&lt;BR /&gt;&lt;BR /&gt;Certainly, increasing the working set limits (and the corresponding page file quotas) to values more in line with physical memory is a good idea.&lt;BR /&gt;&lt;BR /&gt;Running data collection using T4 is always a good idea. Detailed review of what this data shows is also a sound idea. (Disclosure: We do perform this service for clients).&lt;BR /&gt;&lt;BR /&gt;- Bob Gezelter, &lt;A href="http://www.rlgsc.com" target="_blank"&gt;http://www.rlgsc.com&lt;/A&gt;</description>
      <pubDate>Thu, 19 Apr 2007 13:36:28 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/heavy-paging/m-p/3985054#M83286</guid>
      <dc:creator>Robert Gezelter</dc:creator>
      <dc:date>2007-04-19T13:36:28Z</dc:date>
    </item>
    <item>
      <title>Re: heavy paging</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/heavy-paging/m-p/3985055#M83287</link>
      <description>Dean,&lt;BR /&gt;&lt;BR /&gt;  On a 2GB system WSEXTENT=64000 is *tiny*. That's only 31MB. Less than most people's wristwatches. Fortunately AUTOGEN has overridden WSEXTENT to something more reasonable.&lt;BR /&gt;&lt;BR /&gt;Check the VIRTPEAK for the process to get an idea of how large it wants to be.&lt;BR /&gt;&lt;BR /&gt;However, even with low WSEXTENTs, OpenVMS is quite good at handling large virtual address spaces efficiently. In a recent capacity test on a system with 4GB and WSMAX at 1.5GB, a process with 2.5GB of virtual address space, reading through it all several times, sustained a (soft) fault rate of &amp;gt;100,000 per SECOND for more than 5 minutes. Amazing! They were all modified list faults; in effect OpenVMS was using the modified page list as an extension of the working set of the process. In comparison, we were experiencing double the TOTAL number of page faults of your entire 33-hour job every second!&lt;BR /&gt;&lt;BR /&gt;Soft faults are cheap. Eliminating them is easy (just increase process working sets), but it is unlikely to return a significant performance improvement.&lt;BR /&gt;&lt;BR /&gt;&amp;gt;put a pagefile on the ram disk work? &lt;BR /&gt;&amp;gt;(we have 2gig of memory.)&lt;BR /&gt;&lt;BR /&gt;  Direct answer - NO! This doesn't make sense! The idea of a page file is a place to put stuff that doesn't fit in physical memory. Putting the pagefile itself in physical memory is like having too much stuff in your garage, so you build a shed INSIDE the garage to put the excess in. Can you see why that won't help?&lt;BR /&gt;&lt;BR /&gt;  What MIGHT work would be to build a RAM disk and put the DATA FILES on it, to reduce the cost of all those direct I/Os.&lt;BR /&gt;&lt;BR /&gt;  The other consideration: you haven't said how much CPU time the job took. I can't see how those paging and I/O stats can account for 33 hours. The CPU usage will give you a lower limit to the run time. 
If it's high, you should be looking at the code to see if there are faster algorithms to achieve what you're trying to do.&lt;BR /&gt;</description>
      <pubDate>Thu, 19 Apr 2007 19:40:12 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/heavy-paging/m-p/3985055#M83287</guid>
      <dc:creator>John Gillings</dc:creator>
      <dc:date>2007-04-19T19:40:12Z</dc:date>
    </item>
    <item>
      <title>Re: heavy paging</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/heavy-paging/m-p/3985056#M83288</link>
      <description>Hi John,&lt;BR /&gt;   it's been a dozen years since I worked with this stuff. I tuned up our build system&lt;BR /&gt;and then it was coding thereafter..&lt;BR /&gt;   the CPU was 5 hours and some change for&lt;BR /&gt;both the production and test system. From&lt;BR /&gt;accounting..&lt;BR /&gt;&lt;BR /&gt;Peak working set:      535072&lt;BR /&gt;&lt;BR /&gt;that looks like it blew past its wsextent&lt;BR /&gt;limit anyway. also from uaf, my account wsextent..&lt;BR /&gt;&lt;BR /&gt; WSextent:     16384&lt;BR /&gt;$ sho work&lt;BR /&gt;  Working Set (pagelets)  /Limit=5936  /Quota=16384  /Extent=786432&lt;BR /&gt;  Adjustment enabled      Authorized Quota=16384  Authorized Extent=786432&lt;BR /&gt;&lt;BR /&gt;it says I have what wsmax is set to. (?)&lt;BR /&gt;&lt;BR /&gt;&amp;gt;What MIGHT work would be to build a RAM disk and put the DATA FILES on it&lt;BR /&gt;&lt;BR /&gt;that's an idea! anyway I've upped the quotas&lt;BR /&gt;and raised wsinc for a test run. &lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Fri, 20 Apr 2007 11:18:55 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/heavy-paging/m-p/3985056#M83288</guid>
      <dc:creator>Dean McGorrill</dc:creator>
      <dc:date>2007-04-20T11:18:55Z</dc:date>
    </item>
    <item>
      <title>Re: heavy paging</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/heavy-paging/m-p/3985057#M83289</link>
      <description>Hello,&lt;BR /&gt;&lt;BR /&gt;It would be well worth your while getting some data using T4 - then you can see what's really going on in terms of IO, paging, file opens, locking, lock migration between nodes, network traffic etc.&lt;BR /&gt;&lt;BR /&gt;Can you provide some configuration data too - machine type, VMS version, disc subsystem etc., please?&lt;BR /&gt;&lt;BR /&gt;It's probably worth looking at XFC usage as well - 2Gbytes isn't that much memory, so with some RDB tuning and some VMS tuning, combined with extra memory - you may see a decent difference. It all depends on the workload and how the RDB job functions.&lt;BR /&gt;&lt;BR /&gt;You may find that some small changes to the way the RDB job is written can provide some big changes. Sometimes relatively small changes to code can provide big wins. Data from something like T4 will give you a starting point for a thorough investigation.&lt;BR /&gt;&lt;BR /&gt;Cheers, Colin (&lt;A href="http://www.xdelta.co.uk" target="_blank"&gt;http://www.xdelta.co.uk&lt;/A&gt;).&lt;BR /&gt;</description>
      <pubDate>Fri, 20 Apr 2007 11:27:02 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/heavy-paging/m-p/3985057#M83289</guid>
      <dc:creator>Colin Butcher</dc:creator>
      <dc:date>2007-04-20T11:27:02Z</dc:date>
    </item>
    <item>
      <title>Re: heavy paging</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/heavy-paging/m-p/3985058#M83290</link>
      <description>Dean,&lt;BR /&gt;&lt;BR /&gt;please consider using T4 to collect performance data on both systems and start collecting the data NOW, before you change lots of parameters.&lt;BR /&gt;&lt;BR /&gt;TLviz (the T4 data visualizer) contains marvellous features for comparing performance data in a before-after analysis.&lt;BR /&gt;&lt;BR /&gt;If the accounting data shown is from the ONLY process involved in this 'large RDB job', then the direct IOs seem to be the major factor. T4 will also give you system-wide performance data and you may be able to easily 'see' other factors influencing performance.&lt;BR /&gt;&lt;BR /&gt;Volker.</description>
      <pubDate>Fri, 20 Apr 2007 11:55:06 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/heavy-paging/m-p/3985058#M83290</guid>
      <dc:creator>Volker Halle</dc:creator>
      <dc:date>2007-04-20T11:55:06Z</dc:date>
    </item>
    <item>
      <title>Re: heavy paging</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/heavy-paging/m-p/3985059#M83291</link>
      <description>hi Colin, the cpu &amp;amp; disks are described a few posts ago.&lt;BR /&gt;&lt;BR /&gt;Volker,&lt;BR /&gt;     today is the first time I'm watching this running live, and it's not what I thought it was, paging, though it's using up pgflquota. Using the normal tools, monitor, sda etc. I've yet to see a PFW. It's all direct i/o as most have said. The WS increases only slowly, even&lt;BR /&gt;though I set wsinc to 48k (?) &lt;BR /&gt;&lt;BR /&gt;     they have about 20 files, all open on one&lt;BR /&gt;disk. I've caught a few that are busy&lt;BR /&gt;with sda sho proc/chan. is there a tool&lt;BR /&gt;to show what files are getting hit the&lt;BR /&gt;most? &lt;BR /&gt;&lt;BR /&gt;t4 sounds like a very useful tool. I found the kit, but on this old 7.2-1 system I don't think PRODUCT can read a compressed pcsi file. is there an uncompressed one out there?&lt;BR /&gt;If I find out the hot files, then splitting&lt;BR /&gt;them out to other spindles should help. &lt;BR /&gt;</description>
      <pubDate>Fri, 20 Apr 2007 13:13:18 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/heavy-paging/m-p/3985059#M83291</guid>
      <dc:creator>Dean McGorrill</dc:creator>
      <dc:date>2007-04-20T13:13:18Z</dc:date>
    </item>
    <item>
      <title>Re: heavy paging</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/heavy-paging/m-p/3985060#M83292</link>
      <description>&amp;gt; is there a tool to show what files are&lt;BR /&gt;&amp;gt; getting hit the most? &lt;BR /&gt;&lt;BR /&gt;You've got at least one - SDA.&lt;BR /&gt;&lt;BR /&gt;$ analyze/system&lt;BR /&gt;SDA&amp;gt; read sysdef&lt;BR /&gt;SDA&amp;gt; show proc/chan/id=xxpidxxx&lt;BR /&gt;SDA&amp;gt; ! use the addresses in the window column&lt;BR /&gt;SDA&amp;gt; ! and note the wcb$l_reads and wcb$l_writes&lt;BR /&gt;SDA&amp;gt; format/type=wcb xxwcbxxx</description>
      <pubDate>Fri, 20 Apr 2007 13:29:22 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/heavy-paging/m-p/3985060#M83292</guid>
      <dc:creator>Jim_McKinney</dc:creator>
      <dc:date>2007-04-20T13:29:22Z</dc:date>
    </item>
    <item>
      <title>Re: heavy paging</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/heavy-paging/m-p/3985061#M83293</link>
      <description>&lt;BR /&gt;Dean,&lt;BR /&gt;&lt;BR /&gt;You indicated it is an RDB application.&lt;BR /&gt;&lt;BR /&gt;So, for now, forget about tuning OpenVMS! Just ask RDB where it hurts!&lt;BR /&gt;It can tell you which files are busy, &lt;BR /&gt;which files are waited for most...&lt;BR /&gt;&lt;BR /&gt;Poke around with RMU.&lt;BR /&gt;Look at RMU&amp;gt; SHOW STAT/STALL&lt;BR /&gt;&lt;BR /&gt;Check the "Rdb7 Guide to Database Performance and Tuning".&lt;BR /&gt;&lt;BR /&gt;No point in speeding up an IO which should not be done in the first place!&lt;BR /&gt;&lt;BR /&gt;Give some more memory to the RDB caches!?&lt;BR /&gt;&lt;BR /&gt;Use SHOW MEM/CACH=(TOPQIO=20,VOLUME=...) for a simple, cheap, OS hotfile list.&lt;BR /&gt;... if you have XFC caching going.&lt;BR /&gt;&lt;BR /&gt;Or, like you did, do a MONI CLUS or MONI DISK/TOPQIO during the run and see the top busy disk(s). &lt;BR /&gt;Now SHOW DEV/FILE for a first impression... a file must be open to be busy!&lt;BR /&gt;&lt;BR /&gt;Having said that, if, as you seem to indicate, all the open files for the job are on a single disk, then you may want to address that first, even before learning more about the load.&lt;BR /&gt;&lt;BR /&gt;Personally, I like the SAME approach: Stripe And Mirror Everything. &lt;BR /&gt;'Don't worry your pretty head' about which file is busy or how to exactly balance the IO. Just brute-force spread it out if you can! This is trivial (default) on the EVA specifically, but straightforward on the HSZ70 as well, although you cannot go as wide.&lt;BR /&gt;&lt;BR /&gt;Another thing to look at is that HSZ.&lt;BR /&gt;Run the display there and watch which disks are being hit. This will also nicely give read/write ratios and HSZ cache information.&lt;BR /&gt;&lt;BR /&gt;hmmm.... are the HSZ batteries on the production system working properly? If not, the HSZ will disable the write-back caching and WRITE performance will suffer tremendously. That can easily explain several hours.&lt;BR /&gt;&lt;BR /&gt;Cheers,&lt;BR /&gt;&lt;BR /&gt;Hope this helps some,&lt;BR /&gt;Hein van den Heuvel (at gmail dot com)&lt;BR /&gt;HvdH Performance Consulting&lt;BR /&gt;</description>
      <pubDate>Fri, 20 Apr 2007 13:58:41 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/heavy-paging/m-p/3985061#M83293</guid>
      <dc:creator>Hein van den Heuvel</dc:creator>
      <dc:date>2007-04-20T13:58:41Z</dc:date>
    </item>
    <item>
      <title>Re: heavy paging</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/heavy-paging/m-p/3985062#M83294</link>
      <description>tx Jim,&lt;BR /&gt;     hot files identified.&lt;BR /&gt;tx Hein,&lt;BR /&gt;      yes the cache batts are good and&lt;BR /&gt;I will chat with our dba about rdb tuning.&lt;BR /&gt;&lt;BR /&gt;dean</description>
      <pubDate>Fri, 20 Apr 2007 17:01:36 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/heavy-paging/m-p/3985062#M83294</guid>
      <dc:creator>Dean McGorrill</dc:creator>
      <dc:date>2007-04-20T17:01:36Z</dc:date>
    </item>
  </channel>
</rss>