<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: poor performance after memory upgrade in Operating System - OpenVMS</title>
    <link>https://community.hpe.com/t5/operating-system-openvms/poor-performance-after-memory-upgrade/m-p/3282154#M64236</link>
    <description>Jan,&lt;BR /&gt;&lt;BR /&gt;Isn't it PFRATH that must be set to 0?&lt;BR /&gt;If the number of page faults per 100 ms is higher than this value (8), then you get WSINC (2400) pages extra. But that only happens every 0.1 second! And if you need e.g. 30,000 pages, you will need a whole second.&lt;BR /&gt;&lt;BR /&gt;Wim&lt;BR /&gt;&lt;BR /&gt;</description>
    <pubDate>Thu, 27 May 2004 10:32:31 GMT</pubDate>
    <dc:creator>Wim Van den Wyngaert</dc:creator>
    <dc:date>2004-05-27T10:32:31Z</dc:date>
    <item>
      <title>poor performance after memory upgrade</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/poor-performance-after-memory-upgrade/m-p/3282112#M64194</link>
      <description>Hi Guys,&lt;BR /&gt;&lt;BR /&gt;Upgraded my memory today, but it hasn't really improved as much as I thought it would. Had 4GB, now got 8GB with 4-way interleaving. I had hoped that the page faults would decrease significantly, but they haven't. Any ideas?&lt;BR /&gt;I will run AUTOGEN again tomorrow after 24 hours of use and see what happens.</description>
      <pubDate>Thu, 20 May 2004 10:13:48 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/poor-performance-after-memory-upgrade/m-p/3282112#M64194</guid>
      <dc:creator>Peter Clarke</dc:creator>
      <dc:date>2004-05-20T10:13:48Z</dc:date>
    </item>
    <item>
      <title>Re: poor performance after memory upgrade</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/poor-performance-after-memory-upgrade/m-p/3282113#M64195</link>
      <description>Peter,&lt;BR /&gt;you might have to increase process working set sizes to reduce page faulting. Without that, even with more main memory, you might just turn your 'hard' faults (to/from disk) into 'soft' faults (to/from memory). Soft faults are still better than hard faults.&lt;BR /&gt;&lt;BR /&gt;You will still have paging activity:&lt;BR /&gt;e.g. there is no special 'image loader' - images are brought into memory through normal demand paging.</description>
      <pubDate>Thu, 20 May 2004 11:09:15 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/poor-performance-after-memory-upgrade/m-p/3282113#M64195</guid>
      <dc:creator>Uwe Zessin</dc:creator>
      <dc:date>2004-05-20T11:09:15Z</dc:date>
    </item>
    <item>
      <title>Re: poor performance after memory upgrade</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/poor-performance-after-memory-upgrade/m-p/3282114#M64196</link>
      <description>How would I know how much to increase the process working sets by?&lt;BR /&gt;&lt;BR /&gt;The average page fault rate is currently 354.62.&lt;BR /&gt;And on average 22.50% of these are hard faults.&lt;BR /&gt;&lt;BR /&gt;Regards&lt;BR /&gt;Pete</description>
      <pubDate>Thu, 16 Sep 2004 09:18:50 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/poor-performance-after-memory-upgrade/m-p/3282114#M64196</guid>
      <dc:creator>Peter Clarke</dc:creator>
      <dc:date>2004-09-16T09:18:50Z</dc:date>
    </item>
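Before raising anyone's quotas, it helps to know which processes are actually faulting. A minimal DCL sketch (the interval is illustrative, and `pid` is a placeholder for a real process ID):

```
$ ! Rank processes by page-fault rate, refreshed every 5 seconds
$ MONITOR PROCESSES /TOPFAULT /INTERVAL=5
$ ! Then watch a suspect process's working set in real time
$ SHOW PROCESS /ID=pid /CONTINUOUS
```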
    <item>
      <title>Re: poor performance after memory upgrade</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/poor-performance-after-memory-upgrade/m-p/3282115#M64197</link>
      <description>I would depend on AUTOGEN to tune the params, and run it as often as possible before making any changes.&lt;BR /&gt;&lt;BR /&gt;regards&lt;BR /&gt;Mobeen</description>
      <pubDate>Thu, 20 May 2004 12:01:50 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/poor-performance-after-memory-upgrade/m-p/3282115#M64197</guid>
      <dc:creator>Mobeen_1</dc:creator>
      <dc:date>2004-05-20T12:01:50Z</dc:date>
    </item>
    <item>
      <title>Re: poor performance after memory upgrade</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/poor-performance-after-memory-upgrade/m-p/3282116#M64198</link>
      <description>Peter,&lt;BR /&gt;&lt;BR /&gt;your number of page faults almost certainly stays nearly the same after just adding more memory (assuming the same workload).&lt;BR /&gt;(What WILL change is the nature of the page faults, and there SHOULD be a performance gain.)&lt;BR /&gt;The first time any page is needed, of course it will generate a fault and have to be brought in from disk. If a page LEAVES a working set, initially it stays in memory and is added to the free page list, first in, first out. (Modified pages go to the modified page list, and from there may also land on the free page list.) VMS DOES keep track of them. After some time, all pages on the free list are pages that have been used before. Now if new pages need to be brought in, the oldest pages of the free list get overwritten with the new data, and only then are they truly gone from memory.&lt;BR /&gt;This becomes highly relevant when (the data of) a page that was moved to the free list is needed again. In that case, VMS knows it is still in memory, simply changes some pointers, and the page is in the requesting working set again! Just changing some pointers in various page tables is very, very much faster than reading the page from disk and then also adjusting an equal number of pointers!&lt;BR /&gt;The first is a 'hard' page fault, the second a 'soft' one.&lt;BR /&gt;Now, if you have more memory, many more pages stay in memory, so far fewer page faults are hard.&lt;BR /&gt;This becomes even more pronounced when an IO cache is active (XFC in V7.3-x, VIOC in 7.2-x, third party before that). Very simplified: an IO cache does not only know a process's pages, it knows which disk blocks are where in memory. Therefore, a block that was already read for a completely unrelated process, or even an already-ended one, might already/still be in memory, and no disk IO is needed. Sadly for statistics, to VMS this still represents a (very fast) hard fault; you need the reporting tool of the cache to differentiate those.&lt;BR /&gt;If you want to monitor the different fault types: in the Pagefault subscreen of MONITOR SYSTEM, a "|" (pipe) separates hard (left of it) from soft (right) faults.&lt;BR /&gt;MONITOR PAGE shows more detail:&lt;BR /&gt;the whole second block is various types of soft faults.&lt;BR /&gt;Special case of hard fault: if a process is at its maximum working set and needs more, pages 'fall out' of the working set. If they have been modified, they move to the modified page list. If THAT fills over its threshold, a large chunk gets written to the pagefile, and if a page is needed again, that ALSO is a hard fault. (Well, the page might still be on the free list, but you get the idea.) So, as long as there is enough memory, it is advantageous to have a large modified page list.&lt;BR /&gt;&lt;BR /&gt;Since you HAVE extra memory, it MIGHT still be well worth trying to decrease page faults.&lt;BR /&gt;Although the overhead of soft faults is MUCH less than that of hard faults, it _IS_ overhead.&lt;BR /&gt;So, if you can identify processes that DO incur many soft faults, it will be worth the trouble of assigning them larger working sets.&lt;BR /&gt;Quick and easy: just give everyone more WSQUOTA &amp;amp; WSEXTEND (check WSMAX as well, because that sets a hard upper limit).&lt;BR /&gt;If you suspect a process of excessive soft faulting: SHO PROC/ID=../CONT and watch the working set. If it stays under WSEXTENT, there is no gain. But if it hovers near WSEXTENT while the page faults keep running, there may be a performance gain available by raising WSEXTENT.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;hth.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Jan&lt;BR /&gt;</description>
      <pubDate>Thu, 20 May 2004 12:02:03 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/poor-performance-after-memory-upgrade/m-p/3282116#M64198</guid>
      <dc:creator>Jan van den Ende</dc:creator>
      <dc:date>2004-05-20T12:02:03Z</dc:date>
    </item>
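Jan's "quick and easy" route of giving accounts more WSQUOTA and WSEXTEND is done per user in AUTHORIZE. A hedged sketch: the username and the values (in pagelets) are purely illustrative, and must stay below WSMAX:

```
$ RUN SYS$SYSTEM:AUTHORIZE
UAF> MODIFY SMITH /WSQUOTA=8192 /WSEXTENT=32768  ! illustrative values
UAF> EXIT
$ ! New values apply at the user's next login
```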
    <item>
      <title>Re: poor performance after memory upgrade</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/poor-performance-after-memory-upgrade/m-p/3282117#M64199</link>
      <description>Peter,&lt;BR /&gt;&lt;BR /&gt;&amp;gt; The average page fault rate is currently 354.62.&lt;BR /&gt;&amp;gt; And on average 22.50% of these are hard faults.&lt;BR /&gt;&lt;BR /&gt;What concerns me greatly is a hard fault rate of 22.50%, or roughly 79/sec. This is extremely high for almost any Alpha environment.&lt;BR /&gt;&lt;BR /&gt;When tuning systems, I look for a hard fault rate over 10/sec (which may or may not be a problem).&lt;BR /&gt;&lt;BR /&gt;After doing what has already been suggested, do you still see a high page fault rate?&lt;BR /&gt;&lt;BR /&gt;If so, hard faults will become the focus for investigating performance. If you do a&lt;BR /&gt;$ monitor page&lt;BR /&gt;and run it for a few minutes, this would be a good start toward determining what action needs to be taken.&lt;BR /&gt;&lt;BR /&gt;For the moment, it doesn't really matter if your soft faults are over 1,000/sec. If your hard faults are high, there are techniques to remedy this. Some of them are involved, some are quite easy; it just depends.&lt;BR /&gt;&lt;BR /&gt;If your system still isn't where you think it should be, let us know what the above $ monitor page looks like (a $ show memory wouldn't hurt either).&lt;BR /&gt;&lt;BR /&gt;john</description>
      <pubDate>Thu, 20 May 2004 23:21:47 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/poor-performance-after-memory-upgrade/m-p/3282117#M64199</guid>
      <dc:creator>John Eerenberg</dc:creator>
      <dc:date>2004-05-20T23:21:47Z</dc:date>
    </item>
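John's data-gathering step, spelled out as commands; a minimal sketch (the interval is illustrative):

```
$ MONITOR PAGE /INTERVAL=5   ! watch hard vs. soft fault rates for a few minutes
$ SHOW MEMORY /FULL          ! free/modified list sizes, pagefile use, cache
```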
    <item>
      <title>Re: poor performance after memory upgrade</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/poor-performance-after-memory-upgrade/m-p/3282118#M64200</link>
      <description>Hi John,&lt;BR /&gt;&lt;BR /&gt;I have attached the output from 'show mem'. I have also attached an Excel file of the Alpha statistics that I have been monitoring over the previous week or so.&lt;BR /&gt;In reply to the previous question: it seems to be all processes that are page faulting, rather than just a few. Should I increase all process working sets?&lt;BR /&gt;Hope you can help.&lt;BR /&gt;&lt;BR /&gt;Thanks&lt;BR /&gt;&lt;BR /&gt;Peter</description>
      <pubDate>Fri, 21 May 2004 03:18:53 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/poor-performance-after-memory-upgrade/m-p/3282118#M64200</guid>
      <dc:creator>Peter Clarke</dc:creator>
      <dc:date>2004-05-21T03:18:53Z</dc:date>
    </item>
    <item>
      <title>Re: poor performance after memory upgrade</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/poor-performance-after-memory-upgrade/m-p/3282119#M64201</link>
      <description>Here is 'show mem'&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Fri, 21 May 2004 03:19:40 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/poor-performance-after-memory-upgrade/m-p/3282119#M64201</guid>
      <dc:creator>Peter Clarke</dc:creator>
      <dc:date>2004-05-21T03:19:40Z</dc:date>
    </item>
    <item>
      <title>Re: poor performance after memory upgrade</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/poor-performance-after-memory-upgrade/m-p/3282120#M64202</link>
      <description>Peter,&lt;BR /&gt;&lt;BR /&gt;I'm on the road right now, so I don't have Excel at this time. However, the show mem is interesting. You seem to have plenty of memory on the freelist. The modified list might be on the small side.&lt;BR /&gt;&lt;BR /&gt;Out of several possibilities, your modified page list may be too small (check page write IOs). Another possibility is that you have a lot of image activations. Do you have a lot of people (or processes) logging in/out? Or running .COM procedures over and over?&lt;BR /&gt;&lt;BR /&gt;Sorry I can't provide more. Maybe someone else can chime in . . .&lt;BR /&gt;&lt;BR /&gt;john</description>
      <pubDate>Fri, 21 May 2004 09:51:49 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/poor-performance-after-memory-upgrade/m-p/3282120#M64202</guid>
      <dc:creator>John Eerenberg</dc:creator>
      <dc:date>2004-05-21T09:51:49Z</dc:date>
    </item>
    <item>
      <title>Re: poor performance after memory upgrade</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/poor-performance-after-memory-upgrade/m-p/3282121#M64203</link>
      <description>Sorry John,&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;there is NO problem there!&lt;BR /&gt;&lt;BR /&gt;&lt;SNIP&gt;&lt;BR /&gt;&lt;BR /&gt;DISK$ALPHASYS:[SYS0.SYSEXE]PAGEFILE.SYS&lt;BR /&gt;                                                 254      524792      524792&lt;BR /&gt;&lt;BR /&gt;&lt;/SNIP&gt;&lt;BR /&gt;&lt;BR /&gt;It may wrap unpleasantly, but anyway:&lt;BR /&gt;free pagefile pages = total pagefile pages,&lt;BR /&gt;meaning: NO modified pages in the pagefile, i.e., NO problems there!&lt;BR /&gt;&lt;BR /&gt;(But Peter, I WAS planning to ask for this, so that's covered as well now.)&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Jan</description>
      <pubDate>Fri, 21 May 2004 11:52:33 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/poor-performance-after-memory-upgrade/m-p/3282121#M64203</guid>
      <dc:creator>Jan van den Ende</dc:creator>
      <dc:date>2004-05-21T11:52:33Z</dc:date>
    </item>
    <item>
      <title>Re: poor performance after memory upgrade</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/poor-performance-after-memory-upgrade/m-p/3282122#M64204</link>
      <description>Process creation will generate hard page faults. Do you create a lot of processes?&lt;BR /&gt;&lt;BR /&gt;Also, image activation will cause hard page faults. Are the images that get run frequently installed?</description>
      <pubDate>Fri, 21 May 2004 11:56:16 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/poor-performance-after-memory-upgrade/m-p/3282122#M64204</guid>
      <dc:creator>Cass Witkowski</dc:creator>
      <dc:date>2004-05-21T11:56:16Z</dc:date>
    </item>
    <item>
      <title>Re: poor performance after memory upgrade</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/poor-performance-after-memory-upgrade/m-p/3282123#M64205</link>
      <description>Peter,&lt;BR /&gt;&lt;BR /&gt;One more thing.&lt;BR /&gt;&lt;BR /&gt;I noted&lt;BR /&gt;&lt;BR /&gt;&lt;SNIP&gt;&lt;BR /&gt; Vols in Full XFC mode              0    Vols in VIOC Compatible mode         6&lt;BR /&gt;&lt;/SNIP&gt;&lt;BR /&gt;&lt;BR /&gt;This DOES mean you are on 7.3(-x).&lt;BR /&gt;The VIOC compatibility mode, however, is mainly for use during a rolling upgrade (and you don't want XFC if you are on 7.3 (no -) without the recent patches!).&lt;BR /&gt;&lt;BR /&gt;I'm at home now, no docs readily at hand, but (at least!) the Installation/Upgrade manual has a chapter on moving from VIOC to XFC.&lt;BR /&gt;You will want to move to full XFC mode.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Well, I'd better have another strong look; the longer one tries, the more one finds...&lt;BR /&gt;&lt;BR /&gt;Jan</description>
      <pubDate>Fri, 21 May 2004 12:00:52 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/poor-performance-after-memory-upgrade/m-p/3282123#M64205</guid>
      <dc:creator>Jan van den Ende</dc:creator>
      <dc:date>2004-05-21T12:00:52Z</dc:date>
    </item>
    <item>
      <title>Re: poor performance after memory upgrade</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/poor-performance-after-memory-upgrade/m-p/3282124#M64206</link>
      <description>Hi Peter,&lt;BR /&gt;looking into your attachments, I guess:&lt;BR /&gt;- your system does a lot of I/O (due to a DBMS?);&lt;BR /&gt;- I/O peaks on specific days (for example May 7th and the morning of May 10th);&lt;BR /&gt;- you have 4 GB of cache (50% of RAM) to increase I/O performance;&lt;BR /&gt;- your page file is free;&lt;BR /&gt; &lt;BR /&gt;I think your bottleneck is not page faults but I/O; if you have a DBMS, you could increase the working set of the DBMS process. Monitor this process if it's critical for you.&lt;BR /&gt; &lt;BR /&gt;@Antoniov&lt;BR /&gt;</description>
      <pubDate>Sat, 22 May 2004 04:03:32 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/poor-performance-after-memory-upgrade/m-p/3282124#M64206</guid>
      <dc:creator>Antoniov.</dc:creator>
      <dc:date>2004-05-22T04:03:32Z</dc:date>
    </item>
    <item>
      <title>Re: poor performance after memory upgrade</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/poor-performance-after-memory-upgrade/m-p/3282125#M64207</link>
      <description>I think Cass has the most pertinent reply.&lt;BR /&gt;The most likely cause for the hard fault rate is image activation.&lt;BR /&gt;&lt;BR /&gt;Do you have the Oracle images installed, and installed properly to maximize sharing?&lt;BR /&gt;What lives on DKA1? Is that $ORACLE_HOME/bin?&lt;BR /&gt;&lt;BR /&gt;What you also may want to do is correlate the known Oracle IOs (statspack or some such) with the measured IO to help explain parts.&lt;BR /&gt;&lt;BR /&gt;I think the most interesting lines from the show memory are: "Extended File Cache  (Time of last reset: 20-MAY-2004 13:37:53.05)&lt;BR /&gt; Allocated (GBytes)              3.46    Maximum size (GBytes)             4.00&lt;BR /&gt;&lt;BR /&gt;So the system decided (probably rightly so) that it had nothing better to do with its free memory than to give all of it to the XFC cache. For Oracle-heavy applications, that is often a waste (double buffering). You are better off giving it to the SGA (which you already did, and nicely so, in the reserved memory; but maybe you can now give it more?).&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Back to the 'frequent image activations'. Could that be the case? Can you influence that? Use some parameter in the application to keep connections (to Oracle) open? Use MTS to keep a pool of slaves ready to be used?&lt;BR /&gt;&lt;BR /&gt;Get yourself a 'hot file' package of some sort, or dive into the XFC details showing files in detail to understand where (page fault) IOs are being resolved. SYS$SYSTEM:SET.EXE? LOGINOUT? ORACLE.EXE?...&lt;BR /&gt;&lt;BR /&gt;Finally, I would recommend you check the (SY)LOGIN.COM and such that get involved when an Oracle slave is activated. The steps leading to an Oracle slave activation should be made very lean and mean. Junk any 'set proc/name', no terminal testing/setting, clean out that path... Oracle adds enough steps already!&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Hope this helps,&lt;BR /&gt;Hein.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Sat, 22 May 2004 12:28:30 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/poor-performance-after-memory-upgrade/m-p/3282125#M64207</guid>
      <dc:creator>Hein van den Heuvel</dc:creator>
      <dc:date>2004-05-22T12:28:30Z</dc:date>
    </item>
    <item>
      <title>Re: poor performance after memory upgrade</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/poor-performance-after-memory-upgrade/m-p/3282126#M64208</link>
      <description>Thanks, I will try a few of these ideas.&lt;BR /&gt;&lt;BR /&gt;Just a couple of questions:&lt;BR /&gt;How can I take some memory away from the XFC cache and give it to the SGA?&lt;BR /&gt;&lt;BR /&gt;I have attached the agen$params.report file; it has a few warnings in it. Are these anything to worry about?&lt;BR /&gt;&lt;BR /&gt;Thanks&lt;BR /&gt;&lt;BR /&gt;Peter&lt;BR /&gt;</description>
      <pubDate>Mon, 24 May 2004 03:37:00 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/poor-performance-after-memory-upgrade/m-p/3282126#M64208</guid>
      <dc:creator>Peter Clarke</dc:creator>
      <dc:date>2004-05-24T03:37:00Z</dc:date>
    </item>
    <item>
      <title>Re: poor performance after memory upgrade</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/poor-performance-after-memory-upgrade/m-p/3282127#M64209</link>
      <description>Here is the setparams file as well, just in case anything is set wrong....</description>
      <pubDate>Mon, 24 May 2004 03:37:59 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/poor-performance-after-memory-upgrade/m-p/3282127#M64209</guid>
      <dc:creator>Peter Clarke</dc:creator>
      <dc:date>2004-05-24T03:37:59Z</dc:date>
    </item>
    <item>
      <title>Re: poor performance after memory upgrade</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/poor-performance-after-memory-upgrade/m-p/3282128#M64210</link>
      <description>Peter,&lt;BR /&gt;&lt;BR /&gt;Not sure how to allocate memory to Oracle, but you can limit the memory allocation of XFC by setting VCC_MAX_CACHE via SYSGEN. This parameter is dynamic, so it should kick in without a reboot. The default is half the physical memory.&lt;BR /&gt;&lt;BR /&gt;As previously mentioned, try to install frequently used images.&lt;BR /&gt;&lt;BR /&gt;regards&lt;BR /&gt;&lt;BR /&gt;Brian&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Mon, 24 May 2004 04:03:06 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/poor-performance-after-memory-upgrade/m-p/3282128#M64210</guid>
      <dc:creator>Brian Reiter</dc:creator>
      <dc:date>2004-05-24T04:03:06Z</dc:date>
    </item>
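Brian's VCC_MAX_CACHE change might look like the sketch below in SYSGEN. The value is purely illustrative, and its units depend on the VMS version (megabytes for XFC, if memory serves); check the parameter's documentation before setting it:

```
$ RUN SYS$SYSTEM:SYSGEN
SYSGEN> USE ACTIVE
SYSGEN> SET VCC_MAX_CACHE 2048   ! illustrative: cap the cache
SYSGEN> WRITE ACTIVE             ! dynamic parameter, no reboot needed
SYSGEN> EXIT
```

To keep the setting across future AUTOGEN runs, the same value would also go into SYS$SYSTEM:MODPARAMS.DAT.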
    <item>
      <title>Re: poor performance after memory upgrade</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/poor-performance-after-memory-upgrade/m-p/3282129#M64211</link>
      <description>Peter:&lt;BR /&gt;&lt;BR /&gt;Autogen report:&lt;BR /&gt;&lt;BR /&gt;- There are two parameter name error reports, coming from MODPARAMS. They are simple typos, but correct them anyway.&lt;BR /&gt;- A lot of parameters have multiple entries.&lt;BR /&gt;AUTOGEN overwrites any previous value with the latest (they are just temporary symbols), but for human clarity it is much handier to have those weeded out a bit.&lt;BR /&gt;&lt;BR /&gt;Apart from that, I noted nothing immediately obvious. That's to be expected with uptime &amp;lt; 24 hours; this may (but need not) change with longer uptime.&lt;BR /&gt;&lt;BR /&gt;Taking memory from XFC to give to the SGA:&lt;BR /&gt;it's the other way around. You GIVE memory to the SGA (and the free list, and the modified list, and whatever...), and XFC is ALLOWED to use the remainder.&lt;BR /&gt;I'm not familiar with the latest Oracles, but way back when, the system manager (in cooperation with the DBA) defined the SGA size in ORA.INI.&lt;BR /&gt;(If that has changed, anyone on current Oracle please correct me; but there will undoubtedly be a way to tell Oracle what size SGA to initialize.)&lt;BR /&gt;&lt;BR /&gt;hth&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Jan</description>
      <pubDate>Mon, 24 May 2004 04:09:01 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/poor-performance-after-memory-upgrade/m-p/3282129#M64211</guid>
      <dc:creator>Jan van den Ende</dc:creator>
      <dc:date>2004-05-24T04:09:01Z</dc:date>
    </item>
    <item>
      <title>Re: poor performance after memory upgrade</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/poor-performance-after-memory-upgrade/m-p/3282130#M64212</link>
      <description>Hi Peter,&lt;BR /&gt;I had a preliminary look at your attachment, and it seems there is no significant update because your system has run for too little time for AUTOGEN.&lt;BR /&gt;Look at SCSSYSTEMID and SCSNODE: you may find these values several times in your MODPARAMS.DAT; anyway, your node is "EURAXP" and the DECnet address is 1.405.&lt;BR /&gt;GBLPAGES and GBLSECTIONS will be increased by AUTOGEN; I think you will not get much benefit from this.&lt;BR /&gt;Better results may come from the MIN_VIRTUALPAGECNT and MIN_WSMAX values.&lt;BR /&gt;I think it's better to repeat AUTOGEN after 2-3 days of normal work.&lt;BR /&gt; &lt;BR /&gt;Antonio Vigliotti&lt;BR /&gt;</description>
      <pubDate>Mon, 24 May 2004 04:12:50 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/poor-performance-after-memory-upgrade/m-p/3282130#M64212</guid>
      <dc:creator>Antoniov.</dc:creator>
      <dc:date>2004-05-24T04:12:50Z</dc:date>
    </item>
    <item>
      <title>Re: poor performance after memory upgrade</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/poor-performance-after-memory-upgrade/m-p/3282131#M64213</link>
      <description>You mention installing frequently used images. How can I find out which ones are frequently used, and how would I install them?&lt;BR /&gt;Sorry if I sound a bit dim, but this is all relatively new to me...&lt;BR /&gt;&lt;BR /&gt;Regards&lt;BR /&gt;&lt;BR /&gt;Peter&lt;BR /&gt;</description>
      <pubDate>Mon, 24 May 2004 04:27:00 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/poor-performance-after-memory-upgrade/m-p/3282131#M64213</guid>
      <dc:creator>Peter Clarke</dc:creator>
      <dc:date>2004-05-24T04:27:00Z</dc:date>
    </item>
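One hedged way to answer Peter's last question: image-level accounting can reveal which images are activated most often, and INSTALL makes a hot image known to the system so activations fault less. The image name below is only an example; INSTALL entries do not survive a reboot, so permanent ones belong in the site startup procedure:

```
$ SET ACCOUNTING /ENABLE=IMAGE                   ! record image terminations
$ ! ...run the normal workload for a while, then summarize by image:
$ ACCOUNTING /TYPE=IMAGE /SUMMARY=IMAGE /REPORT=RECORDS
$ ! Install a frequently activated image shared and header-resident:
$ INSTALL ADD DKA1:[ORACLE.BIN]EXAMPLE.EXE /OPEN /HEADER_RESIDENT /SHARED
$ INSTALL LIST /FULL                             ! shows access counts for known images
```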
  </channel>
</rss>

