<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: Higher memory utilization after OS upgrade in Operating System - OpenVMS</title>
    <link>https://community.hpe.com/t5/operating-system-openvms/higher-memory-utilization-after-os-upgrade/m-p/3967071#M82702</link>
    <description>In contrast with what is being said here, I found in 2004 that XFC memory is NOT released when applications need it. I don't have the data anymore, but perhaps it's better to test it.&lt;BR /&gt;&lt;BR /&gt;Wim</description>
    <pubDate>Fri, 23 Mar 2007 02:43:11 GMT</pubDate>
    <dc:creator>Wim Van den Wyngaert</dc:creator>
    <dc:date>2007-03-23T02:43:11Z</dc:date>
    <item>
      <title>Higher memory utilization after OS upgrade</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/higher-memory-utilization-after-os-upgrade/m-p/3967059#M82690</link>
      <description>Hi folks,&lt;BR /&gt;&lt;BR /&gt;I would like to ask for some advice regarding our systems. We have a 2-node ES80 cluster currently running OpenVMS v7.3-2. Last January 24, we upgraded the memory on the nodes in this cluster from 4GB to 8GB, and upgraded the OS from v7.3-1 to v7.3-2. Before the memory and OS upgrade, we ran an AUTOGEN with feedback on the nodes, and we ran AUTOGEN again after the upgrades. However, it now seems that our memory utilization has gone up, even with supposedly the same load as before the memory upgrade. I have been investigating this using Performance Advisor, and it seems that the majority of the memory allocation is being used by "IO cache". Can someone help me understand how we can reduce this IO cache allocation? Which SYSGEN parameter should we adjust to achieve this? Otherwise, what benefits do we get if we just leave this IO cache allocation as it is?&lt;BR /&gt;&lt;BR /&gt;We have also observed some increase in CPU utilization, but I still need to investigate this further as well.&lt;BR /&gt;&lt;BR /&gt;I have attached a PSPA output of our node's CPU and memory utilization prior to and after the memory and OS upgrade.&lt;BR /&gt;&lt;BR /&gt;We have a scheduled downtime sometime this April and would like to do the fine-tuning then.&lt;BR /&gt;&lt;BR /&gt;Thanks in advance to those who will help!</description>
      <pubDate>Thu, 22 Mar 2007 10:34:05 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/higher-memory-utilization-after-os-upgrade/m-p/3967059#M82690</guid>
      <dc:creator>roose</dc:creator>
      <dc:date>2007-03-22T10:34:05Z</dc:date>
    </item>
    <item>
      <title>Re: Higher memory utilization after OS upgrade</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/higher-memory-utilization-after-os-upgrade/m-p/3967060#M82691</link>
      <description>Free memory is a resource that can be used.&lt;BR /&gt;&lt;BR /&gt;I/O from cache is fast.&lt;BR /&gt;&lt;BR /&gt;I/O from disk -- even an EVA -- is slower.&lt;BR /&gt;&lt;BR /&gt;XFC is using available free physical memory as a (larger) I/O cache, as part of an ongoing effort to reduce the number of disk I/Os required and to speed application performance.&lt;BR /&gt;&lt;BR /&gt;As for changes in local application requirements, XFC is coded to transparently release this physical memory back, if and when your local applications require added physical memory.&lt;BR /&gt;&lt;BR /&gt;XFC is using idle memory, and it is polite about its use.  XFC is borrowing this free and idle physical memory to speed your I/O.&lt;BR /&gt;&lt;BR /&gt;If the XFC cache hit rates don't support it, you can certainly configure XFC not to use the free memory.  Me?  I'd leave it, and I'd also look at enabling and utilizing RMS Global Buffers and other related I/O optimizations.  I might well choose to toss more memory in, and allow XFC to grow its caches.  (I have not looked at your performance data, however.)&lt;BR /&gt;&lt;BR /&gt;Stephen Hoffman&lt;BR /&gt;HoffmanLabs&lt;BR /&gt;</description>
      <pubDate>Thu, 22 Mar 2007 10:43:54 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/higher-memory-utilization-after-os-upgrade/m-p/3967060#M82691</guid>
      <dc:creator>Hoff</dc:creator>
      <dc:date>2007-03-22T10:43:54Z</dc:date>
    </item>
    <item>
      <title>Re: Higher memory utilization after OS upgrade</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/higher-memory-utilization-after-os-upgrade/m-p/3967061#M82692</link>
      <description>ps: more CPU is expected when shoveling data through cache.  This is a trade-off, and using extra CPU and memory moves is usually preferable to disk I/O, in terms of aggregate application performance.&lt;BR /&gt;&lt;BR /&gt;Load up the T4 probes and establish a performance baseline.  From that, you can make decisions around tuning.&lt;BR /&gt;&lt;BR /&gt;And remember that a CPU running near capacity in user mode, or a system with little free memory, might not be a bad thing.  Tuning isn't as simple as it once was.  (If it was ever truly simple.)&lt;BR /&gt;&lt;BR /&gt;The real question is around application performance, and around the current performance bottleneck.  And T4 can help here.</description>
      <pubDate>Thu, 22 Mar 2007 10:48:29 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/higher-memory-utilization-after-os-upgrade/m-p/3967061#M82692</guid>
      <dc:creator>Hoff</dc:creator>
      <dc:date>2007-03-22T10:48:29Z</dc:date>
    </item>
    <item>
      <title>Re: Higher memory utilization after OS upgrade</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/higher-memory-utilization-after-os-upgrade/m-p/3967062#M82693</link>
      <description>Roose,&lt;BR /&gt;&lt;BR /&gt;you should NOT want to reduce memory utilisation by the cache!&lt;BR /&gt;&lt;BR /&gt;Roughly speaking, any data that has been brought in from disk also goes into that cache (FIFO), and if another request for it is made before it is flushed... HEY, it IS already in memory! Map or copy it (whichever is appropriate), and we have skipped the processing of, and waiting for, a disk transfer.&lt;BR /&gt;Effectively, you have bought IO performance, which you paid for with memory.&lt;BR /&gt;&lt;BR /&gt;hth&lt;BR /&gt;&lt;BR /&gt;Proost.&lt;BR /&gt;&lt;BR /&gt;Have one on me.&lt;BR /&gt;&lt;BR /&gt;jpe</description>
      <pubDate>Thu, 22 Mar 2007 10:50:50 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/higher-memory-utilization-after-os-upgrade/m-p/3967062#M82693</guid>
      <dc:creator>Jan van den Ende</dc:creator>
      <dc:date>2007-03-22T10:50:50Z</dc:date>
    </item>
    <item>
      <title>Re: Higher memory utilization after OS upgrade</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/higher-memory-utilization-after-os-upgrade/m-p/3967063#M82694</link>
      <description>Roose,&lt;BR /&gt;&lt;BR /&gt;PSPA nicely graphs the IO cache in a different colour. If you look at all other usage of memory - aside from the XFC cache - and factor in the doubling of memory, I don't think your memory utilization has increased much.&lt;BR /&gt;&lt;BR /&gt;Utilizing much of the available memory is not a bad thing. XFC will release its IO cache memory if it's required for process working sets. Over-utilizing memory only hurts if it leads to increased paging and/or swapping. Do you see any significant paging IO rates?&lt;BR /&gt;&lt;BR /&gt;Volker.</description>
      <pubDate>Thu, 22 Mar 2007 12:05:33 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/higher-memory-utilization-after-os-upgrade/m-p/3967063#M82694</guid>
      <dc:creator>Volker Halle</dc:creator>
      <dc:date>2007-03-22T12:05:33Z</dc:date>
    </item>
    <item>
      <title>Re: Higher memory utilization after OS upgrade</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/higher-memory-utilization-after-os-upgrade/m-p/3967064#M82695</link>
      <description>Roose,&lt;BR /&gt;&lt;BR /&gt;  You paid good money for that memory, so I'd hope that OpenVMS will use it and give you value for your hard-earned dollars. Simply put, XFC will utilise any memory not in use by applications to improve your I/O performance. It will return memory to applications on demand.&lt;BR /&gt;&lt;BR /&gt;  The only downside is that all your performance monitoring screens look scary, with memory utilisation (hopefully!) in the high 90s. Please consider this a very good thing and trust that OpenVMS will maximise your ROI on memory.&lt;BR /&gt;&lt;BR /&gt;  You may be able to tune the cache so it leaves some memory free (I don't know offhand how, because I can't imagine why anyone would want to do it!), but the net result will almost certainly be worse performance overall.</description>
      <pubDate>Thu, 22 Mar 2007 16:49:08 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/higher-memory-utilization-after-os-upgrade/m-p/3967064#M82695</guid>
      <dc:creator>John Gillings</dc:creator>
      <dc:date>2007-03-22T16:49:08Z</dc:date>
    </item>
    <item>
      <title>Re: Higher memory utilization after OS upgrade</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/higher-memory-utilization-after-os-upgrade/m-p/3967065#M82696</link>
      <description>Memory is cheap and plentiful these days. I'd suggest that in most modern configurations, I/O is the main area in which performance gains and losses are made. You might want to use PSPA to compare your direct I/O and disk I/O queue lengths before and after the change.&lt;BR /&gt;&lt;BR /&gt; Also, consider that page faulting and swapping are more significant indicators of memory shortage than utilisation, although these symptoms can also result from a poorly tuned system, no matter how much memory is available.</description>
      <pubDate>Thu, 22 Mar 2007 19:29:57 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/higher-memory-utilization-after-os-upgrade/m-p/3967065#M82696</guid>
      <dc:creator>Martin Hughes</dc:creator>
      <dc:date>2007-03-22T19:29:57Z</dc:date>
    </item>
    <item>
      <title>Re: Higher memory utilization after OS upgrade</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/higher-memory-utilization-after-os-upgrade/m-p/3967066#M82697</link>
      <description>Martin Hughes wrote: &lt;BR /&gt;&lt;BR /&gt;&amp;gt;&amp;gt;&amp;gt;Memory is cheap and plentiful these days.  &lt;BR /&gt;&lt;BR /&gt;However, this is an ES80, and for that, even third-party memory isn't cheap.&lt;BR /&gt;&lt;BR /&gt;That said, it is much better to let the memory be used by the cache than to have 50% in the free pool.&lt;BR /&gt;&lt;BR /&gt;Roose, did the users think the system was slower or faster after the upgrade? Do jobs complete faster or slower? Do you care if there is higher CPU utilization or a lower free memory pool as long as the work is getting done (hopefully faster than it was before the upgrade)?</description>
      <pubDate>Thu, 22 Mar 2007 20:21:13 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/higher-memory-utilization-after-os-upgrade/m-p/3967066#M82697</guid>
      <dc:creator>Jon Pinkley</dc:creator>
      <dc:date>2007-03-22T20:21:13Z</dc:date>
    </item>
    <item>
      <title>Re: Higher memory utilization after OS upgrade</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/higher-memory-utilization-after-os-upgrade/m-p/3967067#M82698</link>
      <description>Thanks to those who took the time to answer my questions.&lt;BR /&gt;&lt;BR /&gt;With these inputs, I'll go ahead and dig deeper into analyzing the performance data of our systems, especially memory and CPU utilization. So far, I am seeing a number of hard pagefaults taking place on our nodes, but fortunately, we have not received any complaints from the users yet concerning performance degradation. Unfortunately, we have not yet been able to implement a process for measuring application response time and performance. Hopefully, once we are able to do this, I can correlate the application metrics with our system performance much more easily.&lt;BR /&gt;&lt;BR /&gt;Just one last question though: aside from XFC, are there any other components I should look at that contribute to our IO cache?</description>
      <pubDate>Fri, 23 Mar 2007 01:29:21 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/higher-memory-utilization-after-os-upgrade/m-p/3967067#M82698</guid>
      <dc:creator>roose</dc:creator>
      <dc:date>2007-03-23T01:29:21Z</dc:date>
    </item>
    <item>
      <title>Re: Higher memory utilization after OS upgrade</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/higher-memory-utilization-after-os-upgrade/m-p/3967068#M82699</link>
      <description>Roose,&lt;BR /&gt;&lt;BR /&gt;the memory shown as 'IO Cache' should all be used by XFC.&lt;BR /&gt;&lt;BR /&gt;$ SHOW MEM/CACHE&lt;BR /&gt;&lt;BR /&gt;Volker.</description>
      <pubDate>Fri, 23 Mar 2007 01:33:40 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/higher-memory-utilization-after-os-upgrade/m-p/3967068#M82699</guid>
      <dc:creator>Volker Halle</dc:creator>
      <dc:date>2007-03-23T01:33:40Z</dc:date>
    </item>
    <item>
      <title>Re: Higher memory utilization after OS upgrade</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/higher-memory-utilization-after-os-upgrade/m-p/3967069#M82700</link>
      <description>Volker,&lt;BR /&gt;&lt;BR /&gt;Thanks for your reply. Based on that command's output, it's only the XFC.&lt;BR /&gt;&lt;BR /&gt;From that command as well, I noticed that the read hit rate of our XFC is just around 74% on one node, and 86% on another node. Does this mean that the caching is not effective on our nodes? If so, what are the ways we can make it more effective?</description>
      <pubDate>Fri, 23 Mar 2007 01:47:43 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/higher-memory-utilization-after-os-upgrade/m-p/3967069#M82700</guid>
      <dc:creator>roose</dc:creator>
      <dc:date>2007-03-23T01:47:43Z</dc:date>
    </item>
    <item>
      <title>Re: Higher memory utilization after OS upgrade</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/higher-memory-utilization-after-os-upgrade/m-p/3967070#M82701</link>
      <description>Roose,&lt;BR /&gt;&lt;BR /&gt;on both nodes in your cluster, you are saving about 100 read IOs/second through the use of the XFC cache. What else would your memory be used for, if not for the XFC cache?&lt;BR /&gt;&lt;BR /&gt;You may also have noticed that the IO load (read/write ratio) is quite different on those 2 systems. XFC is a generic method of caching disk data. Depending on the application, there may be even more specific methods of caching IO data (RMS global buffers etc.).&lt;BR /&gt;&lt;BR /&gt;Volker.</description>
      <pubDate>Fri, 23 Mar 2007 02:05:29 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/higher-memory-utilization-after-os-upgrade/m-p/3967070#M82701</guid>
      <dc:creator>Volker Halle</dc:creator>
      <dc:date>2007-03-23T02:05:29Z</dc:date>
    </item>
    <item>
      <title>Re: Higher memory utilization after OS upgrade</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/higher-memory-utilization-after-os-upgrade/m-p/3967071#M82702</link>
      <description>In contrast with what is being said here, I found in 2004 that XFC memory is NOT released when applications need it. I don't have the data anymore, but perhaps it's better to test it.&lt;BR /&gt;&lt;BR /&gt;Wim</description>
      <pubDate>Fri, 23 Mar 2007 02:43:11 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/higher-memory-utilization-after-os-upgrade/m-p/3967071#M82702</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2007-03-23T02:43:11Z</dc:date>
    </item>
    <item>
      <title>Re: Higher memory utilization after OS upgrade</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/higher-memory-utilization-after-os-upgrade/m-p/3967072#M82703</link>
      <description>You can use PSPA to find out which processes are generating the hard page faults. This may simply be the result of some processes having insufficient working set quotas.&lt;BR /&gt;&lt;BR /&gt; Have you run AUTOGEN since the run you did immediately after the upgrade? I.e., has it been run in the new configuration after a period of load?</description>
      <pubDate>Fri, 23 Mar 2007 03:24:33 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/higher-memory-utilization-after-os-upgrade/m-p/3967072#M82703</guid>
      <dc:creator>Martin Hughes</dc:creator>
      <dc:date>2007-03-23T03:24:33Z</dc:date>
    </item>
    <item>
      <title>Re: Higher memory utilization after OS upgrade</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/higher-memory-utilization-after-os-upgrade/m-p/3967073#M82704</link>
      <description>Tested it on 7.3.&lt;BR /&gt;&lt;BR /&gt;Workstation with 256 MB. VCC_MAX set to -1, thus 128 MB is the max size.&lt;BR /&gt;&lt;BR /&gt;Allocated 200 MB and made it dirty.&lt;BR /&gt;&lt;BR /&gt;Again and again. At the end the pagefile was full, but XFC still had 12 MB allocated (started the test with 77 MB).&lt;BR /&gt;&lt;BR /&gt;Also read this &lt;A href="http://forums1.itrc.hp.com/service/forums/questionanswer.do?threadId=533755" target="_blank"&gt;http://forums1.itrc.hp.com/service/forums/questionanswer.do?threadId=533755&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;BTW: I ran the malloc again for 5 MB and the system hung.&lt;BR /&gt;&lt;BR /&gt;Wim</description>
      <pubDate>Fri, 23 Mar 2007 03:36:59 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/higher-memory-utilization-after-os-upgrade/m-p/3967073#M82704</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2007-03-23T03:36:59Z</dc:date>
    </item>
    <item>
      <title>Re: Higher memory utilization after OS upgrade</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/higher-memory-utilization-after-os-upgrade/m-p/3967074#M82705</link>
      <description>If you use PSPA to look for processes with high pagefaults, check for the number of image activations, as pages from the image file have to be faulted in.  In the last version of PSPA I used (prior to it being sold to CA), there was a column, IMGCNT, that indicated the number of image activations.  Even with sufficient working set limits, image activations will cause page faults.</description>
      <pubDate>Fri, 23 Mar 2007 04:47:11 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/higher-memory-utilization-after-os-upgrade/m-p/3967074#M82705</guid>
      <dc:creator>Jon Pinkley</dc:creator>
      <dc:date>2007-03-23T04:47:11Z</dc:date>
    </item>
    <item>
      <title>Re: Higher memory utilization after OS upgrade</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/higher-memory-utilization-after-os-upgrade/m-p/3967075#M82706</link>
      <description>Wim Van den Wyngaert describes problems with XFC in V7.3.&lt;BR /&gt;&lt;BR /&gt;However, there have been fixes since then.  See the release notes for 7.3-2.&lt;BR /&gt;&lt;BR /&gt;Wim, have you reproduced your test on 7.3-2?&lt;BR /&gt; &lt;BR /&gt;Jon</description>
      <pubDate>Fri, 23 Mar 2007 04:53:13 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/higher-memory-utilization-after-os-upgrade/m-p/3967075#M82706</guid>
      <dc:creator>Jon Pinkley</dc:creator>
      <dc:date>2007-03-23T04:53:13Z</dc:date>
    </item>
    <item>
      <title>Re: Higher memory utilization after OS upgrade</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/higher-memory-utilization-after-os-upgrade/m-p/3967076#M82707</link>
      <description>I'm the XFC maintainer.&lt;BR /&gt;&lt;BR /&gt;First, get the latest XFC remedial kit for V7.3-2. I just finished testing V4 for V7.3-2, and it should be available within the next week or so. (I'm not sure whether it is available without prior-version support; if not, V3 is fine.) There are lots of performance improvements and bugfixes over the original V7.3-2 release.&lt;BR /&gt;&lt;BR /&gt;On most systems (particularly large-memory systems like these), XFC works fine out of the box and you shouldn't have to worry about tuning it.&lt;BR /&gt;&lt;BR /&gt;In particular, the memory-trimming code has been vastly improved (actually rewritten). A recent fix was to eliminate thrashing when memory reclamation is happening. We also do a better job of detecting insufficient memory at boot time (here I mean 32MB or so). No matter what, XFC needs memory to work and allocates about 4MB permanently at boot time, and even if constrained it may allocate more to prevent hanging because of deadlock (I think that I have all those fixed).&lt;BR /&gt;&lt;BR /&gt;Low memory is not at all a problem on these two systems.&lt;BR /&gt;&lt;BR /&gt;In general, the hit rates on these two systems look reasonable. It is interesting that node S1A01 has over 80% writes. This is very unusual - I'm a little curious about the type of load here. The read cache actually helps write performance, since the read I/Os aren't competing for bandwidth.&lt;BR /&gt;&lt;BR /&gt;Low IO hit rates (e.g. &amp;lt; 30%) are not necessarily a sign of poor cache performance. We have seen some systems where the IO hit rate was low, but the block hit rate was high (30% for the former and over 70% for the latter). The reason was an application doing 3-block reads with almost a zero hit rate; the larger IOs were for the most part being satisfied out of cache. The latest versions of XFC now track this data, both in aggregate and in time series.&lt;BR /&gt;&lt;BR /&gt;Since the cache on S1A01 has not grown to full size, I'm guessing that this system is more memory constrained and XFC is either being trimmed or is not expanding.&lt;BR /&gt;&lt;BR /&gt;Mark Hopkins&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Fri, 23 Mar 2007 19:38:04 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/higher-memory-utilization-after-os-upgrade/m-p/3967076#M82707</guid>
      <dc:creator>Mark Hopkins_5</dc:creator>
      <dc:date>2007-03-23T19:38:04Z</dc:date>
    </item>
  </channel>
</rss>

