Operating System - OpenVMS

Re: Higher memory utilization after OS upgrade

 
roose
Regular Advisor


Volker,

Thanks for your reply. Based on that command's output, only XFC is in use.

From that same command, I noticed that the read hit rate of our XFC is only around 74% on one node and 86% on the other. Does this mean that the caching is not effective on our nodes? If it is not effective, what are the ways we can make it more effective?
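For readers following along: the hit-rate figures quoted in this thread come from the XFC statistics display. Assuming the standard DCL qualifiers on OpenVMS V7.3 and later, a typical way to look at them is:

```
$ ! Summary of the XFC: current size, read/write I/O counts, hit rates
$ SHOW MEMORY/CACHE
$ ! Extended display with more detailed statistics
$ SHOW MEMORY/CACHE/FULL
```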
Volker Halle
Honored Contributor


Roose,

On both nodes in your cluster, you are saving about 100 read I/Os per second through the use of the XFC cache. What else would your memory be used for, if not for the XFC cache?

You may also have noticed that the I/O load (read/write ratio) is quite different on those 2 systems. XFC is a generic method for caching disk data. Depending on the application, there may be even more specific methods for caching I/O data (RMS global buffers etc.).
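As a sketch of the RMS-level alternative mentioned above: global buffers can be enabled per file with SET FILE. The file name and buffer count below are purely illustrative; the right count depends on the application's access pattern.

```
$ ! Enable 200 RMS global buffers on a shared indexed file
$ ! (file name and count are examples only)
$ SET FILE/GLOBAL_BUFFER=200 DISK$DATA:[APP]CUSTOMER.IDX
$ ! Verify: DIRECTORY/FULL shows the global buffer count
$ DIRECTORY/FULL DISK$DATA:[APP]CUSTOMER.IDX
```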

Volker.
Wim Van den Wyngaert
Honored Contributor


In contrast with what is being said here, I found in 2004 that XFC memory is NOT released when applications need it. I don't have the data anymore, so perhaps it's better to test it again.

Wim
Martin Hughes
Regular Advisor


You can use PSPA to find out which processes are generating the hard page faults. This may simply be the result of some processes having insufficient working set quotas.

Have you run AUTOGEN since the run you did immediately after the upgrade? I.e., has it been run on the new configuration after a period of load?
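A hedged sketch of an AUTOGEN feedback run after the system has seen representative load, using the standard SYS$UPDATE procedure (phase names as documented; review the report before applying anything):

```
$ ! Save current parameters and generate new ones from feedback data,
$ ! stopping before anything is changed
$ @SYS$UPDATE:AUTOGEN SAVPARAMS TESTFILES FEEDBACK
$ ! Inspect SYS$SYSTEM:AGEN$PARAMS.REPORT, then apply the new values
$ @SYS$UPDATE:AUTOGEN GENPARAMS SETPARAMS FEEDBACK
```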
For the fashion of Minas Tirith was such that it was built on seven levels, each delved into a hill, and about each was set a wall, and in each wall was a gate. (J.R.R. Tolkien). Quote stolen from VAX/VMS IDSM 5.2
Wim Van den Wyngaert
Honored Contributor


Tested it on 7.3.

Workstation with 256 MB. VCC_MAX_CACHE is -1, so 128 MB is the maximum cache size.
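For context, the -1 setting can be checked with SYSGEN (parameter name as on OpenVMS 7.3; -1 means XFC may grow to at most 50% of physical memory, hence 128 MB on a 256 MB machine):

```
$ MCR SYSGEN
SYSGEN> SHOW VCC_MAX_CACHE   ! -1 = cache limited to half of physical memory
SYSGEN> EXIT
```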

Allocated 200 MB and made it dirty.

Again and again. At the end the pagefile was full, but XFC still had 12 MB allocated (the test started with 77 MB).

Also read this http://forums1.itrc.hp.com/service/forums/questionanswer.do?threadId=533755

BTW: I ran the malloc again for 5 MB and the system hung.

Wim
Jon Pinkley
Honored Contributor


If you use PSPA to look for processes with high page faults, check the number of image activations, since pages from the image file have to be faulted in. In the last version of PSPA I used (prior to it being sold to CA), there was a column IMGCNT that indicated the number of image activations. Even with sufficient working set limits, image activations will cause page faults.
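If PSPA isn't at hand, a rough first pass at finding the faulting processes can be made with the standard utilities (a sketch only; this doesn't give PSPA's per-image breakdown):

```
$ MONITOR PROCESSES/TOPFAULT   ! live display of the top page-faulting processes
$ MONITOR PAGE                 ! system-wide fault rates, incl. hard (page read I/O) faults
$ SHOW SYSTEM                  ! cumulative page-fault count per process
```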
it depends
Jon Pinkley
Honored Contributor


Wim Van den Wyngaert describes problems with XFC in v7.3.

However, there have been fixes since then. See the release notes for 7.3-2.

Wim, have you reproduced your test in 7.3-2?

Jon
it depends
Mark Hopkins_5
Occasional Advisor


I'm the XFC maintainer.

First, get the latest XFC remedial kit for V7.3-2. I just finished testing V4 for V7.3-2 and it should be available within the next week or so. (I'm not sure whether it is available without prior-version support; if not, V3 is fine.) There are lots of performance improvements and bug fixes over the original V7.3-2 release.

On most systems (particularly large-memory systems like these), XFC works fine out of the box and you shouldn't have to worry about tuning it.

In particular, the memory-trimming code has been vastly improved (actually rewritten). A recent fix eliminated thrashing while memory reclamation is happening. We also do a better job of detecting insufficient memory at boot time (here I mean 32 MB or so). No matter what, XFC needs memory to work: it allocates about 4 MB permanently at boot time, and even when constrained it may allocate more to prevent hangs caused by deadlock (I think I have all of those fixed).

Low memory is not at all a problem on these two systems.

In general, the hit rates on these two systems look reasonable. It is interesting that node S1A01 has over 80% writes. This is very unusual, and I'm a little curious about the type of load here. The read cache actually helps write performance, since the read I/Os aren't competing for bandwidth.

Low I/O hit rates (e.g. < 30%) are not necessarily a sign of poor cache performance. We have seen some systems where the I/O hit rate was low but the block hit rate was high (30% for the former and over 70% for the latter). The reason was an application doing 3-block reads with almost a zero hit rate; the larger I/Os were for the most part being satisfied out of cache. The latest versions of XFC now track this data, both in aggregate and as a time series.

Since the cache on S1A01 has not grown to full size, I'm guessing that this system is more memory constrained and XFC is either being trimmed or is not expanding.
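For readers who want the per-cache detail described above: recent XFC versions expose their statistics through an SDA extension. Assuming the XFC SDA commands shipped with V7.3-2, a session looks roughly like this (command names are from the System Analysis Tools documentation for that era and may differ by version):

```
$ ANALYZE/SYSTEM
SDA> XFC SHOW SUMMARY    ! overall cache size and hit-rate statistics
SDA> XFC SHOW VOLUME     ! per-volume caching statistics
SDA> EXIT
```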

Mark Hopkins