
Wim Van den Wyngaert
Honored Contributor

Working sets size

I have made a procedure that calculates the sum of all working sets. I ran it on one of my systems, hoping to find the number of pages wasted because of wsdefault settings that are too high.

1) Why is the sum of all working sets 2 times the real memory used (even after eliminating global memory)?
2) I find a certain number of pages that are assigned to working sets but not used. Are they wasted, or simply not really assigned?
3) Does someone have a script that makes a real map of physical memory with indications of the usage?
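For readers without the attachment, the kind of sum the procedure computes might look like this sketch (Python standing in for the DCL procedure; the per-process numbers are invented, though on VMS they would come from F$GETJPI items such as WSSIZE and PPGCNT):

```python
# Invented per-process numbers; on VMS these would come from
# F$GETJPI(pid, "WSSIZE") and F$GETJPI(pid, "PPGCNT").
procs = [
    {"wssize": 20000, "ppgcnt": 9000},   # limit vs. pages actually held
    {"wssize": 20000, "ppgcnt": 4000},
    {"wssize": 10000, "ppgcnt": 8000},
]

sum_limits = sum(p["wssize"] for p in procs)   # what a script like this adds up
sum_actual = sum(p["ppgcnt"] for p in procs)   # private pages really in use

print(sum_limits)  # 50000 -- can easily exceed physical memory
print(sum_actual)  # 21000 -- the part that is really occupied
```

The gap between the two sums is the crux of the question being asked here.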

Wim
16 REPLIES
Wim Van den Wyngaert
Honored Contributor

Re: Working sets size

Note that the script is in the enclosure ^^^
Wim
Jan van den Ende
Honored Contributor

Re: Working sets size

Wim,

1) The WorkingSet is just the available amount of pages that _MIGHT_ be pointed to.
This includes:
-any shared page (system code, shared image code), which is counted once for each process that points to it
-any pages in pagefile(s), so not in physical memory
-any allocated pointers that are not (yet) in use.

2) What exactly do you mean? Pages assigned to working sets but not used?
_IF_ there is a pointer to a page, it is at least available for use. What do you mean by 'not used'? Read-only use is also use, and I tend to say that even being pointed to should be considered being used (well, maybe except for being pointed to by the free list).

3) Considering various caching aspects, and even the 'stickiness' of pages already given to the free list but not yet reused, that looks like a really complex task!!

-- I readily give my view for any better!! --

Jan
Don't rust yours pelled jacker to fine doll missed aches.
Wim Van den Wyngaert
Honored Contributor

Re: Working sets size

Jan

1) WorkingSet is just the available amount of pages that _MIGHT_ be pointed to.
*** WSSIZE returns the real working set size, not the assigned
*** working set size, which normally is too big
This includes:
-any shared page (system code, shared image code), which is counted once for each process that points to it
-any pages in pagefile(s), so not in physical mem
*** in that case the whole pagefile should be in memory!!!
-any allocated pointers that are not (yet) in use.

2) what exactly do you mean? Pages assigned to workingsets but not used?
_IF_ there is a pointer to a page it is at least available for use. What do you mean by 'not used'? Read-only use is also use, and I tend to say that even being pointed to should be considered being used (well, maybe except for being pointed to by the free list).
*** the process receives wsdefault as working set size but uses only
*** half of it. E.g. when wsdef=20000

3) considering various caching aspects, and even the 'stickiness' of pages already given to the free list but not-yet reused, that looks like a really complex task!!
*** I agree but I would be very happy with it.

-- I readily give my view for any better!! --

Wim
Wim Van den Wyngaert
Honored Contributor

Re: Working sets size

Because of the low response, let me rephrase.

Is memory of the working set that is not used lost?
E.g. wsdefault is 10000 and 2000 is used. Is 8000 lost?

Wim
David B Sneddon
Honored Contributor

Re: Working sets size

Wim,

May I suggest the "OpenVMS Performance Management"
manual... chapter 3 in particular deals with
memory management and will likely answer your
questions and address any concerns you have about
memory being "wasted".

Dave
Hein van den Heuvel
Honored Contributor

Re: Working sets size

2) As mentioned, wssize is just how much you could get. This is the size of your bag, which limits how many marbles you can play with at most. Now if you do not fill the bag, you did not lose them; you just did not gain them yet. Nothing is lost. That bag is just not filled.

1A) Now as you add up the sizes of all the bags of all the players, you may well find that this is 2x larger than the total number of marbles in the game. This is fine as long as you do not expect all players to fill their bags at the same time.

1B) It gets tricky with global valid pages (shared libraries, images, global sections). There a single marble lives in multiple bags. I cannot find a nice analogy for that. Perhaps a team sharing marbles?

Anyway, you are on the right track as long as you focus on "ppgcnt" and "gpgcnt". The ppgcnt is the easy one. Those pages are used by that process. Period. There is not much smart you can do about gpgcnt, though. Processes a, b, and c might each have 1000 global valid pages, but there is no telling (not easily) whether those are in fact the same 1000 pages, or in fact 3000 _potentially_ shared pages that are not actually shared. All you know is 'no less than 1000, no more than 3000'.
I think you must just ignore them when trying to calculate used memory. To count the shared memory you'll probably need to use information from INSTALL LIST/GLOBAL.
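That 'no less than, no more than' bound can be sketched numerically (Python standing in for DCL; the per-process counts below are invented, though on VMS they would come from the F$GETJPI PPGCNT and GPGCNT items):

```python
# Hypothetical per-process page counts, as F$GETJPI would report them
# via the PPGCNT and GPGCNT items (values here are made up).
processes = {"A": {"ppgcnt": 4000, "gpgcnt": 1000},
             "B": {"ppgcnt": 2500, "gpgcnt": 1000},
             "C": {"ppgcnt": 3000, "gpgcnt": 1000}}

# Private pages can simply be summed: each ppgcnt page belongs to
# exactly one process.
private_total = sum(p["ppgcnt"] for p in processes.values())

# Global pages cannot: the same physical page may sit in several
# working sets. Without walking the global page tables, all we can
# say is "no less than the largest gpgcnt, no more than the sum".
shared_lower = max(p["gpgcnt"] for p in processes.values())
shared_upper = sum(p["gpgcnt"] for p in processes.values())

print(private_total)   # 9500
print(shared_lower)    # 1000  (all three could share the same pages)
print(shared_upper)    # 3000  (or none of them could)
```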

In your script you are creating two bogus variables (IMHO!): 'tot' and 'totg'.
Well, they are not bogus. They represent a maximum (over)commitment level. But they are not the count of real things.

3) Easy! Just trust VMS! Used pages = total physical pages minus free pages. How could it not be? Within the used pages you can then differentiate between the several known groups, like:
- the sum of all ppgcnt over all processes
- the modified page list size
- the system pages.
Shared pages are probably what is left:
globc = phy - free - procc - mod - system
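As a quick numeric sketch of that accounting identity (all page counts below are invented, standing in for the figures SHOW MEMORY and a per-process ppgcnt sum would give):

```python
# Hein's accounting identity, with made-up page counts.
phy = 524288    # total physical pages (e.g. from SHOW MEMORY)
free = 100000   # free page list
procc = 300000  # sum of ppgcnt over all processes
mod = 20000     # modified page list
system = 80000  # pages owned by the system itself

# Whatever is left over is (roughly) the globally shared memory:
globc = phy - free - procc - mod - system
print(globc)  # 24288
```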

hth,
Hein.
John Gillings
Honored Contributor

Re: Working sets size

Wim,

I'm not sure what you're trying to achieve here. Doesn't SHOW MEMORY give you enough information? As Hein says, the unused part of your working set is not wasted, it's just unused. If you're not experiencing page thrashing, there's no benefit in decreasing working set sizes. It's much better to have some space to expand into than to have to expand working sets by paging.

Another potential source of errors in your sum is changes in processes as you scan across the system. On a system with more than a few hundred processes, you're almost guaranteed that there will be significant differences in working sets between start and end of the scan, so any results are likely to be meaningless. If the system is memory constrained, the scan itself WILL affect the results even more, especially if touching a process causes it to inswap.

I'm not aware of anything that maps PHYSICAL memory. You can map some of virtual memory for individual processes - while in SHOW PROCESS/CONTINUOUS type V for a page map. Increasing the size of your terminal width and length will increase how much you see. Pages are tagged as "*" (local), "G" (global) and "L" (locked).
(I have a modified version of SHOW PROCESS/CONTINUOUS that can scroll and zoom around virtual memory, but it only knows about VAX working set structures and I've never found the time to port it to Alpha - there's also the non-trivial issue of how to deal with the vastly increased VM space).

If you want to play, you could scan the PFN database yourself, but it's a lot of work (and very dangerous!). Remember that it's a moving target, so if you want an "accurate" result you'll need to lock the database, which is unlikely to be good for performance! Is it worthwhile? That depends on what you're really trying to achieve.
A crucible of informative mistakes
Wim Van den Wyngaert
Honored Contributor

Re: Working sets size

Conclusion :

Q> Is memory of the working set that is not used lost? E.g. wsdefault is 10000 and 2000 is used. Is 8000 lost?

A> It is free memory, but if all processes claim their unused memory, some working set trimming will be done.
Wim
John Gillings
Honored Contributor

Re: Working sets size

Wim,
> Is 8000 lost ?
>A>It is free memory but if all processes
>claim their unused memory some working set
>trimming will be done.

This conclusion is NOT correct. This isn't "free" or "unused" memory. It isn't anything! It's just a number. The "excess" doesn't correspond to any allocation, it represents the number of pages you may add to your working set without having to increase the working set size.

The working set has 3 "limits": WSDEFAULT, WSQUOTA and WSEXTENT. The current number of pages in the working set will be somewhere in the range WSDEFAULT to WSEXTENT. Exactly where influences what happens if you attempt to access more pages, and how the system recovers pages if memory gets tight.
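A rough sketch of how those three limits partition the possible working set sizes (Python; the limit values are invented, and the behaviour descriptions are a simplification of the usual rules rather than output of any VMS utility):

```python
# Illustrative values only; real limits come from the UAF / SYSGEN.
WSDEFAULT, WSQUOTA, WSEXTENT = 2000, 10000, 30000

def classify(ws_count):
    """Roughly what each range means for growth and trimming."""
    if ws_count <= WSDEFAULT:
        return "below default: the initial working set size at image activation"
    if ws_count <= WSQUOTA:
        return "guaranteed range: growth allowed, trimmed last when memory is tight"
    if ws_count <= WSEXTENT:
        return "borrowed range: grows only if free memory allows, trimmed first"
    return "cannot exceed WSEXTENT"

print(classify(1500))
print(classify(25000))
```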
A crucible of informative mistakes
Rebecca Putman
Frequent Advisor

Re: Working sets size

Wim, you need a copy of Hitchhiker's Guide to VMS by Bruce Ellis. Go to http://www.amazon.com/exec/obidos/tg/detail/-/1878956000/002-7524293-4356860?v=glance&vi=reviews to see/buy a copy. :)
Wim Van den Wyngaert
Honored Contributor

Re: Working sets size

Rebecca: I have an old one: VAX/VMS Internals and Data Structures 5.2. But it helped me check this.

My confusion was based upon the WSSIZE parameter of the working set. It is NOT the working set size but the working set limit.
As John said, it determines the number of pages you can add to the working set without asking to increase your working set. So it is not allocated (I would call it free, but John wouldn't).

Now, since all processes can take these pages without question, and since the sum of all wssize values is 2 times the available memory, what would happen if ALL processes asked for their pages? No working set trimming, because none of them exceeds wsdefault. Outswapping?

Wim
Wim Van den Wyngaert
Honored Contributor

Re: Working sets size

Tested it. The processes get outswapped. The used size of the process working set is reduced to 512 pages.

I also noticed that the extended file cache is not decreased in size to make memory available.

And I also noticed on my AlphaStation that when you create 40 looping processes at priority 3, even priority 10 and above hardly get the CPU.

Wim
Lawrence Czlapinski
Trusted Contributor

Re: Working sets size

The Guide to OpenVMS Performance Management is helpful for starters. My hard copy has a sample WORKSET.COM procedure. I made some modifications to it: I set the terminal width greater than 80 characters (SET TERM/WIDTH=xxx) in the modified version, and I made some of the field widths wider.
1. As previously stated, working set size is the current limit of memory for a process. It means that as long as memory is available, the process can expand its working set total pages (working set count) up to that amount. The total of working set sizes doesn't mean much. As you said, the sum of the working set sizes on your system is 2 times the real memory used (even after eliminating global memory). What's more important is what percentage of memory is being used. I like to have our systems running under 80%, 80% or more of the time. Keep in mind that the higher the percentage used, the more likely you are to experience a lot of page swapping if you get an additional memory load. You might prefer a different percentage.
First, I would recommend running AVAIL_MAN so that you can see what your normal memory usage range is. I prefer to have memory usage under 80% most of the time; that way there is room for memory spikes. Of course, you can do a SHOW MEMORY and calculate the memory usage from it. The drawbacks are that your process is using up memory and you don't get a picture of memory usage over time. You could write a DCL procedure to take multiple samples, but it would still not be as useful as AVAIL_MAN or a program that gives you similar info.
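The kind of sampling described above could be sketched like this (Python; the totals and samples are invented stand-ins for values a DCL loop would collect from SHOW MEMORY over time):

```python
# Made-up system size and periodic "used pages" samples; on VMS a
# DCL loop around SHOW MEMORY (or a F$GETSYI-based procedure) would
# collect the real numbers.
TOTAL_PAGES = 500000
used_samples = [310000, 350000, 390000, 430000, 380000]

percent_used = [100.0 * u / TOTAL_PAGES for u in used_samples]
under_target = sum(1 for p in percent_used if p < 80.0)

print(max(percent_used))               # worst spike seen: 86.0%
print(under_target, "of", len(percent_used), "samples under the 80% target")
```

This gives a crude picture of both the typical range and the spikes, which is the point of watching usage over time rather than taking one snapshot.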
WSDEF and WSQUOTA do make a difference. The process priority also makes a difference. If a process has a working set size above WSQUOTA, it can be trimmed back by the SWAPPER. If the WSDEFs or WSQUOTAs are too high, you can wind up with a lot of swapping. You want to balance the needs of the processes. If you have lots of memory available, you can be more lenient with WSDEFs and WSQUOTAs.
Also DECWindows processes will tend to accumulate more working set size over time.
You want tasks to have a reasonable WSDEF and WSQUOTA. It is beneficial to monitor memory usage, and again AVAIL_MAN is a good tool for monitoring a group of systems over time all at once. MONITOR PROCESSES/TOPFAULT is also useful for getting an idea of which processes are faulting the most. You may want to increase /INTERVAL=x, where x is in seconds; otherwise you may not be able to retain the information. Some processes have sufficient memory but page fault because of data changes. DECwindows processes fit this category. I prefer to have low WSDEFs for interactive users. That way, if they are sitting at the DCL prompt, they aren't using a lot of memory for that process. The DECwindows processes will tend to keep whatever memory they acquire unless they are above WSQUOTA and a higher-priority task needs the memory.
2. Think of the working set size as a list of potential pages. Total pages used = global pages + process pages. Page slots above the total pages used aren't assigned yet, so they aren't using memory pages.
3. No.

Lawrence
Wim Van den Wyngaert
Honored Contributor

Re: Working sets size

Lawrence,

The answer was already given.
One should try to use 100% of memory, not 80%. VMS still stands for Virtual Memory System, so 100% is not a barrier, and databases do perform better when they have more memory.

Wim
Ian Miller.
Honored Contributor

Re: Working sets size

Bear in mind the use of the free and modified lists as caches. There can be a detrimental effect if they are too small.
____________________
Purely Personal Opinion
Lawrence Czlapinski
Trusted Contributor

Re: Working sets size

1. I should have said on VAXes. On an Alpha, you can use most of your memory without thrashing (excessive hard page faults), since the Alphas can handle a much higher DIO rate.
2. I know it's VM.
3. VAXes: If you're using close to 100% of memory on a VAX, you can get a lot of hard page faulting or outswapping, which can actually slow your performance down as processes have to wait for pages to be read from disk. If you're using 100% of memory and the demand for memory increases over time due to added system loads, system performance can fall off a cliff due to hard page faults. Sometimes our memory usage spikes for a while, and that's OK. If your memory usage is always near 100%, the first warning you may get is when your users start complaining about poor performance and everyone starts calling the sysadmin (in my case, me). If there are performance problems, I get called directly 24x7. No thanks!