Operating System - OpenVMS

SET RMS_DEFAULT best practices

 
Art Wiens
Respected Contributor

SET RMS_DEFAULT best practices

We will have three new two-node clusters (soon!). Each cluster will be either two ES47s with 4 CPUs and 8 GB memory each, or two ES47s with 2 CPUs and 4 GB memory each, i.e. there should be lots of resources available.

What factors should be taken into account when "tuning" the RMS defaults? Or, given the resources above, is there any reason to "play" with them at all? Just max them out?

Cheers,
Art
7 REPLIES
Robert Gezelter
Honored Contributor

Re: SET RMS_DEFAULT best practices

Art,

My standard recommendation is to set a global system level policy, and then override it on a group-by-group basis as needed.

One of the problems is that the process-level RMS parameters are not propagated to sub-processes.

If your application environment is not affected by this, or everything is command-file driven, it is possible to create a standardized convention for invoking the settings. This can be a great benefit for performance, as I noted in my DECUS session on Applications Performance (if interested, please let me know; the slides are apparently among the backlog of sessions that I have not posted to my www site).
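For example (the procedure name here is purely illustrative), every command file could begin with:

$ @SYS$MANAGER:RMS_SETTINGS.COM

where RMS_SETTINGS.COM holds the agreed settings, e.g.:

$ SET RMS_DEFAULT /EXTEND_QUANTITY=2048
$ SET RMS_DEFAULT /SEQUENTIAL /BLOCK_COUNT=64 /BUFFER_COUNT=4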

In particular, I found that extension size, buffering, and blocking can have a major impact, provided that programs do not override the settings within the code (obviously a practice that I discourage).

- Bob Gezelter, http://www.rlgsc.com

Hein van den Heuvel
Honored Contributor

Re: SET RMS_DEFAULT best practices


Hello Art,

The RMS defaults are very conservative.
The only tweaks they have received in 20+ years are a change to the internal default number of indexed file buffers, from 2 to "deepest index + 2", in VMS V5.4, and an increase in the default sequential file buffer size from a puny 16 blocks to a still-small 32 blocks in the V7-ish timeframe.

Here is what I would suggest for SYSTEM level defaults today:
/EXTEND=2048 (or 20000)
/SEQ/BLOC=64/BUF=4
/IND/BUFF=20
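Spelled out in full, that is something like this (a sketch only; /SYSTEM requires CMKRNL privilege, so it belongs in the system startup procedure, and the values are just a starting point):

$ SET RMS_DEFAULT /SYSTEM /EXTEND_QUANTITY=2048
$ SET RMS_DEFAULT /SYSTEM /SEQUENTIAL /BLOCK_COUNT=64 /BUFFER_COUNT=4
$ SET RMS_DEFAULT /SYSTEM /INDEXED /BUFFER_COUNT=20
$ SHOW RMS_DEFAULT   ! verify the process and system values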

Do NOT, ever, set the system defaults to more than 10 buffers for sequential files... I've seen that happen, and it is ridiculous, to put it mildly.

The optimal settings are NOT process, sub-process, user or user-group based. They strictly depend on the IMAGES being run in the process.

Just about the only thing you can do better than system-wide defaults is a minor per-process tweak in (SY)LOGIN.COM.
Give the BATCH jobs more and larger buffers.

But really, for a batch job it behooves the job developer to put SET RMS_DEFAULT commands inline, just before the programs being run.
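Something like this, for example (BIG_REPORT is just an invented image name):

$ ! Bump up sequential buffering just for this step
$ SET RMS_DEFAULT /SEQUENTIAL /BLOCK_COUNT=127 /BUFFER_COUNT=8
$ RUN BIG_REPORT
$ ! ... and drop back to the site defaults afterwards
$ SET RMS_DEFAULT /SEQUENTIAL /BLOCK_COUNT=64 /BUFFER_COUNT=4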

And really, really, really the programs should take control and select the right defaults on a per-file basis in the code, tedious as that may sound.

Only the program knows whether it will read or write, sequentially or randomly, repeatedly or single-shot. All those behaviours define the right buffer choices.

So I encourage programs to override the defaults, but it cannot be done willy-nilly.

A program developed 20 years ago that selected 2 buffers of 32 blocks each may have been optimal back then, but can be holding things back now that more memory and better IO are available.

Only last week I found one where the program put in a 'generous' extent of 500 blocks. Well, the file is millions of blocks these days, so the 500 which used to help now hinders. So it is a fine line between smart and overly smart, and I can appreciate others voting for leaving it out of programs, knowing programs tend to end up frozen or hard to modify.

I could envision a comment in the code for one program saying 'selecting 2 buffers only for this indexed file because a) we know there are global buffers, or b) we know we will only locate (write) one record and never come back'.
Another program might indicate 'selecting 50 buffers because we will be back, and we use exclusive access so global buffers do not count'.

Side-line: Readers... is this a topic that deserves 5 or 15 minutes in a bootcamp session? Please let me know by email. Other RMS "please tell us about this" or "don't bother with that" comments are also welcome.

hth,
Hein van den Heuvel ( at Gmail dot com )
HvdH Performance Consulting

John Gillings
Honored Contributor

Re: SET RMS_DEFAULT best practices

Art,

Tuning and optimization are, by definition, highly workload specific. You can't "optimize" for everything, so be very wary of changing the system-wide default values without a deep understanding of your whole workload.

I certainly agree with Hein about extend quantities. The last lot of disks I bought worked out at about A$0.25 per GB, which makes tiny extents a very false economy.

On the other hand, I'd warn against "maxing out" the RMS defaults. For example, don't set any system-wide non-zero buffer count for sequential disk files. This has very bad consequences for batch jobs. It goes like this... every batch job opens at least two process-permanent sequential disk files, SYS$INPUT and SYS$OUTPUT. Since they're process permanent, all their RMS structures, including buffers, must reside in process-permanent address space, which turns out to be your Process Dynamic Memory area in P1 space (see SHOW PROCESS/MEMORY). Since this is finite and limited (controlled by the SYSGEN parameter CTLPAGES), it's very easy to fill it.

At a surprisingly low number of buffers for sequential disk, new batch jobs simply cannot open their input and/or log files, and therefore cannot start. Since you don't get a log file, it can be tricky to work out why. If you believe a non-zero value is appropriate for your other workload, leave the system default value at 0, and use SYLOGIN to set a process value.
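For example, in SYLOGIN.COM (the values here are only illustrative):

$ IF F$MODE() .EQS. "BATCH" THEN -
      SET RMS_DEFAULT /SEQUENTIAL /BLOCK_COUNT=64 /BUFFER_COUNT=8

By the time SYLOGIN runs, the batch job's SYS$INPUT and log file are already open, so a process value set there should not bloat the process-permanent files.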
A crucible of informative mistakes
Guenther Froehlin
Valued Contributor

Re: SET RMS_DEFAULT best practices

I made a very innocent change on a cluster in the late 80s. I revved up the RMS parameters in one simple LOGIN.COM file. This was the login for the application processes of an online booking system for a charter airline. There was one process per connected travel agent. Within the blink of an eye the systems ran out of page file space.

Turned out there were ca. 100 processes, with ca. 100 files open per process. Any RMS parameter change that added one extra page of virtual memory per open file added another load of ca. 10,000 blocks (100 x 100 pages) to the page files (a bit less because of global buffers). Once the processes extended their working sets and real paging started, they went into modified-page-writer waits, and finally the page files were overcommitted.

And my changes added more than just one page per file open...

So be careful, there are side effects.

/Guenther
Robert Gezelter
Honored Contributor

Re: SET RMS_DEFAULT best practices

Guenther,

Yes. I am sure that John, Hein, and I, whatever our respective opinions on useful settings in a variety of areas, would all recommend care in calculating the side effects along a variety of dimensions, including resource consumption.

RMS global buffers can also have a large effect on the resource consumption (or lack thereof).

- Bob Gezelter, http://www.rlgsc.com
John Gillings
Honored Contributor

Re: SET RMS_DEFAULT best practices

re: Robert.... CROSS PURPOSES ALERT!!!

>RMS global buffers can also have a large
>effect on the resource consumption (or lack thereof).

Robert, the question was not about RMS GLOBAL BUFFERS, it was about SET RMS_DEFAULT. See $ SHOW RMS_DEFAULT. This has nothing to do with RMS global buffers; it controls local buffers, extend quantities and network block counts, potentially across the whole system.

I've quoted Hein many times - "There is only one WRONG number of global buffers on a file - 0!". Yes, Global buffers should almost always be enabled, and will rarely have negative consequences, mostly because you're applying them file by file. It's fairly easy to see the extent of influence.
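For instance, to turn them on for a particular file (the file name is invented):

$ SET FILE /GLOBAL_BUFFER=500 CUSTOMERS.IDX

DIRECTORY/FULL will then show the global buffer count on the file.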

RMS_DEFAULTS are a somewhat different beast and need to be treated VERY carefully, because they can affect ALL files across the system. As Guenther's example demonstrates, it can be like "butterfly wings".
A crucible of informative mistakes
Robert Gezelter
Honored Contributor

Re: SET RMS_DEFAULT best practices

John,

My comment about global buffers was meant in the sense that they can drive resource usage in the other direction.

It was not meant as more than an aside to the original topic.

- Bob Gezelter, http://www.rlgsc.com