Operating System - OpenVMS
Increased overhead on RMS Global Buffers?

 
EdgarZamora_1
Respected Contributor

Re: Increased overhead on RMS Global Buffers?

Thank you ALL for the great responses, most of which answered my question.

In response to Jon (and Jan),
>>>
At any rate, if the file is being shared for update, you are almost certainly better off with global buffers.

Have you done some tests of elapsed time when there are multiple writers to the same file?

to specify "almost certainly":
On 8.3 (with XFC enabled) GLOBAL buffers start being advantageous AS SOON AS the file is accessed SIMULTANEOUSLY with at least one of those accesses having UPDATE intent.
(and if you think about it, that is exactly what logic would predict)
<<<

It seems that running with global buffers increases my elapsed time a little bit (single job, and two jobs concurrently; I haven't tested with more). I expected some overhead in runtime/elapsed time, so I'm not concerned about this; I expect the savings to be in system resources. My test jobs are report jobs, though (readers, not writers), so I really didn't answer Jon's specific question. In production there is a constantly running job (24x7) that updates the file (as transactions are entered by users), and then a bunch of users run the report programs (during the day) against the file.
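For reference, one way to compare "system resources" rather than wall-clock time is to snapshot the process accounting counters around the report run. A rough sketch (the procedure name is just a placeholder):

$! Snapshot direct I/O, buffered I/O and CPU ticks before and after the run
$ dirio1 = F$GETJPI("","DIRIO")
$ bufio1 = F$GETJPI("","BUFIO")
$ cpu1   = F$GETJPI("","CPUTIM")                 ! CPU time in 10-ms ticks
$ @RUN_REPORT.COM                                ! placeholder for the report job
$ WRITE SYS$OUTPUT "Direct I/Os:   ", F$GETJPI("","DIRIO")  - dirio1
$ WRITE SYS$OUTPUT "Buffered I/Os: ", F$GETJPI("","BUFIO")  - bufio1
$ WRITE SYS$OUTPUT "CPU ticks:     ", F$GETJPI("","CPUTIM") - cpu1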

In response to Hein..
>>> But most importantly, you can expect to have obtained a significant lock traffic reduction which is a scares (sp?), serializing, resource. <<<

(scarce, btw) I already realized HUGE lock traffic reduction through implementing NQL (thanks greatly for your previous help on that one!)

>>>
You failed to indicate HOW MANY global buffers were applied. With hundreds of IOs/sec you may want to go aggressive. Think THOUSANDS, not hundreds.
Mind you... when RMS counts IOs, those are IO requests which may well have been resolved by the XFC. So the actual IO rate to the disk might not have changed.
I pretty much only expect LOCK and KERNEL mode reduction, which ought to be significant at 65K/second. So, as I opened with, check T4, hotfile or rms_stats detail info to see the real gain.
The end-user impact may have been surprisingly low, but you may have helped the system as a whole a lot.
<<<

I started with Global Buffers of 128 because I wasn't sure how it would affect my GBLSECTIONS and GBLPAGES. Now I see that I can increase it A LOT MORE. If I'm not mistaken, user working set sizes could be affected, right?... I'd better make sure I increase them if needed.
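In case it helps anyone else reading this thread, the per-file global buffer count and the free global pages/sections can be checked and changed from DCL along these lines (the file specification is just a placeholder):

$! Check headroom first - the global buffers live in a global section
$ WRITE SYS$OUTPUT "Free GBLPAGES: ", F$GETSYI("FREE_GBLPAGES")
$ WRITE SYS$OUTPUT "Free GBLSECTS: ", F$GETSYI("FREE_GBLSECTS")
$! Raise the per-file count; it should take effect when the file is next opened
$ SET FILE /GLOBAL_BUFFERS=2000 DISK$DATA:[APP]TRANSACTIONS.IDX
$! DIRECTORY/FULL shows the resulting "Global buffer count:"
$ DIRECTORY /FULL DISK$DATA:[APP]TRANSACTIONS.IDX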

Also, I am using a couple of your tools already. Thank you so much for those!



Jan van den Ende
Honored Contributor

Re: Increased overhead on RMS Global Buffers?

Edgar,

>>>
Have you done some tests of elapsed time when there are multiple writers to the same file?
<<<

Well, in one (maybe extreme) case, an application with multiple keyed RMS files had a batch job that was triggered about three times a week. It ran 45 - 60 minutes (during multi-user interactive use).
The main file had 11 (eleven) mostly segmented indexes.
I enabled 1000 global buffers for it, and I got "complaints" from the application manager.
The job "failed", because it ran only 3 minutes. But all the work was done...

As I said, maybe an extreme example, but it DOES show the potential.

Proost.

Have one on me.

jpe
Don't rust yours pelled jacker to fine doll missed aches.
Hein van den Heuvel
Honored Contributor

Re: Increased overhead on RMS Global Buffers?

>> (scarce, btw)

Right. I _knew_ that! :-)

>> I already realized HUGE lock traffic reduction through implementing NQL (thanks greatly for your previous help on that one!)

Well, global buffers will likely reduce lock activity by that same improvement all over again.
If you liked it the first time... do it once more.
The reason for this is that even though NQL truly stops record locks, RMS still does bucket locks: get the bucket lock, read or 'find' the bucket in cache, find the record in the bucket, release the bucket lock.
With global buffers, and only with global buffers, RMS will cache the bucket lock in CR (Concurrent Read) mode and grab a new lock only when the request moves on to a new (next) bucket. That's one ENQ and one DEQ saved per 'second' GET in a bucket, as well as that final bucket lock to find out there are no more records.

Global buffers and write access??
Well, you need to find where to write for each of the keys. Typically that is 2 or 3 index bucket reads per key. Those can potentially all be resolved in CR mode from the global buffer cache.
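If you want to see that bucket-lock saving directly rather than infer it, per-file RMS statistics plus MONITOR can show the ENQ/DEQ rates with and without global buffers. Roughly (file spec is a placeholder, and double-check the qualifiers on your version):

$! Enable per-file RMS statistics, then watch the locking rates
$ SET FILE /STATISTICS DISK$DATA:[APP]TRANSACTIONS.IDX
$ MONITOR RMS /FILE=DISK$DATA:[APP]TRANSACTIONS.IDX /ITEM=LOCKING
$! Run the same workload with global buffers off and on, and compare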

>> Also, I am using a couple of your tools already. Thank you so much for those!

So you owe me a beer or two! For now, just send me an email and I may have an update or two.

Cheers,
Hein.