Operating System - OpenVMS

Increased overhead on RMS Global Buffers?

SOLVED
EdgarZamora_1
Respected Contributor

Increased overhead on RMS Global Buffers?

Please excuse my limited understanding of RMS global buffers. I'm just starting to tinker with it.

Before I put global buffers on a particular indexed file, the MONITOR RMS /FILE= /ITEM=CACH shows "Loc Buf Cache Attempt Rate" (with hit percent around 99%) and shows zero "Glo Buf Cache Attempt Rate". After I added global buffers to the file, I now see global buffer attempt rate (close to 99% hit rate). However, I still see almost the same amount of Local Buffer cache attempt rate (with hit rate of 0%). So now the total attempt rate (local + global) seems double what it was before global buffers was added. What seems suspicious to me is that the local and global attempt rates are numerically very close to each other. I suspect this is due to the nature of the report program running against the file, but I haven't looked at the code yet. I was wondering if someone had a ready explanation for this type of behavior.

(attached is monitor rms output)

OVMS Alpha 8.3 with fairly recent patches.
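
For reference, the commands involved look something like this (the file spec is a hypothetical stand-in, and /ITEM=CACH above is the abbreviated form of /ITEM=CACHING):

```
$ ! Show the current global buffer count ("Global buffer count:" in the output)
$ DIRECTORY/FULL DISK$DATA:[APP]MYFILE.IDX
$ ! Enable 1000 global buffers on the file
$ SET FILE/GLOBAL_BUFFERS=1000 DISK$DATA:[APP]MYFILE.IDX
$ ! Watch local vs. global cache attempt rates and hit percentages
$ MONITOR RMS/FILE=DISK$DATA:[APP]MYFILE.IDX/ITEM=CACHING
```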
12 REPLIES
EdgarZamora_1
Respected Contributor

Re: Increased overhead on RMS Global Buffers?

Attached is the same file on a larger scale.
Volker Halle
Honored Contributor
Solution

Re: Increased overhead on RMS Global Buffers?

Edgar,

the following explanation may answer your question:

If only Global buffers are set, the Local Cache Hit Percent will be zero because VMS looks in the Local Buffers before looking in the Global buffers. If the requested data is not in the Local Buffers, the Global buffers are searched for the data. If the data is not in the Global buffers, VMS gets the data from disk. VMS then puts the data in a Global buffer since Global buffers were the last place VMS checked for the data.

Found at:

http://www.ttinet.com/tti/SECRETS_FILE_IO.HTML

Volker.
Volker Halle
Honored Contributor

Re: Increased overhead on RMS Global Buffers?

Edgar,

from looking at your first set of MONITOR RMS data, I'd say you've saved 50% of physical IOs. You get about the same 'attempt rate' with half the amount of read-I/Os.

Volker.
EdgarZamora_1
Respected Contributor

Re: Increased overhead on RMS Global Buffers?

Volker... I found this in Guide to File Applications, section 7... it seems to indicate that global buffers are searched first before local buffers, which seems contrary to what that other link says...

"Even if global buffers are used, a minimal number of local buffers should be
requested, because, under certain circumstances, RMS may need to use local
buffers. When attempting to access a record, RMS looks first in the global buffer
cache for the record before looking in the local buffers; if the record is still not
found, an I/O operation occurs."

I will look into using local buffers, too. Thanks.
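
For example, a few process-default local buffers can be requested without code changes, along these lines (the count is illustrative, not a recommendation):

```
$ ! Default to 8 local buffers for indexed files opened by this process
$ SET RMS_DEFAULT/INDEXED/BUFFER_COUNT=8
$ SHOW RMS_DEFAULT
```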
Jon Pinkley
Honored Contributor

Re: Increased overhead on RMS Global Buffers?

I think the info in the Touch Technologies, Inc. document, http://www.ttinet.com/tti/SECRETS_FILE_IO.HTML is pretty stale, since it has this text: Revised 21-Oct-1995.

There have been a lot of modifications to Global Buffers code since then, and Edgar did say he was using 8.3 and fairly recent patches.

Hein will probably respond, and he should know.

At any rate, if the file is being shared for update, you are almost certainly better off with global buffers.

Have you done some tests of elapsed time when there are multiple writers to the same file?

Jon
it depends
Jan van den Ende
Honored Contributor

Re: Increased overhead on RMS Global Buffers?

Edgar,

Jon wrote
>>>
At any rate, if the file is being shared for update, you are almost certainly better off with global buffers.
<<<
to qualify "almost certainly":
On 8.3 (with XFC enabled) GLOBAL buffers start being advantageous AS SOON AS the file is accessed SIMULTANEOUSLY with at least one of those accesses having UPDATE intent.
(and if you think about it, that is exactly what logic would predict)
hth

Proost.

Have one on me.

jpe
Don't rust yours pelled jacker to fine doll missed aches.
Hein van den Heuvel
Honored Contributor

Re: Increased overhead on RMS Global Buffers?

>> Please excuse my limited understanding of RMS global buffers. I'm just starting to tinker with it.

O ye of little faith! :-)

Just start using them and make your bosses happy.
Be sure to have a before-and-after performance picture (T4... notably for LOCK and KERNEL MODE activity, and my HOT_FILES tool, which you could have picked up from BootCamp or HP Tech Forum proceedings).

>> RMS /FILE= /ITEM=CACH shows "Loc Buf Cache Attempt Rate" (with hit percent around 99%)

So you can NOT expect global buffers to significantly reduce IO rates.
It is hard to improve on 100%. :-)
Still, going from 98% to 99% can mean 1/2 the IOs.
But most importantly, you can expect to have obtained a significant lock traffic reduction, which is a scares (sp?), serializing, resource.
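
A quick back-of-the-envelope in DCL, assuming a hypothetical 1000 lookup attempts:

```
$ attempts = 1000
$ misses_98 = attempts - (attempts * 98 / 100)  ! 20 physical reads at 98% hit
$ misses_99 = attempts - (attempts * 99 / 100)  ! 10 physical reads at 99% hit
$ WRITE SYS$OUTPUT "98%: ", misses_98, "   99%: ", misses_99
```

Half the misses, hence half the IOs.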

>> I still see almost the same amount of Local Buffer cache attempt rate

Right. It TRIES the local buffer first, then the global, as others explained.
It will use the first copy found, but load into the global buffer UNLESS the target is accessed with write intent.
I explain this in my RMS sessions, which are floating on the web.

>> due to the nature of the report program

It almost sounds (99% likely) like the program is re-reading a lot of data, in which case the better fix is to teach it to remember the records. But it could also be reading record after record, many from the same bucket/buffer, doing only 1 IO every 100+ records for a fresh bucket full of records.

Those are impressive rates though, as per the attachment. That should make a dent in CPU usage. Try my RMS_STATS program (OpenVMS Freeware V6, or a more recent version from me). It will show more counters, and in a single picture.

To reduce a tiny bit of CPU and memory access overhead, you may want to reduce the number of local buffers when global buffers are present (FAB$W_GBC), but that typically needs program changes.

You failed to indicate HOW MANY global buffers were applied. With hundreds of IOs/sec you may want to go aggressive. Think THOUSANDS, not hundreds.
Mind you... when RMS counts IOs, those are IO requests which are not unlikely to have been resolved by the XFC. So the actual IO rate to the disk might not have changed.
I pretty much only expect LOCK and KERNEL mode reduction, which ought to be significant at 65K/second. So as I opened, check T4, hotfile or rms_stats detail info to see the real gain.
The end user impact may have been surprisingly low, but you may have helped the system as a whole a lot.

Hope this helps,

Hein van den Heuvel ( at gmail dot com )
HvdH Performance Consulting
Guenther Froehlin
Valued Contributor

Re: Increased overhead on RMS Global Buffers?

Using global buffers makes sense if a) the file is shared between processes on one node in a cluster and b) the load is more reads than writes. Otherwise local buffers do just as well, unless a lot more buffers are needed. The maximum number of local buffers is very limited. In some cases global buffers allow caching the whole index tree and then some.

Even with global buffers, a buffer to be modified is copied to the process's local buffer, and when requested again it passes through the disk back into the global buffer. That's why writes don't make a difference between local and global buffers.

Ah, what was the question?

/Guenther
Guenther Froehlin
Valued Contributor

Re: Increased overhead on RMS Global Buffers?

Oops I got this wrong. Only some operations like bucket splits have to happen in local buffers if global buffers are enabled.

/Guenther
EdgarZamora_1
Respected Contributor

Re: Increased overhead on RMS Global Buffers?

Thank you ALL for the great responses, most of which answered my question.

In response to Jon (and Jan),
>>>
At any rate, if the file is being shared for update, you are almost certainly better off with global buffers.

Have you done some tests of elapsed time when there are multiple writers to the same file?

to qualify "almost certainly":
On 8.3 (with XFC enabled) GLOBAL buffers start being advantageous AS SOON AS the file is accessed SIMULTANEOUSLY with at least one of those accesses having UPDATE intent.
(and if you think about it, that is exactly what logic would predict)
<<<

It seems that running with global buffers increases my elapsed time a little bit (single job, and two jobs concurrently, I haven't tested with more). I expected some overhead with the runtime/elapsed time so I'm not concerned about this. I expect the savings to be in system resources. My test jobs are report jobs though (readers, not writers) so I really didn't answer Jon's specific question. In production there is a constantly running job (24x7) that updates the file (as transactions are entered by users), and then a bunch of users run the report programs (during the day) against the file.

In response to Hein..
>>> But most importantly, you can expect to have obtained a significant lock traffic reduction which is a scares (sp?), serializing, resource. <<<

(scarce, btw) I already realized HUGE lock traffic reduction through implementing NQL (thanks greatly for your previous help on that one!)

>>>
You failed to indicate HOW MANY global buffers were applied. With hundreds of IOs/sec you may want to go aggressive. Think THOUSANDS, not hundreds.
Mind you... when RMS counts IOs, those are IO requests which are not unlikely to have been resolved by the XFC. So the actual IO rate to the disk might not have changed.
I pretty much only expect LOCK and KERNEL mode reduction, which ought to be significant at 65K/second. So as I opened, check T4, hotfile or rms_stats detail info to see the real gain.
The end user impact may have been surprisingly low, but you may have helped the system as a whole a lot.
<<<

I started with Global Buffers of 128 because I wasn't sure on how it would affect my GBLSECTIONS and GBLPAGES. Now I see that I can increase it A LOT MORE. If I'm not mistaken, user working set sizes could be affected, right?... I better make sure I increase them if needed.
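
As a rough sizing check (my understanding, not gospel): each file with global buffers takes one global section, and roughly buffer-count times bucket size in global pages, so before going to thousands it's worth checking the headroom:

```
$ ! Free global pages and global sections currently available
$ WRITE SYS$OUTPUT "Free GBLPAGES: ", F$GETSYI("FREE_GBLPAGES")
$ WRITE SYS$OUTPUT "Free GBLSECTS: ", F$GETSYI("FREE_GBLSECTS")
```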

Also, I am using a couple of your tools already. Thank you so much for those!



Jan van den Ende
Honored Contributor

Re: Increased overhead on RMS Global Buffers?

Edgar,

>>>
Have you done some tests of elapsed time when there are multiple writers to the same file?
<<<

Well, in one (maybe extreme) case, an application with multiple keyed RMS files had a batch job that was triggered about thrice a week. It ran 45 - 60 minutes (during multiple interactive use).
The main file had 11 (eleven) mostly segmented indexes.
I enabled 1000 global buffers for it, and I got "complaints" from the application manager.
The job "failed", because it ran only 3 minutes. But all the work was done...

As I said, maybe an extreme example, but it DOES show the potential.

Proost.

Have one on me.

jpe
Don't rust yours pelled jacker to fine doll missed aches.
Hein van den Heuvel
Honored Contributor

Re: Increased overhead on RMS Global Buffers?

>> (scarce, btw)

Right. I _knew_ that! :-)

>> I already realized HUGE lock traffic reduction through implementing NQL (thanks greatly for your previous help on that one!)

Well, global buffers will likely reduce lock activity by that same improvement once over.
If you liked it the first time... do it once more.
The reason for this is that even though NQL truly stops record locking, RMS still does bucket locks: get bucket lock, read or 'find' bucket in cache, find record in bucket, release bucket.
With global buffers, and only with global buffers, will RMS cache the bucket lock in CR (Concurrent Read) mode and only grab a new lock when the request moves to a new (next) bucket. That's one ENQ and one DEQ saved per 'second' GET in a bucket, as well as that final bucket lock to find out there are no more records.

Global buffers and write access??
Well, you need to find where to write, for each of the keys. Typically that is 2 or 3 index bucket reads per key. Those can potentially all be resolved in CR mode from the global buffer cache.

>> Also, I am using a couple of your tools already. Thank you so much for those!

So you owe me a beer or two! For now just send me an Email and I may have an update or two.

Cheers,
Hein.