<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Increased overhead on RMS Global Buffers? in Operating System - OpenVMS</title>
    <link>https://community.hpe.com/t5/operating-system-openvms/increased-overhead-on-rms-global-buffers/m-p/4265577#M91016</link>
    <description>Oops, I got this wrong. Only some operations, like bucket splits, have to happen in local buffers when global buffers are enabled.&lt;BR /&gt;&lt;BR /&gt;/Guenther</description>
    <pubDate>Tue, 09 Sep 2008 22:19:37 GMT</pubDate>
    <dc:creator>Guenther Froehlin</dc:creator>
    <dc:date>2008-09-09T22:19:37Z</dc:date>
    <item>
      <title>Increased overhead on RMS Global Buffers?</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/increased-overhead-on-rms-global-buffers/m-p/4265568#M91007</link>
      <description>Please excuse my limited understanding of RMS global buffers.  I'm just starting to tinker with it.&lt;BR /&gt;&lt;BR /&gt;Before I put global buffers on a particular indexed file, MONITOR RMS /FILE= /ITEM=CACH showed a "Loc Buf Cache Attempt Rate" (with a hit percent around 99%) and a zero "Glo Buf Cache Attempt Rate".  After I added global buffers to the file, I now see a global buffer attempt rate (close to a 99% hit rate).  However, I still see almost the same local buffer cache attempt rate (with a hit rate of 0%).  So now the total attempt rate (local + global) seems double what it was before global buffers were added.  What seems suspicious to me is that the local and global attempt rates are numerically very close to each other.  I suspect this is due to the nature of the report program running against the file, but I haven't looked at the code yet.  I was wondering if someone had a ready explanation for this type of behavior.&lt;BR /&gt;&lt;BR /&gt;(attached is monitor rms output)&lt;BR /&gt;&lt;BR /&gt;OVMS Alpha 8.3 with fairly recent patches.</description>
      <pubDate>Tue, 09 Sep 2008 12:45:14 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/increased-overhead-on-rms-global-buffers/m-p/4265568#M91007</guid>
      <dc:creator>EdgarZamora_1</dc:creator>
      <dc:date>2008-09-09T12:45:14Z</dc:date>
    </item>
    <item>
      <title>Re: Increased overhead on RMS Global Buffers?</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/increased-overhead-on-rms-global-buffers/m-p/4265569#M91008</link>
      <description>Attached is the same file on a larger scale.&lt;BR /&gt;</description>
      <pubDate>Tue, 09 Sep 2008 13:08:43 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/increased-overhead-on-rms-global-buffers/m-p/4265569#M91008</guid>
      <dc:creator>EdgarZamora_1</dc:creator>
      <dc:date>2008-09-09T13:08:43Z</dc:date>
    </item>
    <item>
      <title>Re: Increased overhead on RMS Global Buffers?</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/increased-overhead-on-rms-global-buffers/m-p/4265570#M91009</link>
      <description>Edgar,&lt;BR /&gt;&lt;BR /&gt;the following explanation may answer your question:&lt;BR /&gt;&lt;BR /&gt;If only Global buffers are set, the Local Cache Hit Percent will be zero because VMS looks in the Local Buffers before looking in the Global buffers. If the requested data is not in the Local Buffers, the Global buffers are searched for the data. If the data is not in the Global buffers, VMS gets the data from disk. VMS then puts the data in a Global buffer since Global buffers were the last place VMS checked for the data. &lt;BR /&gt;&lt;BR /&gt;Found at:&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://www.ttinet.com/tti/SECRETS_FILE_IO.HTML" target="_blank"&gt;http://www.ttinet.com/tti/SECRETS_FILE_IO.HTML&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;Volker.</description>
      <pubDate>Tue, 09 Sep 2008 13:36:52 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/increased-overhead-on-rms-global-buffers/m-p/4265570#M91009</guid>
      <dc:creator>Volker Halle</dc:creator>
      <dc:date>2008-09-09T13:36:52Z</dc:date>
    </item>
    <item>
      <title>Re: Increased overhead on RMS Global Buffers?</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/increased-overhead-on-rms-global-buffers/m-p/4265571#M91010</link>
      <description>Edgar,&lt;BR /&gt;&lt;BR /&gt;from looking at your first set of MONITOR RMS data, I'd say you've saved 50% of the physical IOs. You get about the same 'attempt rate' with half the number of read I/Os.&lt;BR /&gt;&lt;BR /&gt;Volker.</description>
      <pubDate>Tue, 09 Sep 2008 13:43:18 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/increased-overhead-on-rms-global-buffers/m-p/4265571#M91010</guid>
      <dc:creator>Volker Halle</dc:creator>
      <dc:date>2008-09-09T13:43:18Z</dc:date>
    </item>
    <item>
      <title>Re: Increased overhead on RMS Global Buffers?</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/increased-overhead-on-rms-global-buffers/m-p/4265572#M91011</link>
      <description>Volker... I found this in the Guide to File Applications, section 7... it seems to indicate that global buffers are searched before local buffers, which seems contrary to what that other link says...&lt;BR /&gt;&lt;BR /&gt;"Even if global buffers are used, a minimal number of local buffers should be&lt;BR /&gt;requested, because, under certain circumstances, RMS may need to use local&lt;BR /&gt;buffers. When attempting to access a record, RMS looks first in the global buffer&lt;BR /&gt;cache for the record before looking in the local buffers; if the record is still not&lt;BR /&gt;found, an I/O operation occurs."&lt;BR /&gt;&lt;BR /&gt;I will look into using local buffers, too.  Thanks.</description>
      <pubDate>Tue, 09 Sep 2008 13:52:14 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/increased-overhead-on-rms-global-buffers/m-p/4265572#M91011</guid>
      <dc:creator>EdgarZamora_1</dc:creator>
      <dc:date>2008-09-09T13:52:14Z</dc:date>
    </item>
    <item>
      <title>Re: Increased overhead on RMS Global Buffers?</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/increased-overhead-on-rms-global-buffers/m-p/4265573#M91012</link>
      <description>I think the info in the Touch Technologies, Inc. document, &lt;A href="http://www.ttinet.com/tti/SECRETS_FILE_IO.HTML" target="_blank"&gt;http://www.ttinet.com/tti/SECRETS_FILE_IO.HTML&lt;/A&gt; is pretty stale, since it has this text:  Revised 21-Oct-1995.&lt;BR /&gt;&lt;BR /&gt;There have been a lot of modifications to Global Buffers code since then, and Edgar did say he was using 8.3 and fairly recent patches.&lt;BR /&gt;&lt;BR /&gt;Hein will probably respond, and he should know.&lt;BR /&gt;&lt;BR /&gt;At any rate, if the file is being shared for update, you are almost certainly better off with global buffers.&lt;BR /&gt;&lt;BR /&gt;Have you done some tests of elapsed time when there are multiple writers to the same file?&lt;BR /&gt;&lt;BR /&gt;Jon&lt;BR /&gt;</description>
      <pubDate>Tue, 09 Sep 2008 14:30:39 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/increased-overhead-on-rms-global-buffers/m-p/4265573#M91012</guid>
      <dc:creator>Jon Pinkley</dc:creator>
      <dc:date>2008-09-09T14:30:39Z</dc:date>
    </item>
    <item>
      <title>Re: Increased overhead on RMS Global Buffers?</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/increased-overhead-on-rms-global-buffers/m-p/4265574#M91013</link>
      <description>Edgar,&lt;BR /&gt;&lt;BR /&gt;Jon wrote&lt;BR /&gt;&amp;gt;&amp;gt;&amp;gt;&lt;BR /&gt;At any rate, if the file is being shared for update, you are almost certainly better off with global buffers.&lt;BR /&gt;&amp;lt;&amp;lt;&amp;lt;&lt;BR /&gt;To specify "almost certainly":&lt;BR /&gt;On 8.3 (with XFC enabled), GLOBAL buffers start being advantageous AS SOON AS the file is accessed SIMULTANEOUSLY with at least one of those accesses having UPDATE intent.&lt;BR /&gt;(and if you think about it, that is exactly what logic would predict)&lt;BR /&gt;hth&lt;BR /&gt;&lt;BR /&gt;Proost.&lt;BR /&gt;&lt;BR /&gt;Have one on me.&lt;BR /&gt;&lt;BR /&gt;jpe&lt;BR /&gt;</description>
      <pubDate>Tue, 09 Sep 2008 14:43:31 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/increased-overhead-on-rms-global-buffers/m-p/4265574#M91013</guid>
      <dc:creator>Jan van den Ende</dc:creator>
      <dc:date>2008-09-09T14:43:31Z</dc:date>
    </item>
    <item>
      <title>Re: Increased overhead on RMS Global Buffers?</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/increased-overhead-on-rms-global-buffers/m-p/4265575#M91014</link>
      <description>&amp;gt;&amp;gt; Please excuse my limited understanding of RMS global buffers. I'm just starting to tinker with it.&lt;BR /&gt;&lt;BR /&gt;O ye of little faith!  :-)&lt;BR /&gt;&lt;BR /&gt;Just start using them and make your bosses happy.&lt;BR /&gt;Be sure to have a before-and-after performance picture (T4... notably for LOCK and KERNEL MODE activity, and my HOT_FILES tool, which you could have picked up from BootCamp or HP Tech Forum proceedings).&lt;BR /&gt;&lt;BR /&gt;&amp;gt;&amp;gt; RMS /FILE= /ITEM=CACH shows "Loc Buf Cache Attempt Rate" (with hit percent around 99%)&lt;BR /&gt;&lt;BR /&gt;So you can NOT expect global buffers to significantly reduce IO rates.&lt;BR /&gt;It is hard to improve on 100%. :-)&lt;BR /&gt;Still, going from 98% to 99% can mean 1/2 the IOs.&lt;BR /&gt;But most importantly, you can expect to have obtained a significant reduction in lock traffic, which is a scares (sp?), serializing, resource.&lt;BR /&gt;&lt;BR /&gt;&amp;gt;&amp;gt; I still see almost the same amount of Local Buffer cache attempt rate&lt;BR /&gt;&lt;BR /&gt;Right. It TRIES the local buffer first, then the global, as others explained.&lt;BR /&gt;It will use the first one found, but load into the global buffer UNLESS the target is accessed with write intent.&lt;BR /&gt;I explain this in my RMS sessions, which are floating around on the web.&lt;BR /&gt;&lt;BR /&gt;&amp;gt;&amp;gt; due to the nature of the report program&lt;BR /&gt;&lt;BR /&gt;It almost sounds (99%) like the program is re-reading a lot of data, in which case the better help is to teach it to remember the records. But it could also be reading record after record, many from the same bucket/buffer, doing only 1 IO every 100+ records for a fresh bucket full of records.&lt;BR /&gt;&lt;BR /&gt;Those are impressive rates though, as per the attachment. That should make a dent in CPU usage. Try my RMS_STATS program (OpenVMS Freeware V6, or a more recent version from me). It will show more counters, and in a single picture.&lt;BR /&gt;&lt;BR /&gt;To reduce a tiny bit of CPU and memory access overhead, you may want to reduce the number of local buffers when global buffers are present (FAB$W_GBC), but that typically needs program changes.&lt;BR /&gt;&lt;BR /&gt;You failed to indicate HOW MANY global buffers were applied. With hundreds of IOs/sec you may want to go aggressive. Think THOUSANDS, not hundreds.&lt;BR /&gt;Mind you... when RMS counts IOs, those are IO requests which are not unlikely to have been resolved by the XFC. So the actual IO rate to the disk might not have changed.&lt;BR /&gt;I pretty much only expect a LOCK and KERNEL mode reduction, which ought to be significant at 65K/second. So, as I opened with, check the T4, hotfile, or rms_stats detail info to see the real gain.&lt;BR /&gt;The end-user impact may be surprisingly low, but you may have helped the system as a whole a lot.&lt;BR /&gt;&lt;BR /&gt;Hope this helps,&lt;BR /&gt;&lt;BR /&gt;Hein van den Heuvel ( at gmail dot com )&lt;BR /&gt;HvdH Performance Consulting&lt;BR /&gt;</description>
      <pubDate>Tue, 09 Sep 2008 21:10:22 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/increased-overhead-on-rms-global-buffers/m-p/4265575#M91014</guid>
      <dc:creator>Hein van den Heuvel</dc:creator>
      <dc:date>2008-09-09T21:10:22Z</dc:date>
    </item>
    <item>
      <title>Re: Increased overhead on RMS Global Buffers?</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/increased-overhead-on-rms-global-buffers/m-p/4265576#M91015</link>
      <description>Using global buffers makes sense if a) the file is shared between processes on one node in a cluster and b) the load is more reads than writes. Otherwise local buffers do just as well, unless a lot more buffers are needed. The max. number of local buffers is very limited. In some cases global buffers allow caching the whole index tree and then some.&lt;BR /&gt;&lt;BR /&gt;Even with global buffers, a buffer to be modified is copied to the process' local buffer, and when requested again it passes through the disk back into the global buffer. That's why writes don't make a difference between local and global buffers.&lt;BR /&gt;&lt;BR /&gt;Ah, what was the question?&lt;BR /&gt;&lt;BR /&gt;/Guenther</description>
      <pubDate>Tue, 09 Sep 2008 21:37:29 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/increased-overhead-on-rms-global-buffers/m-p/4265576#M91015</guid>
      <dc:creator>Guenther Froehlin</dc:creator>
      <dc:date>2008-09-09T21:37:29Z</dc:date>
    </item>
    <item>
      <title>Re: Increased overhead on RMS Global Buffers?</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/increased-overhead-on-rms-global-buffers/m-p/4265577#M91016</link>
      <description>Oops, I got this wrong. Only some operations, like bucket splits, have to happen in local buffers when global buffers are enabled.&lt;BR /&gt;&lt;BR /&gt;/Guenther</description>
      <pubDate>Tue, 09 Sep 2008 22:19:37 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/increased-overhead-on-rms-global-buffers/m-p/4265577#M91016</guid>
      <dc:creator>Guenther Froehlin</dc:creator>
      <dc:date>2008-09-09T22:19:37Z</dc:date>
    </item>
    <item>
      <title>Re: Increased overhead on RMS Global Buffers?</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/increased-overhead-on-rms-global-buffers/m-p/4265578#M91017</link>
      <description>Thank you ALL for the great responses, most of which answered my question.&lt;BR /&gt;&lt;BR /&gt;In response to Jon (and Jan),&lt;BR /&gt;&amp;gt;&amp;gt;&amp;gt;&lt;BR /&gt;At any rate, if the file is being shared for update, you are almost certainly better off with global buffers.&lt;BR /&gt;&lt;BR /&gt;Have you done some tests of elapsed time when there are multiple writers to the same file?&lt;BR /&gt;&lt;BR /&gt; to specify "almost certainly":&lt;BR /&gt; On 8.3 (with XFC enabled) GLOBAL buffers start being advantageous AS SOON AS the file is accessed SIMULTANEOUSLY with at least one of those accesses with UPDATE intent.&lt;BR /&gt; (and if you think about it, that is exactly what logic would predict)&lt;BR /&gt;&amp;lt;&amp;lt;&amp;lt;&lt;BR /&gt;&lt;BR /&gt;It seems that running with global buffers increases my elapsed time a little bit (single job, and two jobs concurrently; I haven't tested with more).  I expected some overhead in the runtime/elapsed time, so I'm not concerned about this.  I expect the savings to be in system resources.  My test jobs are report jobs though (readers, not writers), so I really didn't answer Jon's specific question.  In production there is a constantly running job (24X7) that updates the file (as transactions are entered by users), and then a bunch of users run the report programs (during the day) against the file.&lt;BR /&gt;&lt;BR /&gt;In response to Hein..&lt;BR /&gt;&amp;gt;&amp;gt;&amp;gt; But most importantly, you can expect to have obtained a significant lock traffic reduction which is a scares (sp?), serializing, resource. &amp;lt;&amp;lt;&amp;lt;&lt;BR /&gt;&lt;BR /&gt;(scarce, btw) I already realized a HUGE lock traffic reduction through implementing NQL (thanks greatly for your previous help on that one!)&lt;BR /&gt;&lt;BR /&gt;&amp;gt;&amp;gt;&amp;gt;&lt;BR /&gt;You failed to indicate HOW MANY global buffers were applied. With hundreds of IOs/sec you may want to go aggressive. Think THOUSANDS, not hundreds.&lt;BR /&gt;Mind you... when RMS counts IOs, those are IO requests which are not unlikely to have been resolved by the XFC. So the actual IO rate to the disk might not have changed.&lt;BR /&gt;I pretty much only expect a LOCK and KERNEL mode reduction, which ought to be significant at 65K/second. So, as I opened with, check the T4, hotfile, or rms_stats detail info to see the real gain.&lt;BR /&gt;The end-user impact may be surprisingly low, but you may have helped the system as a whole a lot.&lt;BR /&gt;&amp;lt;&amp;lt;&amp;lt;&lt;BR /&gt;&lt;BR /&gt;I started with a global buffer count of 128 because I wasn't sure how it would affect my GBLSECTIONS and GBLPAGES.  Now I see that I can increase it A LOT MORE.  If I'm not mistaken, user working set sizes could be affected, right?... I better make sure I increase them if needed.&lt;BR /&gt;&lt;BR /&gt;Also, I am using a couple of your tools already.  Thank you so much for those!</description>
      <pubDate>Wed, 10 Sep 2008 12:43:45 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/increased-overhead-on-rms-global-buffers/m-p/4265578#M91017</guid>
      <dc:creator>EdgarZamora_1</dc:creator>
      <dc:date>2008-09-10T12:43:45Z</dc:date>
    </item>
    <item>
      <title>Re: Increased overhead on RMS Global Buffers?</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/increased-overhead-on-rms-global-buffers/m-p/4265579#M91018</link>
      <description>Edgar,&lt;BR /&gt;&lt;BR /&gt;&amp;gt;&amp;gt;&amp;gt;&lt;BR /&gt;Have you done some tests of elapsed time when there are multiple writers to the same file?&lt;BR /&gt;&amp;lt;&amp;lt;&amp;lt;&lt;BR /&gt;&lt;BR /&gt;Well, in one (maybe extreme) case, an application with multiple keyed RMS files had a batch job that was triggered about three times a week. It ran 45 - 60 minutes (during multiple users' interactive use).&lt;BR /&gt;The main file had 11 (eleven) mostly segmented indexes.&lt;BR /&gt;I enabled 1000 global buffers for it, and I got "complaints" from the application manager.&lt;BR /&gt;The job "failed", because it ran only 3 minutes. But all the work was done...&lt;BR /&gt;&lt;BR /&gt;As I said, maybe an extreme example, but it DOES show the potential.&lt;BR /&gt;&lt;BR /&gt;Proost.&lt;BR /&gt;&lt;BR /&gt;Have one on me.&lt;BR /&gt;&lt;BR /&gt;jpe</description>
      <pubDate>Wed, 10 Sep 2008 16:55:29 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/increased-overhead-on-rms-global-buffers/m-p/4265579#M91018</guid>
      <dc:creator>Jan van den Ende</dc:creator>
      <dc:date>2008-09-10T16:55:29Z</dc:date>
    </item>
    <item>
      <title>Re: Increased overhead on RMS Global Buffers?</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/increased-overhead-on-rms-global-buffers/m-p/4265580#M91019</link>
      <description>&amp;gt;&amp;gt; (scarce, btw) &lt;BR /&gt;&lt;BR /&gt;Right. I _knew_ that! :-)&lt;BR /&gt;&lt;BR /&gt;&amp;gt;&amp;gt; I already realized HUGE lock traffic reduction through implementing NQL (thanks greatly for your previous help on that one!)&lt;BR /&gt;&lt;BR /&gt;Well, global buffers will likely reduce lock activity by that same improvement all over again.&lt;BR /&gt;If you liked it the first time... do it once more.&lt;BR /&gt;The reason for this is that even though NQL truly stops record locks, RMS still does bucket locks: get bucket lock, read or 'find' bucket in cache, find record in bucket, release bucket.&lt;BR /&gt;With global buffers, and only with global buffers, will RMS cache the bucket lock in CR (Concurrent Read) mode and only grab a new lock when the request moves on to a new (next) bucket. That's one ENQ and one DEQ saved per 'second' GET in a bucket, as well as that final bucket lock to find out there are no more records.&lt;BR /&gt;&lt;BR /&gt;Global buffers and write access??&lt;BR /&gt;Well, you need to find where to write, for each of the keys. Typically that is 2 or 3 index bucket reads per key. Those can potentially all be resolved in CR mode from the global buffer cache.&lt;BR /&gt;&lt;BR /&gt;&amp;gt;&amp;gt; Also, I am using a couple of your tools already. Thank you so much for those!&lt;BR /&gt;&lt;BR /&gt;So you owe me a beer or two! For now, just send me an email and I may have an update or two.&lt;BR /&gt;&lt;BR /&gt;Cheers,&lt;BR /&gt;Hein.&lt;BR /&gt;</description>
      <pubDate>Thu, 11 Sep 2008 01:25:14 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/increased-overhead-on-rms-global-buffers/m-p/4265580#M91019</guid>
      <dc:creator>Hein van den Heuvel</dc:creator>
      <dc:date>2008-09-11T01:25:14Z</dc:date>
    </item>
  </channel>
</rss>

