<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: High disk IO and low hit ratio in Operating System - OpenVMS</title>
    <link>https://community.hpe.com/t5/operating-system-openvms/high-disk-io-and-low-hit-ratio/m-p/4724302#M19150</link>
    <description>Forum thread from the HPE Community OpenVMS board: high disk I/O and a low XFC read hit ratio on one of two otherwise identical ES40 clusters.</description>
    <pubDate>Thu, 09 Dec 2010 14:53:09 GMT</pubDate>
    <dc:creator>Grzegorz Pawlowski</dc:creator>
    <dc:date>2010-12-09T14:53:09Z</dc:date>
    <item>
      <title>High disk IO and low hit ratio</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/high-disk-io-and-low-hit-ratio/m-p/4724299#M19147</link>
      <description>Recently on one cluster (3x ES40) we had some memory issues which caused crashes.&lt;BR /&gt;The memory banks were replaced, but now we face high I/O and CPU load.&lt;BR /&gt;&lt;BR /&gt;I found that one disk holding Oracle Rdb has a higher I/O rate than it is supposed to have.&lt;BR /&gt;On the twin cluster the I/Os are around 30, and here they are around 210.&lt;BR /&gt;The disks are shadow sets on a SCSI HSZ80.&lt;BR /&gt;&lt;BR /&gt;I've also noticed that the memory hit ratio is 64% on the bad cluster, whereas on the good one we have 98%.&lt;BR /&gt;&lt;BR /&gt;Also, in the Rdb statistics we see a lot of direct reads and writes.&lt;BR /&gt;&lt;BR /&gt;Please advise what I can check and what to do.&lt;BR /&gt;My two clusters are identical in hardware and software, but with the same load there is a 2.5x difference in CPU usage because of this I/O and memory behaviour.</description>
      <pubDate>Thu, 09 Dec 2010 14:06:14 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/high-disk-io-and-low-hit-ratio/m-p/4724299#M19147</guid>
      <dc:creator>Grzegorz Pawlowski</dc:creator>
      <dc:date>2010-12-09T14:06:14Z</dc:date>
    </item>
    <item>
      <title>Re: High disk IO and low hit ratio</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/high-disk-io-and-low-hit-ratio/m-p/4724300#M19148</link>
      <description>A true answer may require additional information, but for starters:&lt;BR /&gt;&lt;BR /&gt;1) Can you please describe the actual hardware configuration, including controllers?&lt;BR /&gt;&lt;BR /&gt;2) Version of VMS etc.?&lt;BR /&gt;&lt;BR /&gt;You indicate that both clusters are identical.  Is this true of the sysgen parameter setup as well? (Other than node-specific names and addresses, etc.)&lt;BR /&gt;&lt;BR /&gt;Thanks,&lt;BR /&gt;Dan</description>
      <pubDate>Thu, 09 Dec 2010 14:31:39 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/high-disk-io-and-low-hit-ratio/m-p/4724300#M19148</guid>
      <dc:creator>abrsvc</dc:creator>
      <dc:date>2010-12-09T14:31:39Z</dc:date>
    </item>
    <item>
      <title>Re: High disk IO and low hit ratio</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/high-disk-io-and-low-hit-ratio/m-p/4724301#M19149</link>
      <description>This reeks of a physical memory downgrade; of the removal of part of the memory after those "memory banks were replaced".</description>
      <pubDate>Thu, 09 Dec 2010 14:48:31 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/high-disk-io-and-low-hit-ratio/m-p/4724301#M19149</guid>
      <dc:creator>Hoff</dc:creator>
      <dc:date>2010-12-09T14:48:31Z</dc:date>
    </item>
    <item>
      <title>Re: High disk IO and low hit ratio</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/high-disk-io-and-low-hit-ratio/m-p/4724302#M19150</link>
      <description>1)&lt;BR /&gt;Controller:&lt;BR /&gt;        HSZ80 ZG94710176 Software V83Z-0, Hardware  E04&lt;BR /&gt;        NODE_ID          = 0000-0000-0000-0000&lt;BR /&gt;The controller cache is good, and most of the disks run from this storage, but we have the problem with only one of them.&lt;BR /&gt;&lt;BR /&gt;The disks are COMPAQ   BD009122C6.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;2) Version of VMS etc.?&lt;BR /&gt;&lt;BR /&gt;The sysgen parameter setup is identical as well.&lt;BR /&gt;Actually it is almost an "off the shelf" product, so it is a copy-and-paste-based installation.&lt;BR /&gt;&lt;BR /&gt;VMS V7.3-2&lt;BR /&gt;&lt;BR /&gt;I ran a line-by-line comparison of the system parameters and the database configuration.&lt;BR /&gt;&lt;BR /&gt;FR51:SYSTEM&amp;gt; show mem /cache&lt;BR /&gt;              System Memory Resources on  9-DEC-2010 15:49:23.51&lt;BR /&gt;&lt;BR /&gt;Extended File Cache  (Time of last reset:  1-DEC-2010 05:16:27.56)&lt;BR /&gt; Allocated (GBytes)              2.46    Maximum size (GBytes)             4.00&lt;BR /&gt; Free (GBytes)                   0.00    Minimum size (GBytes)             0.00&lt;BR /&gt; In use (GBytes)                 2.46    Percentage Read I/Os                37%&lt;BR /&gt; Read hit rate                     62%   Write hit rate                       0%&lt;BR /&gt; Read I/O count              42800824    Write I/O count               72721312&lt;BR /&gt; Read hit count              26682795    Write hit count                      0&lt;BR /&gt; Reads bypassing cache             45    Writes bypassing cache               0&lt;BR /&gt; Files cached open                654    Files cached closed                 99&lt;BR /&gt; Vols in Full XFC mode              0    Vols in VIOC Compatible mode        24&lt;BR /&gt; Vols in No Caching mode            0    Vols in Perm. No Caching mode        0&lt;BR /&gt;&lt;BR /&gt;Write Bitmap (WBM) Memory Summary&lt;BR /&gt;  Local bitmap count:    48     Local bitmap memory usage (MB)          1.78&lt;BR /&gt;  Master bitmap count:   24     Master bitmap memory usage (KB)       912.00&lt;BR /&gt;&lt;BR /&gt;FR61:SMSC&amp;gt; show mem /cache&lt;BR /&gt;              System Memory Resources on  9-DEC-2010 15:49:30.96&lt;BR /&gt;&lt;BR /&gt;Extended File Cache  (Time of last reset: 19-AUG-2010 08:41:17.07)&lt;BR /&gt; Allocated (GBytes)              2.49    Maximum size (GBytes)             4.00&lt;BR /&gt; Free (GBytes)                   0.00    Minimum size (GBytes)             0.00&lt;BR /&gt; In use (GBytes)                 2.49    Percentage Read I/Os                34%&lt;BR /&gt; Read hit rate                     96%   Write hit rate                       0%&lt;BR /&gt; Read I/O count             279294453    Write I/O count              538704240&lt;BR /&gt; Read hit count             270892352    Write hit count                      0&lt;BR /&gt; Reads bypassing cache            416    Writes bypassing cache         1313869&lt;BR /&gt; Files cached open                662    Files cached closed                100&lt;BR /&gt; Vols in Full XFC mode              0    Vols in VIOC Compatible mode        24&lt;BR /&gt; Vols in No Caching mode            0    Vols in Perm. No Caching mode        0&lt;BR /&gt;&lt;BR /&gt;Write Bitmap (WBM) Memory Summary&lt;BR /&gt;  Local bitmap count:    48     Local bitmap memory usage (MB)          1.78&lt;BR /&gt;  Master bitmap count:   24     Master bitmap memory usage (KB)       912.00&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Thanks,&lt;BR /&gt;Dan</description>
      <pubDate>Thu, 09 Dec 2010 14:53:09 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/high-disk-io-and-low-hit-ratio/m-p/4724302#M19150</guid>
      <dc:creator>Grzegorz Pawlowski</dc:creator>
      <dc:date>2010-12-09T14:53:09Z</dc:date>
    </item>
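<!--
The 62% and 96% figures in the two SHOW MEMORY /CACHE displays above follow directly from the raw counters; a minimal sketch in Python, assuming the display truncates the percentage rather than rounding it:

```python
# Read-hit rates recomputed from the raw counters posted above.
# The displayed percentage appears to be read hit count divided by
# read I/O count, truncated to a whole percent.

def read_hit_rate(hit_count, io_count):
    """Whole-percent read hit rate, truncated as the display appears to do."""
    return (100 * hit_count) // io_count

# FR51 (the "bad" node): 26,682,795 hits out of 42,800,824 reads
fr51 = read_hit_rate(26682795, 42800824)

# FR61 (the "good" node): 270,892,352 hits out of 279,294,453 reads
fr61 = read_hit_rate(270892352, 279294453)

print(fr51, fr61)  # 62 96, matching the two displays
```

Note that FR61's true ratio is 96.99%, so the display must truncate rather than round for the 96% shown to be consistent.
-->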
    <item>
      <title>Re: High disk IO and low hit ratio</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/high-disk-io-and-low-hit-ratio/m-p/4724303#M19151</link>
      <description>Hoff,&lt;BR /&gt;&lt;BR /&gt;The first bank was replaced because it was rejected by the system in the startup test after the crash.&lt;BR /&gt;The second was replaced because there were some "correctable parity errors".&lt;BR /&gt;&lt;BR /&gt;HP support stated that everything should now be OK with the memory.&lt;BR /&gt;&lt;BR /&gt;Do you think a clusterwide reboot could help with this issue?&lt;BR /&gt;The DB was the only thing that was not restarted after the memory exchange.</description>
      <pubDate>Thu, 09 Dec 2010 15:00:16 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/high-disk-io-and-low-hit-ratio/m-p/4724303#M19151</guid>
      <dc:creator>Grzegorz Pawlowski</dc:creator>
      <dc:date>2010-12-09T15:00:16Z</dc:date>
    </item>
    <item>
      <title>Re: High disk IO and low hit ratio</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/high-disk-io-and-low-hit-ratio/m-p/4724304#M19152</link>
      <description>My first impression is that you are not comparing correctly.  One machine has been up for a longer period, and that alone will skew the stats a bit.  The only fair test is to note the read and write stats (counts only) before each test and again when the tests complete.  Compare the hard counts and the cache hits between those two points.  That will at least give you a more accurate comparison.  While not 100% accurate, it will be a better comparison than the overall stats.&lt;BR /&gt;&lt;BR /&gt;Start there and see what the true difference is.  Please report that back here.&lt;BR /&gt;&lt;BR /&gt;Thanks,&lt;BR /&gt;Dan</description>
      <pubDate>Thu, 09 Dec 2010 15:02:22 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/high-disk-io-and-low-hit-ratio/m-p/4724304#M19152</guid>
      <dc:creator>abrsvc</dc:creator>
      <dc:date>2010-12-09T15:02:22Z</dc:date>
    </item>
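<!--
Dan's interval comparison above can be sketched as follows. The snapshot numbers are hypothetical; in practice the counters would come from SHOW MEMORY /CACHE taken before and after the test window:

```python
# Interval read-hit rate from two counter snapshots, as suggested:
# note the counts before the test, again after, and compare only the
# deltas so that long uptimes don't skew the ratio.

def interval_hit_rate(before, after):
    """Hit rate over the window between two (read_ios, read_hits) snapshots."""
    d_ios = after[0] - before[0]
    d_hits = after[1] - before[1]
    if d_ios == 0:
        return None  # no reads in the window, rate undefined
    return 100.0 * d_hits / d_ios

# Hypothetical snapshots (read I/O count, read hit count):
before = (42800824, 26682795)
after = (42900824, 26712795)   # 100,000 more reads, 30,000 more hits

print(interval_hit_rate(before, after))  # 30.0% over the window,
# even though the lifetime ratio still reads ~62%
```

This is why the lifetime percentages on two nodes with different reset times are not directly comparable.
-->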
    <item>
      <title>Re: High disk IO and low hit ratio</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/high-disk-io-and-low-hit-ratio/m-p/4724305#M19153</link>
      <description>&amp;gt;HP support stated that now everything should be ok with memory.&lt;BR /&gt;&lt;BR /&gt;"Trust, but verify."&lt;BR /&gt;&lt;BR /&gt;I'd confirm the quantity of viable physical memory within the server is the same both before and after the repairs, and I'd also confirm that the hardware caches are not shut off.&lt;BR /&gt;&lt;BR /&gt;Last I checked, Rdb didn't use XFC.&lt;BR /&gt;&lt;BR /&gt;This pairing:&lt;BR /&gt;&lt;BR /&gt;&amp;gt;Reads bypassing cache 45 Writes bypassing cache 0&lt;BR /&gt;&lt;BR /&gt;&amp;gt;Reads bypassing cache 416 Writes bypassing cache 1313869&lt;BR /&gt;&lt;BR /&gt;is interesting.  That reeks of database activity (as databases tend to have their own internal and tailored I/O caching, as do various other I/O heavy applications; generic caches aren't as effective), or possibly a whole pile of corner-case I/O requests (INITIALIZE /ERASE, etc) aimed at the storage.&lt;BR /&gt;&lt;BR /&gt;&amp;gt;First bank was replaced as it was rejected by the system at the startup test after crash.&lt;BR /&gt;&amp;gt;Second was replaced as there were some "corectable parity errors".&lt;BR /&gt;&lt;BR /&gt;Some parity errors are expected.  Piles of parity errors are a problem, particularly when they're occurring within the same chip.  Uncorrectable memory errors are a bigger problem.&lt;BR /&gt;&lt;BR /&gt;&amp;gt;Do you think that clusterwide reboot can help with this issue?&lt;BR /&gt;&lt;BR /&gt;That wouldn't be my immediate approach.&lt;BR /&gt;&lt;BR /&gt;&amp;gt;DB was the only thing that was not rebooted after memory exchange.&lt;BR /&gt;&lt;BR /&gt;Do you have a history of performance data here; does the current performance diverge in any dimensions from historical norms?  You mention 2.5x factor with the CPU performance.  
&lt;BR /&gt;&lt;BR /&gt;A system that's well-tuned will generally be CPU bound (in user mode!), so I might go look at the modes, and at the box that's not running CPU bound, looking for differences.&lt;BR /&gt;&lt;BR /&gt;Applications that are not CPU bound can be I/O bound, or can be hitting memory stalls, for instance, and you might be able to relieve those bottlenecks and get the box back to being CPU-bound.&lt;BR /&gt;&lt;BR /&gt;I'm not sure what you mean by "On twin cluster IO are 30 and here are around 210."  I tend to look for disk I/O queue depths, rather than rates.   Rates might or might not be a problem, but queue depths are an indication of having reached a bandwidth limit.&lt;BR /&gt;&lt;BR /&gt;Is it possible that your host is simply carrying more of the activity here, having had network connections and such fail over during the repairs?  (Put another way, is the aggregate CPU load following your historical norms, and is currently just skewed more onto one box?)&lt;BR /&gt;&lt;BR /&gt;FWIW, multiple hosts sharing and coordinating storage in a cluster will always run with more overhead and with somewhat lower performance than will one node, up until the capacity of that node is exceeded.  This is due to contention.   If anything, you get the best performance by loading each host to saturation, and by splitting up the resources and hardware and data and applications to try to avoid contention among the hosts sharing the load.&lt;BR /&gt;&lt;BR /&gt;Longer term, recognize that your gear here is old and slow, and it might be time to look at an upgrade.  A couple of low-end Itanium boxes with a multi-host SCSI shelf (the MSA30-MI, if that's still around) will completely dust this Alpha configuration.&lt;BR /&gt;</description>
      <pubDate>Thu, 09 Dec 2010 16:27:38 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/high-disk-io-and-low-hit-ratio/m-p/4724305#M19153</guid>
      <dc:creator>Hoff</dc:creator>
      <dc:date>2010-12-09T16:27:38Z</dc:date>
    </item>
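<!--
Hoff's preference for queue depths over raw rates can be illustrated with Little's law (average queue depth = I/O rate times average service time). The service times below are hypothetical, chosen only to show how the same rate can mean either an idle or a saturated disk:

```python
# Little's law applied to disk I/O: the average number of requests
# in the queue equals the arrival rate multiplied by the average
# time each request takes to be serviced.

def avg_queue_depth(io_rate_per_sec, service_time_sec):
    return io_rate_per_sec * service_time_sec

# Hypothetical: 210 I/Os per second at 1.5 ms each keeps the queue
# well under 1 (about 0.315, close to the 0.3 reported on the bad node)
print(avg_queue_depth(210, 0.0015))

# The same 210 I/Os per second at 15 ms each builds a backlog (about 3.15),
# which is the kind of figure that would indicate a bandwidth limit
print(avg_queue_depth(210, 0.015))
```

So a rate of 210 I/Os per second is not alarming by itself; a persistently nonzero queue depth is the signal worth chasing.
-->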
    <item>
      <title>Re: High disk IO and low hit ratio</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/high-disk-io-and-low-hit-ratio/m-p/4724306#M19154</link>
      <description>&amp;gt;&amp;gt;HP support stated that now everything should be ok with memory.&lt;BR /&gt;&lt;BR /&gt;&amp;gt;"Trust, but verify."&lt;BR /&gt;&lt;BR /&gt;&amp;gt;I'd confirm the quantity of viable physical memory within the server is the same both before and after the repairs, and I'd also confirm that the hardware caches are not shut off.&lt;BR /&gt;&lt;BR /&gt;The memory quantity is the same as before the exchange.&lt;BR /&gt;Where can I check this hardware cache?&lt;BR /&gt;&lt;BR /&gt;&amp;gt; ...That reeks of database activity (as databases tend to have their own internal and tailored I/O caching...&lt;BR /&gt;&lt;BR /&gt;I've noticed that on the bad system most of the DB reads and writes are on the root file. On the good system the reads are mostly on snap files.&lt;BR /&gt;&lt;BR /&gt;&amp;gt;Do you have a history of performance data here; does the current performance diverge in any dimensions from historical norms? You mention 2.5x factor with the CPU performance. &lt;BR /&gt;&lt;BR /&gt;I'll dig through the documentation and old health checks, but according to the specs this system is supposed to handle more than it is able to now.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&amp;gt;I'm not sure what you mean by "On twin cluster"&lt;BR /&gt; &lt;BR /&gt;By this I mean a perfect copy of the software, hardware and system settings. Both clusters handle transactions and have the same content in the database.&lt;BR /&gt;They sit behind a load balancer which shares the traffic equally.&lt;BR /&gt;Even checking all the application counters, I can see the load is even.&lt;BR /&gt;Oracle Rdb is on this one disk with the high I/O rate.&lt;BR /&gt;I've compared the queues: on the bad one the average is 0.3, and on the other it is 0.00.&lt;BR /&gt;&lt;BR /&gt;&amp;gt; Longer term, recognize that your gear here is old and slow, and it might be time to look at an upgrade. A couple of low-end Itanium boxes with a multi-host SCSI shelf (the MSA30-MI, if that's still around) will completely dust this Alpha configuration.&lt;BR /&gt;&lt;BR /&gt;If it were my HW I would have changed it a long time ago. Unfortunately it is the customer's call and money. :)&lt;BR /&gt;&lt;BR /&gt;PS. If you have any commands that would be helpful in finding the bottleneck, they would be useful.</description>
      <pubDate>Fri, 10 Dec 2010 10:05:18 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/high-disk-io-and-low-hit-ratio/m-p/4724306#M19154</guid>
      <dc:creator>Grzegorz Pawlowski</dc:creator>
      <dc:date>2010-12-10T10:05:18Z</dc:date>
    </item>
    <item>
      <title>Re: High disk IO and low hit ratio</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/high-disk-io-and-low-hit-ratio/m-p/4724307#M19155</link>
      <description>&amp;gt;&amp;gt; &amp;gt;Reads bypassing cache 45 Writes bypassing cache 0&lt;BR /&gt;&amp;gt;&amp;gt; &amp;gt;Reads bypassing cache 416 Writes bypassing cache 1313869&lt;BR /&gt;&amp;gt;&amp;gt; ...&lt;BR /&gt;&amp;gt;&amp;gt; or possibly a whole pile of corner-case I/O requests (INITIALIZE /ERASE,&lt;BR /&gt;&amp;gt;&amp;gt; etc) aimed at the storage.&lt;BR /&gt;Yes, that's right.&lt;BR /&gt;These counts correspond to XFC's readaround and writearound I/O counts.&lt;BR /&gt;&lt;BR /&gt;When the preconditions for XFC to cache an I/O fail, the I/O skips the XFC cache. XFC counts such an I/O as a readaround (for read I/Os) or a writearound (for write I/Os), and keeps track of the count of such I/Os. Preconditions would be: caching disabled on the file, caching disabled on the I/O, the I/O block size greater than VCC_MAX_IO_SIZE, and so on...&lt;BR /&gt;&lt;BR /&gt;&amp;gt;&amp;gt; Last I checked, Rdb didn't use XFC.&lt;BR /&gt;By this, do you mean:&lt;BR /&gt;XFC is disabled on the system and hence would not come into the picture,&lt;BR /&gt;OR&lt;BR /&gt;XFC is enabled, but Rdb uses its own cache, so most of the requests get satisfied from that cache itself and maybe only a small number of its I/Os go through XFC?&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;Murali</description>
      <pubDate>Fri, 10 Dec 2010 10:17:17 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/high-disk-io-and-low-hit-ratio/m-p/4724307#M19155</guid>
      <dc:creator>P Muralidhar Kini</dc:creator>
      <dc:date>2010-12-10T10:17:17Z</dc:date>
    </item>
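<!--
The precondition check Murali describes might be sketched like this. The function name and the condition list are illustrative simplifications, not the actual XFC internals, and the VCC_MAX_IO_SIZE default shown is an assumption to verify against SYSGEN on your own system:

```python
# Illustrative sketch of the XFC decision described above: if any
# caching precondition fails, the I/O bypasses the cache and is counted
# as a readaround (for reads) or a writearound (for writes). Only a few
# of the preconditions are modelled here.

VCC_MAX_IO_SIZE = 127  # blocks; assumed default, check SYSGEN on your system

def classify_io(kind, blocks, file_caching_enabled=True, io_caching_enabled=True):
    bypass = (not file_caching_enabled
              or not io_caching_enabled
              or blocks > VCC_MAX_IO_SIZE)
    if not bypass:
        return "cached"
    return "readaround" if kind == "read" else "writearound"

print(classify_io("read", 16))                              # cached
print(classify_io("write", 200))                            # writearound: too large
print(classify_io("read", 16, file_caching_enabled=False))  # readaround
```

On this model, FR61's 1,313,869 "Writes bypassing cache" would be writes that failed one of these preconditions, e.g. large sequential writes to files with caching disabled.
-->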
    <item>
      <title>Re: High disk IO and low hit ratio</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/high-disk-io-and-low-hit-ratio/m-p/4724308#M19156</link>
      <description>PMK:&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://download.oracle.com/otndocs/products/rdb/pdf/rdbtf05_buffering.pdf" target="_blank"&gt;http://download.oracle.com/otndocs/products/rdb/pdf/rdbtf05_buffering.pdf&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;Tools such as memory management and caching are generic, and work adequately well for most applications.  Higher-load applications can and variously will implement local memory management and local caching, as the generic mechanisms aren't able to cache the I/O as effectively.   Rdb and caching requirements have been a moving target; check the Rdb documentation for specific recommendations around whether you want XFC caching enabled or not.   In various configurations, disabling host caching has been the recommendation.  (And if that's disabled, you'll see stuff going past the caches.)&lt;BR /&gt;&lt;BR /&gt;For some Rdb activities, such as the RUJ and AIJ, having caching is somewhere between futile and wasteful:&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://download.oracle.com/otndocs/products/rdb/pdf/forums_2006/rdbtf06rs_18_sortedidxbperf.pdf" target="_blank"&gt;http://download.oracle.com/otndocs/products/rdb/pdf/forums_2006/rdbtf06rs_18_sortedidxbperf.pdf&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;GP:&lt;BR /&gt;&lt;BR /&gt;&amp;gt; Where to check this hardware cache?&lt;BR /&gt;&lt;BR /&gt;I look for indications of cache errors in the error log.&lt;BR /&gt;&lt;BR /&gt;&amp;gt; I've compared queue and on bad one average is 0,3 and on other is 0,00.&lt;BR /&gt;&lt;BR /&gt;Time to figure out what part of Rdb is tossing out the I/O.  Is there, for instance, some log file that's gone and gotten overly busy?&lt;BR /&gt;&lt;BR /&gt;&amp;gt;If it would be my HW I would change it a long time ago. Unfortunatly it is customer call and money. :)&lt;BR /&gt;&amp;gt;PS. If you have any commands that would be helpfull in finding bottleneck they would be usefull.&lt;BR /&gt;&lt;BR /&gt;Talk to your manager and sort out your escalation process, as well as whatever plans might be appropriate for getting off of boat-anchor hardware.</description>
      <pubDate>Fri, 10 Dec 2010 14:59:18 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/high-disk-io-and-low-hit-ratio/m-p/4724308#M19156</guid>
      <dc:creator>Hoff</dc:creator>
      <dc:date>2010-12-10T14:59:18Z</dc:date>
    </item>
    <item>
      <title>Re: High disk IO and low hit ratio</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/high-disk-io-and-low-hit-ratio/m-p/4724309#M19157</link>
      <description>Hoff,&lt;BR /&gt;Thanks for the response. I will read through the pointers that you have provided.&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;Murali</description>
      <pubDate>Mon, 13 Dec 2010 03:40:43 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/high-disk-io-and-low-hit-ratio/m-p/4724309#M19157</guid>
      <dc:creator>P Muralidhar Kini</dc:creator>
      <dc:date>2010-12-13T03:40:43Z</dc:date>
    </item>
  </channel>
</rss>

