<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Adjusting CLUSTER_CREDITS in Operating System - OpenVMS</title>
    <link>https://community.hpe.com/t5/operating-system-openvms/adjusting-cluster-credits/m-p/3925652#M81124</link>
    <description>Jack,&lt;BR /&gt;&lt;BR /&gt;  Always keep in mind that most SYSGEN defaults were formulated quite some time ago, when resource costs were considerably higher than they are today.&lt;BR /&gt;&lt;BR /&gt;  Each buffer is SCSMAXMSG bytes in size (or maybe the default buffer size of the interconnect device?). You have CLUSTER_CREDITS * MAX(SCSMAXMSG, device buffer size) * number-of-other-cluster-nodes bytes allocated in non-paged pool. In your case there's only one other node, so the overhead probably isn't too significant, and you probably have orders of magnitude more memory than was expected when the default value was chosen.&lt;BR /&gt;&lt;BR /&gt;  Since you don't have much system downtime, if you're concerned about potential NPAGEDYN issues, start with (say) 60 and keep an eye on NPAGEDYN consumption.&lt;BR /&gt;&lt;BR /&gt;  If you've got a decent amount of memory, don't spend too much time worrying about it; just crank CLUSTER_CREDITS up to the maximum (128) and add this line to MODPARAMS.DAT:&lt;BR /&gt;&lt;BR /&gt;ADD_NPAGEDYN=CLUSTER_CREDITS*SCSMAXMSG&lt;BR /&gt;&lt;BR /&gt;  Also note that you only need to increase the parameter on the node that's experiencing credit waits. There is no need for the parameter to be equal on both nodes.</description>
    <pubDate>Sun, 14 Jan 2007 19:24:34 GMT</pubDate>
    <dc:creator>John Gillings</dc:creator>
    <dc:date>2007-01-14T19:24:34Z</dc:date>
    <item>
      <title>Adjusting CLUSTER_CREDITS</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/adjusting-cluster-credits/m-p/3925650#M81122</link>
      <description>2-node cluster&lt;BR /&gt;VMS V7.3-2&lt;BR /&gt;&lt;BR /&gt;I have noticed (via SHOW CLUSTER/CONTINUOUS) that there are occasional Cluster Waits occurring.  I am planning to adjust the CLUSTER_CREDITS setting to reduce/eliminate these waits.&lt;BR /&gt;&lt;BR /&gt;The present CLUSTER_CREDITS setting on both nodes is 30, and the maximum is (I think) 128.  One HP doc recommends changing the setting in increments of 5.&lt;BR /&gt;&lt;BR /&gt;Since this is a static parameter and I can't get these systems down very often, I was wondering how others decide on a CLUSTER_CREDITS value.&lt;BR /&gt;&lt;BR /&gt;TIA</description>
      <pubDate>Fri, 12 Jan 2007 16:31:56 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/adjusting-cluster-credits/m-p/3925650#M81122</guid>
      <dc:creator>Jack Trachtman</dc:creator>
      <dc:date>2007-01-12T16:31:56Z</dc:date>
    </item>
    <item>
      <title>Re: Adjusting CLUSTER_CREDITS</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/adjusting-cluster-credits/m-p/3925651#M81123</link>
      <description>&lt;BR /&gt;You may also want to consider checking your network (assuming an Ethernet cluster interconnect). Use&lt;BR /&gt;&lt;BR /&gt;$ MCR LANCP SHOW DEV /COUNT&lt;BR /&gt;&lt;BR /&gt;The bandwidth on the network may also be a factor.  A second interface with a crossover cable makes a very reliable dedicated interconnect.  Other than confirming the speed/duplex settings, no other configuration is required.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Andy</description>
      <pubDate>Fri, 12 Jan 2007 17:05:08 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/adjusting-cluster-credits/m-p/3925651#M81123</guid>
      <dc:creator>Andy Bustamante</dc:creator>
      <dc:date>2007-01-12T17:05:08Z</dc:date>
    </item>
    <item>
      <title>Re: Adjusting CLUSTER_CREDITS</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/adjusting-cluster-credits/m-p/3925652#M81124</link>
      <description>Jack,&lt;BR /&gt;&lt;BR /&gt;  Always keep in mind that most SYSGEN defaults were formulated quite some time ago, when resource costs were considerably higher than they are today.&lt;BR /&gt;&lt;BR /&gt;  Each buffer is SCSMAXMSG bytes in size (or maybe the default buffer size of the interconnect device?). You have CLUSTER_CREDITS * MAX(SCSMAXMSG, device buffer size) * number-of-other-cluster-nodes bytes allocated in non-paged pool. In your case there's only one other node, so the overhead probably isn't too significant, and you probably have orders of magnitude more memory than was expected when the default value was chosen.&lt;BR /&gt;&lt;BR /&gt;  Since you don't have much system downtime, if you're concerned about potential NPAGEDYN issues, start with (say) 60 and keep an eye on NPAGEDYN consumption.&lt;BR /&gt;&lt;BR /&gt;  If you've got a decent amount of memory, don't spend too much time worrying about it; just crank CLUSTER_CREDITS up to the maximum (128) and add this line to MODPARAMS.DAT:&lt;BR /&gt;&lt;BR /&gt;ADD_NPAGEDYN=CLUSTER_CREDITS*SCSMAXMSG&lt;BR /&gt;&lt;BR /&gt;  Also note that you only need to increase the parameter on the node that's experiencing credit waits. There is no need for the parameter to be equal on both nodes.</description>
      <pubDate>Sun, 14 Jan 2007 19:24:34 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/adjusting-cluster-credits/m-p/3925652#M81124</guid>
      <dc:creator>John Gillings</dc:creator>
      <dc:date>2007-01-14T19:24:34Z</dc:date>
    </item>
  </channel>
</rss>