<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: system_multi_node is low performance when running at the same time in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/system-multi-node-is-low-performance-when-running-at-the-same/m-p/6977248#M490401</link>
    <description>&lt;P&gt;Does your application frequently update the same file, or even the same blocks of the same file, from both nodes? This kind of "hot block" or "hot file" access is one of the trickiest cases for a cluster filesystem.&lt;/P&gt;&lt;P&gt;If the exactness of the file modification times is not important to you, mounting the filesystem with the &lt;STRONG&gt;nomtime&lt;/STRONG&gt; option might help. It reduces the amount of inter-node coordination the cluster filesystem must do, at the price that the modification timestamps seen on one node may not accurately reflect that the other node has modified the data until at least 60 seconds have passed since the modification.&lt;/P&gt;</description>
    <pubDate>Tue, 19 Sep 2017 13:08:22 GMT</pubDate>
    <dc:creator>Matti_Kurkela</dc:creator>
    <dc:date>2017-09-19T13:08:22Z</dc:date>
    <item>
      <title>system_multi_node is low performance when running at the same time</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/system-multi-node-is-low-performance-when-running-at-the-same/m-p/6976574#M490394</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;I have an issue with my&amp;nbsp;&lt;SPAN&gt;Serviceguard cluster of 2 nodes. When both nodes run at the same time, even though only one node is used by users (the other is not configured for user access), disk utilization is very high (~80%-100%); when I halt one of the nodes, everything works fine.&lt;BR /&gt;Can you explain this?&lt;BR /&gt;Thank you!&lt;BR /&gt;Here are the config file and status:&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Config of the 2 nodes:&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;PACKAGE_NAME SG-CFS-pkg&lt;/P&gt;&lt;P&gt;PACKAGE_TYPE SYSTEM_MULTI_NODE&lt;/P&gt;&lt;P&gt;FAILOVER_POLICY CONFIGURED_NODE&lt;/P&gt;&lt;P&gt;FAILBACK_POLICY AUTOMATIC&lt;/P&gt;&lt;P&gt;NODE_NAME *&lt;/P&gt;&lt;P&gt;AUTO_RUN YES&lt;/P&gt;&lt;P&gt;LOCAL_LAN_FAILOVER_ALLOWED YES&lt;/P&gt;&lt;P&gt;NODE_FAIL_FAST_ENABLED YES&lt;/P&gt;&lt;P&gt;RUN_SCRIPT /etc/cmcluster/cfs/SG-CFS-pkg.sh&lt;BR /&gt;RUN_SCRIPT_TIMEOUT 300&lt;BR /&gt;HALT_SCRIPT /etc/cmcluster/cfs/SG-CFS-pkg.sh&lt;BR /&gt;HALT_SCRIPT_TIMEOUT 300&lt;BR /&gt;SCRIPT_LOG_FILE /etc/cmcluster/cfs/SG-CFS-pkg.log&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;# vx monitor of vxconfigd daemon&lt;/P&gt;&lt;P&gt;SERVICE_NAME SG-CFS-vxconfigd&lt;BR /&gt;SERVICE_FAIL_FAST_ENABLED yes&lt;BR /&gt;SERVICE_HALT_TIMEOUT 5&lt;/P&gt;&lt;P&gt;# Serviceguard configuration monitor&lt;/P&gt;&lt;P&gt;SERVICE_NAME SG-CFS-sgcvmd&lt;BR /&gt;SERVICE_FAIL_FAST_ENABLED yes&lt;BR /&gt;SERVICE_HALT_TIMEOUT 5&lt;/P&gt;&lt;P&gt;# Filesystem check daemon monitor&lt;/P&gt;&lt;P&gt;SERVICE_NAME SG-CFS-vxfsckd&lt;BR /&gt;SERVICE_FAIL_FAST_ENABLED yes&lt;BR /&gt;SERVICE_HALT_TIMEOUT 30&lt;/P&gt;&lt;P&gt;# vx membership coordination daemon&lt;/P&gt;&lt;P&gt;SERVICE_NAME SG-CFS-cmvxd&lt;BR /&gt;SERVICE_FAIL_FAST_ENABLED yes&lt;BR /&gt;SERVICE_HALT_TIMEOUT 5&lt;/P&gt;&lt;P&gt;# vx ping daemon&lt;/P&gt;&lt;P&gt;SERVICE_NAME SG-CFS-cmvxpingd&lt;BR /&gt;SERVICE_FAIL_FAST_ENABLED yes&lt;BR 
/&gt;SERVICE_HALT_TIMEOUT 5&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;MULTI_NODE_PACKAGES&lt;/P&gt;&lt;P&gt;PACKAGE STATUS STATE AUTO_RUN SYSTEM&lt;BR /&gt;SG-CFS-pkg up (1/2) running enabled yes&lt;BR /&gt;&lt;BR /&gt;NODE_NAME STATUS SWITCHING&lt;BR /&gt;app1 down&lt;BR /&gt;&lt;BR /&gt;Script_Parameters:&lt;BR /&gt;ITEM STATUS MAX_RESTARTS RESTARTS NAME&lt;BR /&gt;Service down Unlimited 0 SG-CFS-vxconfigd&lt;BR /&gt;Service down Unlimited 0 SG-CFS-sgcvmd&lt;BR /&gt;Service down Unlimited 0 SG-CFS-vxfsckd&lt;BR /&gt;Service down Unlimited 0 SG-CFS-cmvxd&lt;BR /&gt;Service down Unlimited 0 SG-CFS-cmvxpingd&lt;BR /&gt;&lt;BR /&gt;NODE_NAME STATUS SWITCHING&lt;BR /&gt;app2 up enabled&lt;BR /&gt;&lt;BR /&gt;Script_Parameters:&lt;BR /&gt;ITEM STATUS MAX_RESTARTS RESTARTS NAME&lt;BR /&gt;Service up 0 0 SG-CFS-vxconfigd&lt;BR /&gt;Service up 5 0 SG-CFS-sgcvmd&lt;BR /&gt;Service up 5 0 SG-CFS-vxfsckd&lt;BR /&gt;Service up 0 0 SG-CFS-cmvxd&lt;BR /&gt;Service up 0 0 SG-CFS-cmvxpingd&lt;BR /&gt;&lt;BR /&gt;Other_Attributes:&lt;BR /&gt;ATTRIBUTE_NAME ATTRIBUTE_VALUE&lt;BR /&gt;Style legacy&lt;BR /&gt;Priority no_priority&lt;/P&gt;&lt;P&gt;PACKAGE STATUS STATE AUTO_RUN SYSTEM&lt;BR /&gt;SG-CFS-DG-1 up (1/2) running enabled no&lt;BR /&gt;&lt;BR /&gt;NODE_NAME STATUS STATE SWITCHING&lt;BR /&gt;app1 down unknown&lt;BR /&gt;&lt;BR /&gt;Dependency_Parameters:&lt;BR /&gt;DEPENDENCY_NAME SATISFIED&lt;BR /&gt;SG-CFS-pkg no&lt;BR /&gt;&lt;BR /&gt;NODE_NAME STATUS STATE SWITCHING&lt;BR /&gt;app2 up running enabled&lt;BR /&gt;&lt;BR /&gt;Dependency_Parameters:&lt;BR /&gt;DEPENDENCY_NAME SATISFIED&lt;BR /&gt;SG-CFS-pkg yes&lt;BR /&gt;&lt;BR /&gt;Other_Attributes:&lt;BR /&gt;ATTRIBUTE_NAME ATTRIBUTE_VALUE&lt;BR /&gt;Style legacy&lt;BR /&gt;Priority no_priority&lt;/P&gt;&lt;P&gt;PACKAGE STATUS STATE AUTO_RUN SYSTEM&lt;BR /&gt;SG-CFS-MP-1 up (1/2) running enabled no&lt;BR /&gt;&lt;BR /&gt;NODE_NAME STATUS STATE SWITCHING&lt;BR /&gt;app1 down unknown&lt;BR 
/&gt;&lt;BR /&gt;Dependency_Parameters:&lt;BR /&gt;DEPENDENCY_NAME SATISFIED&lt;BR /&gt;SG-CFS-DG-1 no&lt;BR /&gt;&lt;BR /&gt;NODE_NAME STATUS STATE SWITCHING&lt;BR /&gt;app2 up running enabled&lt;BR /&gt;&lt;BR /&gt;Dependency_Parameters:&lt;BR /&gt;DEPENDENCY_NAME SATISFIED&lt;BR /&gt;SG-CFS-DG-1 yes&lt;BR /&gt;&lt;BR /&gt;Other_Attributes:&lt;BR /&gt;ATTRIBUTE_NAME ATTRIBUTE_VALUE&lt;BR /&gt;Style legacy&lt;BR /&gt;Priority no_priority&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 14 Sep 2017 16:14:37 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/system-multi-node-is-low-performance-when-running-at-the-same/m-p/6976574#M490394</guid>
      <dc:creator>LeThien</dc:creator>
      <dc:date>2017-09-14T16:14:37Z</dc:date>
    </item>
    <item>
      <title>Re: system_multi_node is low performance when running at the same time</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/system-multi-node-is-low-performance-when-running-at-the-same/m-p/6976831#M490395</link>
      <description>&lt;P&gt;ServiceGuard doesn't perform disk I/O; it is a process and resource coordinator. The disk load is coming from the processes running on each system. Use Glance to determine which processes are using the most disk I/O, and use the o command in Glance to sort the processes by disk usage.&lt;/P&gt;</description>
      <pubDate>Sun, 17 Sep 2017 02:21:59 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/system-multi-node-is-low-performance-when-running-at-the-same/m-p/6976831#M490395</guid>
      <dc:creator>Bill Hassell</dc:creator>
      <dc:date>2017-09-17T02:21:59Z</dc:date>
    </item>
    <item>
      <title>Re: system_multi_node is low performance when running at the same time</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/system-multi-node-is-low-performance-when-running-at-the-same/m-p/6977152#M490398</link>
      <description>&lt;P&gt;I monitored them with Glance, and everything is normal when running on one node. But when I run both nodes at the same time, disk I/O is very high and performance is very slow.&lt;/P&gt;</description>
      <pubDate>Mon, 18 Sep 2017 16:14:57 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/system-multi-node-is-low-performance-when-running-at-the-same/m-p/6977152#M490398</guid>
      <dc:creator>LeThien</dc:creator>
      <dc:date>2017-09-18T16:14:57Z</dc:date>
    </item>
    <item>
      <title>Re: system_multi_node is low performance when running at the same time</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/system-multi-node-is-low-performance-when-running-at-the-same/m-p/6977153#M490399</link>
      <description>What are the names of the processes that use the most disk I/O?</description>
      <pubDate>Mon, 18 Sep 2017 16:39:59 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/system-multi-node-is-low-performance-when-running-at-the-same/m-p/6977153#M490399</guid>
      <dc:creator>Bill Hassell</dc:creator>
      <dc:date>2017-09-18T16:39:59Z</dc:date>
    </item>
    <item>
      <title>Re: system_multi_node is low performance when running at the same time</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/system-multi-node-is-low-performance-when-running-at-the-same/m-p/6977154#M490397</link>
      <description>&lt;P&gt;You might consider posting this over in the HP-UX System Administration community &lt;A href="https://community.hpe.com/t5/System-Administration/bd-p/itrc-156#.Wb_4fdVSxaQ" target="_blank"&gt;https://community.hpe.com/t5/System-Administration/bd-p/itrc-156#.Wb_4fdVSxaQ&lt;/A&gt; as well, since this is very unlikely to be an actual Serviceguard problem. Serviceguard is most likely just the trigger that activates the CFS on multiple nodes, which is the point where the I/O performance problems actually begin.&lt;/P&gt;&lt;P&gt;I would check that you are up to date on the relevant HP-UX kernel CFS and CVM VRTS-type patches for your operating system (PHKL_XXXXX). It is also proper to ask "What changed?" Was a new filesystem added? Mount options changed? A new node added to the cluster? Patches? Something else?&lt;/P&gt;</description>
      <pubDate>Mon, 18 Sep 2017 16:49:53 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/system-multi-node-is-low-performance-when-running-at-the-same/m-p/6977154#M490397</guid>
      <dc:creator>Mike_Chisholm</dc:creator>
      <dc:date>2017-09-18T16:49:53Z</dc:date>
    </item>
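[Editor's note] The patch-level check suggested above can be sketched as a shell one-liner. This is a hedged example: swlist -l patch lists installed patch products on HP-UX, and the grep pattern (PHKL_/VRTS) is only an illustrative filter for kernel and Veritas CFS/CVM patches; actual patch IDs depend on the OS release.

```shell
# Sketch: list installed HP-UX patches and filter for kernel (PHKL_*)
# and Veritas (VRTS*) patches relevant to CFS/CVM.
# The grep pattern is illustrative; exact patch IDs vary by release.
swlist -l patch | grep -E 'PHKL_|VRTS'
```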
    <item>
      <title>Re: system_multi_node is low performance when running at the same time</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/system-multi-node-is-low-performance-when-running-at-the-same/m-p/6977248#M490401</link>
      <description>&lt;P&gt;Does your application frequently update the same file, or even the same blocks of the same file, from both nodes? This kind of "hot block" or "hot file" access is one of the trickiest cases for a cluster filesystem.&lt;/P&gt;&lt;P&gt;If the exactness of the file modification times is not important to you, mounting the filesystem with the &lt;STRONG&gt;nomtime&lt;/STRONG&gt; option might help. It reduces the amount of inter-node coordination the cluster filesystem must do, at the price that the modification timestamps seen on one node may not accurately reflect that the other node has modified the data until at least 60 seconds have passed since the modification.&lt;/P&gt;</description>
      <pubDate>Tue, 19 Sep 2017 13:08:22 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/system-multi-node-is-low-performance-when-running-at-the-same/m-p/6977248#M490401</guid>
      <dc:creator>Matti_Kurkela</dc:creator>
      <dc:date>2017-09-19T13:08:22Z</dc:date>
    </item>
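[Editor's note] The nomtime suggestion above is a VxFS mount option; a minimal hand-mount sketch is shown below. The device path and mount point are placeholders, and in a Serviceguard CFS configuration the mount is normally managed by the CFS mount-point package rather than invoked manually, so this is only an illustration of the option.

```shell
# Sketch: mount a VxFS cluster filesystem with nomtime to relax
# cross-node mtime synchronization. Paths are placeholders.
mount -F vxfs -o cluster,nomtime /dev/vx/dsk/cfsdg/vol1 /cfs_mount
```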
  </channel>
</rss>

