<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Cluster switching fail in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/cluster-switching-fail/m-p/4430366#M561256</link>
    <description>Well I would first halt the package and THEN try to start it on the other node, as it appears you did not do this.&lt;BR /&gt;Also, I suggest you investigate your network as it looks like you have some issues:&lt;BR /&gt;&lt;BR /&gt;NODE STATUS STATE&lt;BR /&gt;dev001 up running&lt;BR /&gt;&lt;BR /&gt;Network_Parameters:&lt;BR /&gt;INTERFACE STATUS PATH NAME&lt;BR /&gt;PRIMARY down 0/0/0 lan0  &amp;lt;&amp;lt;&amp;lt;&amp;lt;&amp;lt;&amp;lt;&amp;lt;&lt;BR /&gt;STANDBY up 0/1/0 lan1&lt;BR /&gt;&lt;BR /&gt;NODE STATUS STATE&lt;BR /&gt;dev002 up running&lt;BR /&gt;&lt;BR /&gt;Network_Parameters:&lt;BR /&gt;INTERFACE STATUS PATH NAME&lt;BR /&gt;PRIMARY up 0/0/0 lan0&lt;BR /&gt;STANDBY down 1/2/0 lan1    &amp;lt;&amp;lt;&amp;lt;&amp;lt;&amp;lt;&amp;lt;&amp;lt;</description>
    <pubDate>Mon, 01 Jun 2009 15:00:22 GMT</pubDate>
    <dc:creator>melvyn burnard</dc:creator>
    <dc:date>2009-06-01T15:00:22Z</dc:date>
    <item>
      <title>Cluster switching fail</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/cluster-switching-fail/m-p/4430364#M561254</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;In my cluster environment we are trying to switch a package back to the primary server.&lt;BR /&gt;&lt;BR /&gt;The package is running on the alternate failover node. Here I'm posting some of the outputs; can anyone please point me in the right direction?&lt;BR /&gt;&lt;BR /&gt;dev001 root# cmviewcl -v&lt;BR /&gt;&lt;BR /&gt;CLUSTER      STATUS&lt;BR /&gt;MDDB         up&lt;BR /&gt;&lt;BR /&gt;  NODE         STATUS       STATE&lt;BR /&gt;  dev001      up           running&lt;BR /&gt;&lt;BR /&gt;    Network_Parameters:&lt;BR /&gt;    INTERFACE    STATUS       PATH         NAME&lt;BR /&gt;    PRIMARY      down         0/0/0        lan0&lt;BR /&gt;    STANDBY      up           0/1/0        lan1&lt;BR /&gt;&lt;BR /&gt;  NODE         STATUS       STATE&lt;BR /&gt;  dev002      up           running&lt;BR /&gt;&lt;BR /&gt;    Network_Parameters:&lt;BR /&gt;    INTERFACE    STATUS       PATH         NAME&lt;BR /&gt;    PRIMARY      up           0/0/0        lan0&lt;BR /&gt;    STANDBY      down         1/2/0        lan1&lt;BR /&gt;&lt;BR /&gt;    PACKAGE      STATUS       STATE        AUTO_RUN     NODE&lt;BR /&gt;    MDDB         up           running      enabled      dev002&lt;BR /&gt;&lt;BR /&gt;      Policy_Parameters:&lt;BR /&gt;      POLICY_NAME     CONFIGURED_VALUE&lt;BR /&gt;      Failover        configured_node&lt;BR /&gt;      Failback        manual&lt;BR /&gt;&lt;BR /&gt;      Script_Parameters:&lt;BR /&gt;      ITEM       STATUS   MAX_RESTARTS  RESTARTS   NAME&lt;BR /&gt;      Subnet     up                                12.10.10.0&lt;BR /&gt;&lt;BR /&gt;      Node_Switching_Parameters:&lt;BR /&gt;      NODE_TYPE    STATUS       SWITCHING    NAME&lt;BR /&gt;      Primary      up           enabled      dev001&lt;BR /&gt;      Alternate    up           enabled      dev002      (current)&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;May 29 08:19:20 dev001 CM-CMD[12837]: cmrunpkg -n dev001 MDDB&lt;BR /&gt;May 29 08:19:20 
dev001 cmcld: Executing '/etc/cmcluster/oracle/control.sh start' for package MDDB, as service PKG*16385.&lt;BR /&gt;May 29 08:19:20 dev001 LVM[12851]: vgchange -a n vgapp&lt;BR /&gt;May 29 08:19:20 dev001 LVM[12854]: vgchange -a n vgdata1&lt;BR /&gt;May 29 08:19:20 dev001 LVM[12857]: vgchange -a n vgdata2&lt;BR /&gt;May 29 08:19:20 dev001 LVM[12860]: vgchange -a n vgdata3&lt;BR /&gt;May 29 08:19:20 dev001 LVM[12863]: vgchange -a n vgdata4&lt;BR /&gt;May 29 08:19:28 dev001 cmcld: Processing exit status for service PKG*16385&lt;BR /&gt;May 29 08:19:28 dev001 cmcld: Service PKG*16385 terminated due to an exit(1).&lt;BR /&gt;May 29 08:19:28 dev001 cmcld: Package MDDB run script exited with NO_RESTART.&lt;BR /&gt;May 29 08:19:28 dev001 cmcld: Examine the file /etc/cmcluster/oracle/control.sh.log for more details.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;        ########### Node "dev001": Starting package at Fri May 29 08:18:02 GMT 2009 ###########&lt;BR /&gt;May 29 08:18:02 - "dev001": Activating volume group vgapp with exclusive option.&lt;BR /&gt;vgchange: Activation of volume group "/dev/vgapp" denied by another node in the cluster.&lt;BR /&gt;Request on this system conflicts with Activation Mode on remote system.&lt;BR /&gt;        ERROR:  Function activate_volume_group&lt;BR /&gt;        ERROR:  Failed to activate vgapp&lt;BR /&gt;May 29 08:18:02 - Node "dev001": Deactivating volume group vgapp&lt;BR /&gt;vgchange: Volume group "vgapp" has been successfully changed.&lt;BR /&gt;May 29 08:18:02 - Node "dev001": Deactivating volume group vgdata1&lt;BR /&gt;vgchange: Volume group "vgdata1" has been successfully changed.&lt;BR /&gt;May 29 08:18:02 - Node "dev001": Deactivating volume group vgdata2&lt;BR /&gt;vgchange: Volume group "vgdata2" has been successfully changed.&lt;BR /&gt;May 29 08:18:02 - Node "dev001": Deactivating volume group vgdata3&lt;BR /&gt;vgchange: Volume group "vgdata3" has 
been successfully changed.&lt;BR /&gt;May 29 08:18:02 - Node "dev001": Deactivating volume group vgdata4&lt;BR /&gt;vgchange: Volume group "vgdata4" has been successfully changed.&lt;BR /&gt;&lt;BR /&gt;        ########### Node "dev001": Starting package at Fri May 29 08:19:20 GMT 2009 ###########&lt;BR /&gt;May 29 08:19:20 - "dev001": Activating volume group vgapp with exclusive option.&lt;BR /&gt;vgchange: Activation of volume group "/dev/vgapp" denied by another node in the cluster.&lt;BR /&gt;Request on this system conflicts with Activation Mode on remote system.&lt;BR /&gt;        ERROR:  Function activate_volume_group&lt;BR /&gt;        ERROR:  Failed to activate vgapp&lt;BR /&gt;May 29 08:19:20 - Node "dev001": Deactivating volume group vgapp&lt;BR /&gt;vgchange: Volume group "vgapp" has been successfully changed.&lt;BR /&gt;May 29 08:19:20 - Node "dev001": Deactivating volume group vgdata1&lt;BR /&gt;vgchange: Volume group "vgdata1" has been successfully changed.&lt;BR /&gt;May 29 08:19:20 - Node "dev001": Deactivating volume group vgdata2&lt;BR /&gt;vgchange: Volume group "vgdata2" has been successfully changed.&lt;BR /&gt;May 29 08:19:20 - Node "dev001": Deactivating volume group vgdata3&lt;BR /&gt;vgchange: Volume group "vgdata3" has been successfully changed.&lt;BR /&gt;May 29 08:19:20 - Node "dev001": Deactivating volume group vgdata4&lt;BR /&gt;vgchange: Volume group "vgdata4" has been successfully changed.&lt;BR /&gt;&lt;BR /&gt;        ########### Node "dev001": Starting package at Fri May 29 08:20:36 GMT 2009 ###########&lt;BR /&gt;May 29 08:20:36 - "dev001": Activating volume group vgapp with exclusive option.&lt;BR /&gt;vgchange: Activation of volume group "/dev/vgapp" denied by another node in the cluster.&lt;BR /&gt;Request on this system conflicts with Activation Mode on remote system.&lt;BR /&gt;        ERROR:  Function activate_volume_group&lt;BR /&gt;        ERROR:  Failed to activate vgapp&lt;BR /&gt;May 29 08:20:36 - Node "dev001": 
Deactivating volume group vgapp&lt;BR /&gt;vgchange: Volume group "vgapp" has been successfully changed.&lt;BR /&gt;May 29 08:20:36 - Node "dev001": Deactivating volume group vgdata1&lt;BR /&gt;vgchange: Volume group "vgdata1" has been successfully changed.&lt;BR /&gt;May 29 08:20:36 - Node "dev001": Deactivating volume group vgdata2&lt;BR /&gt;vgchange: Volume group "vgdata2" has been successfully changed.&lt;BR /&gt;May 29 08:20:36 - Node "dev001": Deactivating volume group vgdata3&lt;BR /&gt;vgchange: Volume group "vgdata3" has been successfully changed.&lt;BR /&gt;May 29 08:20:36 - Node "dev001": Deactivating volume group vgdata4&lt;BR /&gt;vgchange: Volume group "vgdata4" has been successfully changed.&lt;BR /&gt;dev001 root#&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Mon, 01 Jun 2009 14:24:09 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/cluster-switching-fail/m-p/4430364#M561254</guid>
      <dc:creator>hp omni backup</dc:creator>
      <dc:date>2009-06-01T14:24:09Z</dc:date>
    </item>
    <item>
      <title>Re: Cluster switching fail</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/cluster-switching-fail/m-p/4430365#M561255</link>
      <description>Shalom,&lt;BR /&gt;&lt;BR /&gt;The other node is not permitting volume group activation in exclusive mode.&lt;BR /&gt;&lt;BR /&gt;There may be stale activation state on the second node; you may wish to run cmhaltnode to bring that node down and then try again on this node.&lt;BR /&gt;&lt;BR /&gt;vgchange: Activation of volume group "/dev/vgapp" denied by another node in the cluster.&lt;BR /&gt;&lt;BR /&gt;SEP</description>
      <pubDate>Mon, 01 Jun 2009 14:45:44 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/cluster-switching-fail/m-p/4430365#M561255</guid>
      <dc:creator>Steven E. Protter</dc:creator>
      <dc:date>2009-06-01T14:45:44Z</dc:date>
    </item>
    <item>
      <title>Re: Cluster switching fail</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/cluster-switching-fail/m-p/4430366#M561256</link>
      <description>Well I would first halt the package and THEN try to start it on the other node, as it appears you did not do this.&lt;BR /&gt;Also, I suggest you investigate your network as it looks like you have some issues:&lt;BR /&gt;&lt;BR /&gt;NODE STATUS STATE&lt;BR /&gt;dev001 up running&lt;BR /&gt;&lt;BR /&gt;Network_Parameters:&lt;BR /&gt;INTERFACE STATUS PATH NAME&lt;BR /&gt;PRIMARY down 0/0/0 lan0  &amp;lt;&amp;lt;&amp;lt;&amp;lt;&amp;lt;&amp;lt;&amp;lt;&lt;BR /&gt;STANDBY up 0/1/0 lan1&lt;BR /&gt;&lt;BR /&gt;NODE STATUS STATE&lt;BR /&gt;dev002 up running&lt;BR /&gt;&lt;BR /&gt;Network_Parameters:&lt;BR /&gt;INTERFACE STATUS PATH NAME&lt;BR /&gt;PRIMARY up 0/0/0 lan0&lt;BR /&gt;STANDBY down 1/2/0 lan1    &amp;lt;&amp;lt;&amp;lt;&amp;lt;&amp;lt;&amp;lt;&amp;lt;</description>
      <pubDate>Mon, 01 Jun 2009 15:00:22 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/cluster-switching-fail/m-p/4430366#M561256</guid>
      <dc:creator>melvyn burnard</dc:creator>
      <dc:date>2009-06-01T15:00:22Z</dc:date>
    </item>
    <item>
      <title>Re: Cluster switching fail</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/cluster-switching-fail/m-p/4430367#M561257</link>
      <description>Your answer is right there in the cluster message:&lt;BR /&gt;########### Node "dev001": Starting package at Fri May 29 08:20:36 GMT 2009 ###########&lt;BR /&gt;May 29 08:20:36 - "dev001": Activating volume group vgapp with exclusive option.&lt;BR /&gt;vgchange: Activation of volume group "/dev/vgapp" denied by another node in the cluster.&lt;BR /&gt;Request on this system conflicts with Activation Mode on remote system.&lt;BR /&gt;&lt;BR /&gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;It's already running on the other node.  So follow Melvyn's suggestions - run cmhaltpkg on the second node, and then you could just run cmmodpkg -e to bring it up on the first node.  But, again, just like Melvyn told you, check out your LAN connections because it looks like you have issues there.&lt;BR /&gt;&lt;BR /&gt;Rgrds,&lt;BR /&gt;Rita</description>
      <pubDate>Mon, 01 Jun 2009 15:28:05 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/cluster-switching-fail/m-p/4430367#M561257</guid>
      <dc:creator>Rita C Workman</dc:creator>
      <dc:date>2009-06-01T15:28:05Z</dc:date>
    </item>
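    <!-- Editor's sketch of the sequence Melvyn and Rita describe. The package name MDDB and node names dev001/dev002 come from the thread; exact command behavior may vary by Serviceguard version, so treat this as an outline, not a verified procedure. -->

    ```shell
    # Run as root on the cluster. Halt the package where it currently
    # runs, then start it on the primary node and re-enable switching.
    cmhaltpkg MDDB              # halts MDDB on dev002; its control script deactivates the VGs
    cmrunpkg -n dev001 MDDB     # start the package on the primary node
    cmmodpkg -e MDDB            # re-enable package switching (AUTO_RUN)
    cmviewcl -v                 # verify MDDB is up and running on dev001
    ```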
    <item>
      <title>Re: Cluster switching fail</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/cluster-switching-fail/m-p/4430368#M561258</link>
      <description>Well, it can be due to various reasons, but below is what comes to my mind first.&lt;BR /&gt;&lt;BR /&gt;1. The package shutdown failed on the failover node. Whatever caused the package shutdown failure, the VG remained active. Hence you are unable to start it up on the primary.&lt;BR /&gt;&lt;BR /&gt;Try to manually unmount the file systems mounted from lvols in those VGs (kill any process using them) and do vgchange -a n for each VG on the failover node. If you get errors trying to unmount a file system, you might have to reboot the node.&lt;BR /&gt;&lt;BR /&gt;2. The VG was manually activated on the failover server during an earlier failed failover. This means the package script did not clean up correctly.&lt;BR /&gt;&lt;BR /&gt;Either way, you'll need to unmount the file systems and deactivate the VGs manually.</description>
      <pubDate>Tue, 02 Jun 2009 03:04:09 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/cluster-switching-fail/m-p/4430368#M561258</guid>
      <dc:creator>Anoop P_2</dc:creator>
      <dc:date>2009-06-02T03:04:09Z</dc:date>
    </item>
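    <!-- Editor's sketch of the manual cleanup described above, run on the failover node (dev002). The mount point /app is hypothetical: substitute the lvols and mount points actually configured in control.sh. The VG names come from the package log in the thread. -->

    ```shell
    # Find what is still mounted from the package's VGs, stop anything
    # using it, then deactivate each VG so dev001 can activate them
    # in exclusive mode.
    mount | grep vgapp          # list file systems mounted from vgapp
    fuser -ku /app              # hypothetical mount point: kill processes using it
    umount /app
    for vg in vgapp vgdata1 vgdata2 vgdata3 vgdata4
    do
        vgchange -a n $vg       # deactivate the VG on this node
    done
    ```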
    <item>
      <title>Re: Cluster switching fail</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/cluster-switching-fail/m-p/4430369#M561259</link>
      <description>As Melvyn pointed out,&lt;BR /&gt;&lt;BR /&gt;your error message is very clear:&lt;BR /&gt;on dev001,&lt;BR /&gt;&lt;BR /&gt;PRIMARY lan0 is DOWN.&lt;BR /&gt;Check your LAN connections.&lt;BR /&gt;&lt;BR /&gt;What do lanscan and ioscan -fnkClan show?&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Tue, 02 Jun 2009 05:10:33 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/cluster-switching-fail/m-p/4430369#M561259</guid>
      <dc:creator>Syed Nazer Abbas</dc:creator>
      <dc:date>2009-06-02T05:10:33Z</dc:date>
    </item>
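    <!-- Editor's sketch: beyond lanscan and ioscan, the link on the failed primary interface can be probed directly on dev001. PPA 0 corresponds to lan0 per the lanscan output later in the thread; the remote MAC is a placeholder to be taken from lanscan on the peer node. -->

    ```shell
    lanadmin -x 0               # speed/duplex/link status for PPA 0 (lan0)
    linkloop -i 0 0x<peer-MAC>  # layer-2 loopback test from lan0 to the peer's NIC
    ```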
    <item>
      <title>Re: Cluster switching fail</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/cluster-switching-fail/m-p/4430370#M561260</link>
      <description>Thank you all for your support.&lt;BR /&gt;&lt;BR /&gt;I need to do this cluster switching on the weekend.&lt;BR /&gt;&lt;BR /&gt;As you all suggested, I got the following output when I ran lanscan and ioscan -fnkClan:&lt;BR /&gt;&lt;BR /&gt;dev001 root# lanscan&lt;BR /&gt;Hardware Station        Crd Hdw   Net-Interface  NM  MAC       HP-DLPI DLPI&lt;BR /&gt;Path     Address        In# State NamePPA        ID  Type      Support Mjr#&lt;BR /&gt;0/0/0    0x001083F7B33A 0   UP    lan0 snap0     1   ETHER     Yes     119&lt;BR /&gt;0/1/0    0x001083F7B3BB 1   UP    lan1 snap1     2   ETHER     Yes     119&lt;BR /&gt;dev001 root# ioscan -fnkClan&lt;BR /&gt;Class     I  H/W Path  Driver S/W State   H/W Type     Description&lt;BR /&gt;===================================================================&lt;BR /&gt;lan       0  0/0/0     btlan6 CLAIMED     INTERFACE    HP A3738A PCI 10/100Base-TX Ultimate Combo&lt;BR /&gt;                      /dev/diag/lan0  /dev/ether0     /dev/lan0&lt;BR /&gt;lan       1  0/1/0     btlan6 CLAIMED     INTERFACE    HP A3738A PCI 10/100Base-TX Ultimate Combo&lt;BR /&gt;                      /dev/diag/lan1  /dev/ether1     /dev/lan1&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Tue, 02 Jun 2009 18:12:05 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/cluster-switching-fail/m-p/4430370#M561260</guid>
      <dc:creator>hp omni backup</dc:creator>
      <dc:date>2009-06-02T18:12:05Z</dc:date>
    </item>
  </channel>
</rss>

