<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Deleting a node from a running SG cluster in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/deleteing-a-node-from-a-running-sg-cluster/m-p/5156747#M671422</link>
    <description>Finally I worked out all these errors.&lt;BR /&gt;The LAN errors were because, in my cluster ASCII file, the heartbeat had switched from lan0 to lan8; after I changed the number, they disappeared.&lt;BR /&gt;The disk errors were because I forgot to remove the VG owned by the other package from that same file; commenting out those lines solved it as well (I did a vgexport too, just in case).&lt;BR /&gt;Now the last thing needed for it to work fine is to change my cluster lock VG, as it was on the node I want to delete from the cluster.&lt;BR /&gt;Would just changing that VG to another one that my active node has work?&lt;BR /&gt;The cmcheckconf command just tells me that I cannot modify it in a running cluster (I know I have to stop it, but will it work just like that?). I also post the error from my last cmcheckconf:&lt;BR /&gt;&lt;BR /&gt;Error: Modifying FIRST_CLUSTER_LOCK_VG value from /dev/vg01 to /dev/vg03 while cluster tizona is running is not supported.&lt;BR /&gt;&lt;BR /&gt;Thanks a lot</description>
    <pubDate>Fri, 13 Feb 2009 14:21:02 GMT</pubDate>
    <dc:creator>catastro</dc:creator>
    <dc:date>2009-02-13T14:21:02Z</dc:date>
    <item>
      <title>Deleting a node from a running SG cluster</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/deleteing-a-node-from-a-running-sg-cluster/m-p/5156742#M671417</link>
      <description>Hi folks. I am removing a node from a running SG cluster. I was already able to delete the package that was on that node, and I have already stopped the node.&lt;BR /&gt;I looked in the Managing ServiceGuard manual and could not work out how to do it.&lt;BR /&gt;I get several errors when running the cmcheckconf and/or cmgetconf commands.&lt;BR /&gt;They tell me things like:&lt;BR /&gt;&lt;BR /&gt;Error: Unable to determine a unique identifier for physical volume /dev/dsk/c24t1d2 on node ensnada2. Use pvcreate to give the disk an identifier.&lt;BR /&gt;&lt;BR /&gt;ensnada2 is my active node (the one I want to keep in the cluster for now), and all the disks appearing in the messages are currently in use in different VGs of the package this node owns.&lt;BR /&gt;The attached file contains the outputs of the cmcheckconf, cmgetconf, and cmquerycl commands (they report the same errors), plus ioscan -fnCdisk from the active node and /etc/lvmtab.&lt;BR /&gt;I want to do this because we are moving from ServiceGuard to Oracle RAC (the client asked for it). This is the state of my SG cluster now:&lt;BR /&gt;&lt;BR /&gt;ensnada2:/etc/cmcluster#cmviewcl&lt;BR /&gt;&lt;BR /&gt;CLUSTER      STATUS       &lt;BR /&gt;tizona       up           &lt;BR /&gt;&lt;BR /&gt;  NODE         STATUS       STATE        &lt;BR /&gt;  ensnada1     down         halted       &lt;BR /&gt;  ensnada2     up           running      &lt;BR /&gt;&lt;BR /&gt;    PACKAGE      STATUS       STATE        AUTO_RUN     NODE         &lt;BR /&gt;    Oracle2      up           running      disabled     ensnada2     &lt;BR /&gt;&lt;BR /&gt;Any ideas/help would be greatly appreciated. Thanks a lot</description>
      <pubDate>Thu, 12 Feb 2009 17:22:13 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/deleteing-a-node-from-a-running-sg-cluster/m-p/5156742#M671417</guid>
      <dc:creator>catastro</dc:creator>
      <dc:date>2009-02-12T17:22:13Z</dc:date>
    </item>
    <item>
      <title>Re: Deleting a node from a running SG cluster</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/deleteing-a-node-from-a-running-sg-cluster/m-p/5156743#M671418</link>
      <description>Shalom,&lt;BR /&gt;&lt;BR /&gt;On the remaining active node:&lt;BR /&gt;&lt;BR /&gt;cmquerycl&lt;BR /&gt;cmcheckconf&lt;BR /&gt;cmapplyconf&lt;BR /&gt;&lt;BR /&gt;Resolve the disk issue prior to this as indicated in the error message.&lt;BR /&gt;&lt;BR /&gt;SEP</description>
      <pubDate>Thu, 12 Feb 2009 17:27:57 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/deleteing-a-node-from-a-running-sg-cluster/m-p/5156743#M671418</guid>
      <dc:creator>Steven E. Protter</dc:creator>
      <dc:date>2009-02-12T17:27:57Z</dc:date>
    </item>
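    <!-- Editor's note: the three-command sequence SEP lists can be expanded into a sketch of the full node-removal procedure. This is a hedged sketch, not a verified runbook for this cluster: the cluster name (tizona), node names, and file path come from the thread, and the exact flags should be checked against the cmgetconf(1m)/cmcheckconf(1m)/cmapplyconf(1m) manpages.

    ```shell
    # Sketch: remove halted node ensnada1, run from the surviving node ensnada2.
    # Names and paths come from the thread; verify flags against the SG manpages.

    # 1. Capture the running cluster configuration as an ASCII file.
    cmgetconf -c tizona /etc/cmcluster/tizona.ascii

    # 2. Edit the ASCII file: delete the NODE_NAME block for ensnada1,
    #    including its NETWORK_INTERFACE / HEARTBEAT_IP and lock-PV lines.

    # 3. Verify the edited configuration before applying it.
    cmcheckconf -C /etc/cmcluster/tizona.ascii

    # 4. Apply it, regenerating the cluster binary file on the remaining node.
    cmapplyconf -C /etc/cmcluster/tizona.ascii
    ```

    As the thread later shows, step 2 is where the poster's LAN and disk errors came from: stale heartbeat interface and VG lines left in the ASCII file. -->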
    <item>
      <title>Re: Deleting a node from a running SG cluster</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/deleteing-a-node-from-a-running-sg-cluster/m-p/5156744#M671419</link>
      <description>Hi SEP. I had already noticed all that, but thanks anyway. The thing is, those disks are active on the node I want to keep in my SG cluster, and I still get the errors.&lt;BR /&gt;Why this happens and how to solve it is what I want to know. If I run pvcreate on those disks, I will lose all the information on them, won't I?&lt;BR /&gt;Again, thanks anyway</description>
      <pubDate>Thu, 12 Feb 2009 17:35:25 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/deleteing-a-node-from-a-running-sg-cluster/m-p/5156744#M671419</guid>
      <dc:creator>catastro</dc:creator>
      <dc:date>2009-02-12T17:35:25Z</dc:date>
    </item>
    <item>
      <title>Re: Deleting a node from a running SG cluster</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/deleteing-a-node-from-a-running-sg-cluster/m-p/5156745#M671420</link>
      <description>My question would be in reference to something you mention:&lt;BR /&gt;&lt;BR /&gt;the disks are in use on the active node in another volume group. Hmmm...&lt;BR /&gt;&lt;BR /&gt;Wouldn't those be the same disks from the same volume group from the now-inactive node?&lt;BR /&gt;Wouldn't you have just failed over the associated package, so that the same volume group/disks are running on the second node?&lt;BR /&gt;&lt;BR /&gt;Now, the c-t-d numbers may change from node to node, but the minor number of the volume group, and hence the VGID, will not change.&lt;BR /&gt;&lt;BR /&gt;So...&lt;BR /&gt;Is the volume group minor number the same on both nodes?&lt;BR /&gt;Do you have a mapfile created with the -s option, showing the VGID in the mapfile, and is it the same on both nodes?&lt;BR /&gt;&lt;BR /&gt;Just a couple of thoughts,&lt;BR /&gt;Rita</description>
      <pubDate>Thu, 12 Feb 2009 18:54:07 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/deleteing-a-node-from-a-running-sg-cluster/m-p/5156745#M671420</guid>
      <dc:creator>Rita C Workman</dc:creator>
      <dc:date>2009-02-12T18:54:07Z</dc:date>
    </item>
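    <!-- Editor's note: Rita's two checks can be sketched as HP-UX LVM commands. This is an illustrative sketch only: the VG name vg01 and the mapfile path are assumptions for illustration, and the exact output format should be checked against the vgexport(1m) manpage.

    ```shell
    # Check the volume group minor number on each node (vg01 is illustrative).
    # The minor number in the device file (e.g. 0x010000) must match on both nodes.
    ls -l /dev/vg01/group

    # Preview-export (-p) a mapfile with -s so it records the VGID.
    vgexport -p -s -m /tmp/vg01.map vg01

    # The mapfile created with -s carries the VGID; compare it across nodes.
    head -1 /tmp/vg01.map
    ```

    If the VGIDs differ between nodes, the two sides are not looking at the same physical volume group, which would explain the "unique identifier" errors. -->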
    <item>
      <title>Re: Deleting a node from a running SG cluster</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/deleteing-a-node-from-a-running-sg-cluster/m-p/5156746#M671421</link>
      <description>Hi Rita. Could all this be because, as soon as I deleted the package configuration (the one that was on the node I want to delete) and shut that node down out of the cluster, my boss did a vgexport of all the VGs on that node except vg00 and vg02 (the swap VG)?&lt;BR /&gt;&lt;BR /&gt;Now, how could I solve this issue without losing any data? Thanks a lot to both of you.</description>
      <pubDate>Fri, 13 Feb 2009 10:22:09 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/deleteing-a-node-from-a-running-sg-cluster/m-p/5156746#M671421</guid>
      <dc:creator>catastro</dc:creator>
      <dc:date>2009-02-13T10:22:09Z</dc:date>
    </item>
    <item>
      <title>Re: Deleting a node from a running SG cluster</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/deleteing-a-node-from-a-running-sg-cluster/m-p/5156747#M671422</link>
      <description>Finally I worked out all these errors.&lt;BR /&gt;The LAN errors were because, in my cluster ASCII file, the heartbeat had switched from lan0 to lan8; after I changed the number, they disappeared.&lt;BR /&gt;The disk errors were because I forgot to remove the VG owned by the other package from that same file; commenting out those lines solved it as well (I did a vgexport too, just in case).&lt;BR /&gt;Now the last thing needed for it to work fine is to change my cluster lock VG, as it was on the node I want to delete from the cluster.&lt;BR /&gt;Would just changing that VG to another one that my active node has work?&lt;BR /&gt;The cmcheckconf command just tells me that I cannot modify it in a running cluster (I know I have to stop it, but will it work just like that?). I also post the error from my last cmcheckconf:&lt;BR /&gt;&lt;BR /&gt;Error: Modifying FIRST_CLUSTER_LOCK_VG value from /dev/vg01 to /dev/vg03 while cluster tizona is running is not supported.&lt;BR /&gt;&lt;BR /&gt;Thanks a lot</description>
      <pubDate>Fri, 13 Feb 2009 14:21:02 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/deleteing-a-node-from-a-running-sg-cluster/m-p/5156747#M671422</guid>
      <dc:creator>catastro</dc:creator>
      <dc:date>2009-02-13T14:21:02Z</dc:date>
    </item>
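    <!-- Editor's note: as the cmcheckconf error states, FIRST_CLUSTER_LOCK_VG cannot change while the cluster is running. The halt/edit/apply cycle the poster ends up performing can be sketched as follows; the VG names and file path come from the thread, but the flags and any package-restart steps should be verified against the ServiceGuard manpages before use.

    ```shell
    # The lock VG cannot be changed online, so halt the cluster first
    # (-f also halts any running packages).
    cmhaltcl -f

    # Edit the cluster ASCII file: change FIRST_CLUSTER_LOCK_VG from
    # /dev/vg01 to /dev/vg03, and update the matching FIRST_CLUSTER_LOCK_PV
    # entry under each remaining node.

    # Re-check and apply the new configuration (regenerates the binary file).
    cmcheckconf -C /etc/cmcluster/tizona.ascii
    cmapplyconf -C /etc/cmcluster/tizona.ascii

    # Restart the cluster.
    cmruncl
    ```
    -->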
    <item>
      <title>Re: Deleting a node from a running SG cluster</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/deleteing-a-node-from-a-running-sg-cluster/m-p/5156748#M671423</link>
      <description>I regenerated the cluster binary file with cmapplyconf, changing the FIRST_CLUSTER_LOCK_VG parameter from vg01 to vg03.&lt;BR /&gt;After that, all went just fine.&lt;BR /&gt;Thanks for all your help.</description>
      <pubDate>Mon, 16 Feb 2009 16:44:13 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/deleteing-a-node-from-a-running-sg-cluster/m-p/5156748#M671423</guid>
      <dc:creator>catastro</dc:creator>
      <dc:date>2009-02-16T16:44:13Z</dc:date>
    </item>
  </channel>
</rss>

