<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Howto reestablish cluster lock on running cluster? in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/howto-reestablish-cluster-lock-on-running-cluster/m-p/3571175#M701085</link>
    <description>Forgot, Rita, you mentioned I should also settle the number of packages.&lt;BR /&gt;I have to admit that I left this value at 10 while the cluster currently only hosts 3 packages.&lt;BR /&gt;I thought I would thus be prepared to add further packages online should the need ever arise.&lt;BR /&gt;On the other hand, I now consider the downside this may have in wasted resources.&lt;BR /&gt;</description>
    <pubDate>Mon, 27 Jun 2005 07:26:15 GMT</pubDate>
    <dc:creator>Ralph Grothe</dc:creator>
    <dc:date>2005-06-27T07:26:15Z</dc:date>
    <item>
      <title>Howto reestablish cluster lock on running cluster?</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/howto-reestablish-cluster-lock-on-running-cluster/m-p/3571168#M701078</link>
      <description>Hello,&lt;BR /&gt; &lt;BR /&gt;Yesterday I reconfigured this three-node cluster.&lt;BR /&gt;  &lt;BR /&gt;An HP whitepaper on optimizing failover advises setting up a cluster lock disk (or quorum server) even for three- and four-node clusters, although it is not strictly required there to achieve a quorum tie break, so I added one to this new configuration.&lt;BR /&gt; &lt;BR /&gt;Due to pressing time constraints I focused on failover tests after the reconfiguration and obviously did not pay careful enough attention to the entries from cmclconfd.&lt;BR /&gt;  &lt;BR /&gt;So this morning I discovered these disturbing entries in the cluster master's syslog.log:&lt;BR /&gt; &lt;BR /&gt; &lt;BR /&gt;# grep cmcld /var/adm/syslog/syslog.log|tail -3&lt;BR /&gt;Jun 27 09:36:40 jupiter cmcld: WARNING: Cluster lock on disk /dev/dsk/c7t0d0 is missing!&lt;BR /&gt;Jun 27 09:36:40 jupiter cmcld: Until it is fixed, a single failure could&lt;BR /&gt;Jun 27 09:36:40 jupiter cmcld: cause all nodes in the cluster to crash&lt;BR /&gt;  &lt;BR /&gt; &lt;BR /&gt;While yesterday these entries, which I missed, had already appeared:&lt;BR /&gt; &lt;BR /&gt; &lt;BR /&gt;Jun 26 11:00:50 jupiter cmclconfd[3970]: Failed to release volume group /dev/vgdat3&lt;BR /&gt;Jun 26 11:00:54 jupiter cmclconfd[3970]: Failed to release volume group /dev/vgdat4&lt;BR /&gt;Jun 26 11:00:54 jupiter cmclconfd[3970]: Failed to release volume group /dev/vgdat5&lt;BR /&gt;Jun 26 11:00:55 jupiter cmclconfd[3970]: Failed to release volume group /dev/vgbz&lt;BR /&gt;Jun 26 11:00:55 jupiter cmclconfd[3970]: Failed to release volume group /dev/vgzlb&lt;BR /&gt;Jun 26 11:01:28 jupiter cmclconfd[3997]: Initializing cluster lock device /dev/dsk/c7t0d0 for node jupiter.srz.lit.verwalt-berlin.de&lt;BR /&gt;Jun 26 11:01:29 jupiter cmclconfd[3997]: Unable to initialize cluster lock on /dev/dsk/c7t0d0, Volume Group /dev/vgdat1 is not activated&lt;BR /&gt;Jun 26 11:04:11 jupiter cmclconfd[4051]: Failed to release volume group /dev/vgdat3&lt;BR /&gt;Jun 26 11:04:14 jupiter cmclconfd[4051]: Failed to release volume group /dev/vgdat4&lt;BR /&gt;Jun 26 11:04:15 jupiter cmclconfd[4051]: Failed to release volume group /dev/vgbz&lt;BR /&gt;Jun 26 11:04:15 jupiter cmclconfd[4051]: Failed to release volume group /dev/vgzlb&lt;BR /&gt;Jun 26 11:04:51 jupiter cmclconfd[4055]: Initializing cluster lock device /dev/dsk/c7t0d0 for node jupiter.srz.lit.verwalt-berlin.de&lt;BR /&gt; &lt;BR /&gt; &lt;BR /&gt;As far as the lock disk is concerned, the cluster binary has this content:&lt;BR /&gt; &lt;BR /&gt; &lt;BR /&gt;# cmviewconf|grep -i -e lock -e node\ name&lt;BR /&gt;   flags:                               12      (single cluster lock)&lt;BR /&gt;   first lock vg name:                  /dev/vgdat1&lt;BR /&gt;   second lock vg name:                 (not configured)&lt;BR /&gt;      Node name:                        jupiter&lt;BR /&gt;      first lock pv name:               /dev/dsk/c7t0d0&lt;BR /&gt;      first lock disk interface type:   fcparray&lt;BR /&gt;      Node name:                        neptun&lt;BR /&gt;      first lock pv name:               /dev/dsk/c7t0d0&lt;BR /&gt;      first lock disk interface type:   fcparray&lt;BR /&gt;      Node name:                        saturn&lt;BR /&gt;      first lock pv name:               /dev/dsk/c7t0d0&lt;BR /&gt;      first lock disk interface type:   fcparray&lt;BR /&gt;&lt;BR /&gt; &lt;BR /&gt;By means of ioinit reboots on one node I had provided a now clusterwide consistent instance numbering scheme, so that the instance numbers for driver fcparray (whose HW paths connect the cluster's shared PVs), and thus the controller numbers of the lock disk PVs as they appear in the cmviewconf output above, are the same on all nodes.&lt;BR /&gt;  &lt;BR /&gt;On node jupiter:&lt;BR /&gt;  &lt;BR /&gt;[root@jupiter:/root]&lt;BR /&gt;# pvdisplay /dev/dsk/c7t0d0|grep PV\ Name&lt;BR /&gt;PV Name                     /dev/dsk/c7t0d0&lt;BR /&gt;PV Name                     /dev/dsk/c10t0d0    Alternate Link&lt;BR /&gt; &lt;BR /&gt; &lt;BR /&gt;On node saturn:&lt;BR /&gt; &lt;BR /&gt;[root@saturn:/root]&lt;BR /&gt;# vgchange -a r vgdat1 &amp;amp;&amp;amp; pvdisplay /dev/dsk/c7t0d0 &amp;amp;&amp;amp; vgchange -a n vgdat1&lt;BR /&gt;Activated volume group&lt;BR /&gt;Volume group "vgdat1" has been successfully changed.&lt;BR /&gt;--- Physical volumes ---&lt;BR /&gt;PV Name                     /dev/dsk/c7t0d0&lt;BR /&gt;PV Name                     /dev/dsk/c10t0d0    Alternate Link&lt;BR /&gt;VG Name                     /dev/vgdat1&lt;BR /&gt;PV Status                   available&lt;BR /&gt;Allocatable                 yes&lt;BR /&gt;VGDA                        2&lt;BR /&gt;Cur LV                      1&lt;BR /&gt;PE Size (Mbytes)            8&lt;BR /&gt;Total PE                    880&lt;BR /&gt;Free PE                     0&lt;BR /&gt;Allocated PE                880&lt;BR /&gt;Stale PE                    0&lt;BR /&gt;IO Timeout (Seconds)        default&lt;BR /&gt;Autoswitch                  On&lt;BR /&gt;&lt;BR /&gt;Volume group "vgdat1" has been successfully changed.&lt;BR /&gt; &lt;BR /&gt;On node neptun:&lt;BR /&gt; &lt;BR /&gt;[root@neptun:/root]&lt;BR /&gt;# vgchange -a r vgdat1 &amp;amp;&amp;amp; pvdisplay /dev/dsk/c7t0d0 &amp;amp;&amp;amp; vgchange -a n vgdat1&lt;BR /&gt;Activated volume group&lt;BR /&gt;Volume group "vgdat1" has been successfully changed.&lt;BR /&gt;--- Physical volumes ---&lt;BR /&gt;PV Name                     /dev/dsk/c7t0d0&lt;BR /&gt;PV Name                     /dev/dsk/c10t0d0    Alternate Link&lt;BR /&gt;VG Name                     /dev/vgdat1&lt;BR /&gt;PV Status                   available&lt;BR /&gt;Allocatable                 yes&lt;BR /&gt;VGDA                        2&lt;BR /&gt;Cur LV                      1&lt;BR /&gt;PE Size (Mbytes)            8&lt;BR /&gt;Total PE                    880&lt;BR /&gt;Free PE                     0&lt;BR /&gt;Allocated PE                880&lt;BR /&gt;Stale PE                    0&lt;BR /&gt;IO Timeout (Seconds)        default&lt;BR /&gt;Autoswitch                  On&lt;BR /&gt;&lt;BR /&gt;Volume group "vgdat1" has been successfully changed.&lt;BR /&gt; &lt;BR /&gt; &lt;BR /&gt;You see, the cluster lock PV should be accessible from all cluster nodes.&lt;BR /&gt; &lt;BR /&gt;What went wrong?&lt;BR /&gt; &lt;BR /&gt;Can I reestablish the lock in the running cluster, or will this require a cluster restart?&lt;BR /&gt; &lt;BR /&gt;Rgds.&lt;BR /&gt;Ralph&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Mon, 27 Jun 2005 03:36:11 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/howto-reestablish-cluster-lock-on-running-cluster/m-p/3571168#M701078</guid>
      <dc:creator>Ralph Grothe</dc:creator>
      <dc:date>2005-06-27T03:36:11Z</dc:date>
    </item>
    <item>
      <title>Re: Howto reestablish cluster lock on running cluster?</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/howto-reestablish-cluster-lock-on-running-cluster/m-p/3571169#M701079</link>
      <description>Ralph,&lt;BR /&gt;&lt;BR /&gt;you may try the attached binary, which is typically used to re-initialize a failed quorum disk after replacement while the cluster remains up and running.&lt;BR /&gt;&lt;BR /&gt;Regards&lt;BR /&gt;Bernhard</description>
      <pubDate>Mon, 27 Jun 2005 04:15:34 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/howto-reestablish-cluster-lock-on-running-cluster/m-p/3571169#M701079</guid>
      <dc:creator>Bernhard Mueller</dc:creator>
      <dc:date>2005-06-27T04:15:34Z</dc:date>
    </item>
    <item>
      <title>Re: Howto reestablish cluster lock on running cluster?</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/howto-reestablish-cluster-lock-on-running-cluster/m-p/3571170#M701080</link>
      <description>The binary should be named cminitlock.</description>
      <pubDate>Mon, 27 Jun 2005 04:16:30 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/howto-reestablish-cluster-lock-on-running-cluster/m-p/3571170#M701080</guid>
      <dc:creator>Bernhard Mueller</dc:creator>
      <dc:date>2005-06-27T04:16:30Z</dc:date>
    </item>
    <item>
      <title>Re: Howto reestablish cluster lock on running cluster?</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/howto-reestablish-cluster-lock-on-running-cluster/m-p/3571171#M701081</link>
      <description>usage: cminitlock [-v] [-t] vg_name pv_name&lt;BR /&gt;       -t Test the cluster lock only.&lt;BR /&gt;       -v Verbose output.&lt;BR /&gt;&lt;BR /&gt;       This command will initialize a cluster&lt;BR /&gt;       lock disk and then query the disk to&lt;BR /&gt;       validate the disk was initialized&lt;BR /&gt;       successfully.  If the -t option is specified,&lt;BR /&gt;       the cluster lock is only queried.&lt;BR /&gt;&lt;BR /&gt;
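For example, with the lock VG and PV from your cmviewconf output (substitute your own vg_name and pv_name), you would first test, then initialize:&lt;BR /&gt;&lt;BR /&gt;# ./cminitlock -v -t /dev/vgdat1 /dev/dsk/c7t0d0&lt;BR /&gt;# ./cminitlock -v /dev/vgdat1 /dev/dsk/c7t0d0&lt;BR /&gt;&lt;BR /&gt;HTH&lt;BR /&gt;Regards,&lt;BR /&gt;Bernhard</description>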
      <pubDate>Mon, 27 Jun 2005 04:41:43 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/howto-reestablish-cluster-lock-on-running-cluster/m-p/3571171#M701081</guid>
      <dc:creator>Bernhard Mueller</dc:creator>
      <dc:date>2005-06-27T04:41:43Z</dc:date>
    </item>
    <item>
      <title>Re: Howto reestablish cluster lock on running cluster?</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/howto-reestablish-cluster-lock-on-running-cluster/m-p/3571172#M701082</link>
      <description>Well, I'm not certain exactly what was done to your cluster and what happened during the reconfiguration, except for one thing, which your cluster has already told you: your cluster lock disk is not being properly seen.&lt;BR /&gt;&lt;BR /&gt;There are a few things that can/should only be addressed "properly" with the cluster down:&lt;BR /&gt;changing timing parms&lt;BR /&gt;changing the max number of configured packages&lt;BR /&gt;changing IPs&lt;BR /&gt;...and yes.....the cluster lock&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Options I'd recommend are: fix your lock disk -OR- get rid of the lock disk and set up a Quorum Server (you can do this even though you only have a 3-node cluster). The Quorum Server is easy to install; you can download it from:  &lt;A href="http://docs.hp.com/en/ha.html#Quorum%20Server" target="_blank"&gt;http://docs.hp.com/en/ha.html#Quorum%20Server&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;
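Roughly, the cluster ASCII file would then reference the quorum server instead of a lock VG. A sketch from memory (qshost is a placeholder, and I'm assuming all three nodes share the domain shown in your syslog; check the Quorum Server release notes for the exact parameters and defaults):&lt;BR /&gt;&lt;BR /&gt;# in the cluster configuration ASCII file, replacing FIRST_CLUSTER_LOCK_VG/PV:&lt;BR /&gt;QS_HOST                 qshost&lt;BR /&gt;QS_POLLING_INTERVAL     300000000&lt;BR /&gt;&lt;BR /&gt;# on the quorum server host, /etc/cmcluster/qs_authfile lists the nodes allowed to connect:&lt;BR /&gt;jupiter.srz.lit.verwalt-berlin.de&lt;BR /&gt;neptun.srz.lit.verwalt-berlin.de&lt;BR /&gt;saturn.srz.lit.verwalt-berlin.de&lt;BR /&gt;&lt;BR /&gt;Just my thoughts,&lt;BR /&gt;Rita&lt;BR /&gt;&lt;BR /&gt;</description>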
      <pubDate>Mon, 27 Jun 2005 06:59:27 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/howto-reestablish-cluster-lock-on-running-cluster/m-p/3571172#M701082</guid>
      <dc:creator>Rita C Workman</dc:creator>
      <dc:date>2005-06-27T06:59:27Z</dc:date>
    </item>
    <item>
      <title>Re: Howto reestablish cluster lock on running cluster?</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/howto-reestablish-cluster-lock-on-running-cluster/m-p/3571173#M701083</link>
      <description>Hello Bernhard,&lt;BR /&gt; &lt;BR /&gt;many thanks for supplying me with the right tool.&lt;BR /&gt;Before I received your reply, I had also filed a SW case with HP.&lt;BR /&gt;They suggested the same tool you mentioned.&lt;BR /&gt; &lt;BR /&gt;So I executed this:&lt;BR /&gt; &lt;BR /&gt;[root@jupiter:/usr/local/sbin]&lt;BR /&gt;# ./cminitlock -v -t /dev/vgdat1 /dev/dsk/c7t0d0&lt;BR /&gt;Stating /dev/dsk/c7t0d0&lt;BR /&gt;Opening /dev/dsk/c7t0d0&lt;BR /&gt;-t flag specificed. Testing the cluster lock only.&lt;BR /&gt;Calling inquery lock IOCTL 3&lt;BR /&gt;Cluster lock inquiry request succeeded&lt;BR /&gt;Checking Cluster lock on /dev/dsk/c7t0d0&lt;BR /&gt;Calling query lock IOCTL 3&lt;BR /&gt;QUERY Cluster lock ioctl succeeded.&lt;BR /&gt;Cluster lock query operation failed, errno 2: No such file or directory&lt;BR /&gt;Cluster lock on disk /dev/dsk/c7t0d0 is missing!:No such file or directory&lt;BR /&gt;Cluster lock on /dev/dsk/c7t0d0 is not initialized.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;[root@jupiter:/usr/local/sbin]&lt;BR /&gt;# ./cminitlock -v /dev/vgdat1 /dev/dsk/c7t0d0&lt;BR /&gt;Stating /dev/dsk/c7t0d0&lt;BR /&gt;Opening /dev/dsk/c7t0d0&lt;BR /&gt;Initializing the cluster lock /dev/dsk/c7t0d0&lt;BR /&gt;Calling LVM_ASYNC_CLUSTER_LOCK&lt;BR /&gt;Checking Cluster lock on /dev/dsk/c7t0d0&lt;BR /&gt;Calling query lock IOCTL 3&lt;BR /&gt;QUERY Cluster lock ioctl succeeded.&lt;BR /&gt;Lock is not Owned.&lt;BR /&gt;Cluster lock is initialized.&lt;BR /&gt; &lt;BR /&gt; &lt;BR /&gt;And finally I backed up the LVM configuration:&lt;BR /&gt;  &lt;BR /&gt;  &lt;BR /&gt;[root@jupiter:/usr/local/sbin]&lt;BR /&gt;# vgcfgbackup /dev/vgdat1&lt;BR /&gt;Volume Group configuration for /dev/vgdat1 has been saved in /etc/lvmconf/vgdat1.conf&lt;BR /&gt;  &lt;BR /&gt;[root@jupiter:/usr/local/sbin]&lt;BR /&gt;# remsh saturn 'PATH=/usr/sbin; vgchange -a r vgdat1 &amp;amp;&amp;amp; vgcfgbackup vgdat1 &amp;amp;&amp;amp; vgchange -a n vgdat1'&lt;BR /&gt;Activated volume group&lt;BR /&gt;Volume group "vgdat1" has been successfully changed.&lt;BR /&gt;Volume Group configuration for /dev/vgdat1 has been saved in /etc/lvmconf/vgdat1.conf&lt;BR /&gt;Volume group "vgdat1" has been successfully changed.&lt;BR /&gt;   &lt;BR /&gt;[root@jupiter:/usr/local/sbin]&lt;BR /&gt;# remsh neptun 'PATH=/usr/sbin; vgchange -a r vgdat1 &amp;amp;&amp;amp; vgcfgbackup vgdat1 &amp;amp;&amp;amp; vgchange -a n vgdat1'&lt;BR /&gt;Activated volume group&lt;BR /&gt;Volume group "vgdat1" has been successfully changed.&lt;BR /&gt;Volume Group configuration for /dev/vgdat1 has been saved in /etc/lvmconf/vgdat1.conf&lt;BR /&gt;Volume group "vgdat1" has been successfully changed.&lt;BR /&gt; &lt;BR /&gt;  &lt;BR /&gt;I'm not sure if this has already done the trick.&lt;BR /&gt;
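To verify, I suppose I can re-run the test query and keep watching syslog for further cmcld warnings:&lt;BR /&gt;&lt;BR /&gt;# ./cminitlock -v -t /dev/vgdat1 /dev/dsk/c7t0d0&lt;BR /&gt;# grep cmcld /var/adm/syslog/syslog.log|tail -3&lt;BR /&gt;</description>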
      <pubDate>Mon, 27 Jun 2005 07:11:10 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/howto-reestablish-cluster-lock-on-running-cluster/m-p/3571173#M701083</guid>
      <dc:creator>Ralph Grothe</dc:creator>
      <dc:date>2005-06-27T07:11:10Z</dc:date>
    </item>
    <item>
      <title>Re: Howto reestablish cluster lock on running cluster?</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/howto-reestablish-cluster-lock-on-running-cluster/m-p/3571174#M701084</link>
      <description>Hi Rita,&lt;BR /&gt; &lt;BR /&gt;thank you for your suggestions.&lt;BR /&gt; &lt;BR /&gt;Regarding the quorum server, I'm not yet clear whether this isn't somewhat contradictory.&lt;BR /&gt;To me it makes no sense to use a quorum server as long as the server itself isn't highly available, which would call for yet another cluster or some kind of replication setup just to be able to provide a tie breaker whenever one is required.&lt;BR /&gt;This sounds a bit like overkill unless you already have another production cluster that could share this task.&lt;BR /&gt;But probably I have totally missed the notion of cluster stalemate tie-breaking and quorum servers.</description>
      <pubDate>Mon, 27 Jun 2005 07:21:01 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/howto-reestablish-cluster-lock-on-running-cluster/m-p/3571174#M701084</guid>
      <dc:creator>Ralph Grothe</dc:creator>
      <dc:date>2005-06-27T07:21:01Z</dc:date>
    </item>
    <item>
      <title>Re: Howto reestablish cluster lock on running cluster?</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/howto-reestablish-cluster-lock-on-running-cluster/m-p/3571175#M701085</link>
      <description>Forgot, Rita, you mentioned I should also settle the number of packages.&lt;BR /&gt;I have to admit that I left this value at 10 while the cluster currently only hosts 3 packages.&lt;BR /&gt;I thought I would thus be prepared to add further packages online should the need ever arise.&lt;BR /&gt;On the other hand, I now consider the downside this may have in wasted resources.&lt;BR /&gt;
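For reference, that value is the MAX_CONFIGURED_PACKAGES parameter in the cluster configuration; a quick way to check the current setting (cluster_name being a placeholder for our actual cluster name):&lt;BR /&gt;&lt;BR /&gt;# cmgetconf -c cluster_name /tmp/cluster.ascii&lt;BR /&gt;# grep MAX_CONFIGURED_PACKAGES /tmp/cluster.ascii&lt;BR /&gt;MAX_CONFIGURED_PACKAGES         10&lt;BR /&gt;</description>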
      <pubDate>Mon, 27 Jun 2005 07:26:15 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/howto-reestablish-cluster-lock-on-running-cluster/m-p/3571175#M701085</guid>
      <dc:creator>Ralph Grothe</dc:creator>
      <dc:date>2005-06-27T07:26:15Z</dc:date>
    </item>
    <item>
      <title>Re: Howto reestablish cluster lock on running cluster?</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/howto-reestablish-cluster-lock-on-running-cluster/m-p/3571176#M701086</link>
      <description>Ralph,&lt;BR /&gt;&lt;BR /&gt;to me this looks like your lock disk issue is fixed. If it were not, you would get frequent messages in the syslog file (I think at least every 6 hours or so).&lt;BR /&gt;&lt;BR /&gt;Leave your number of packages at ten; this does not waste resources and, as you say, lets you add packages on the fly.&lt;BR /&gt;&lt;BR /&gt;In most cases it is also considered safer to have a cluster lock disk. There are configurations which make a quorum server more reasonable, but you need to take a very close look at the network config and failure scenarios.&lt;BR /&gt;&lt;BR /&gt;Regards&lt;BR /&gt;Bernhard</description>
      <pubDate>Tue, 28 Jun 2005 03:25:04 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/howto-reestablish-cluster-lock-on-running-cluster/m-p/3571176#M701086</guid>
      <dc:creator>Bernhard Mueller</dc:creator>
      <dc:date>2005-06-28T03:25:04Z</dc:date>
    </item>
  </channel>
</rss>

