<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: service guard failover in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/service-guard-failover/m-p/2446405#M9462</link>
    <description>Hi:&lt;BR /&gt;&lt;BR /&gt;OK, from your latest reply we will assume that NO LVM changes have been made to either node (?) and that the cmclconfig file is the same on both nodes (?).&lt;BR /&gt;&lt;BR /&gt;Next, did the package actually halt correctly on the primary node?  Look at the /etc/cmcluster/&lt;PACKAGE&gt;/control.sh.log on the primary node to see if there were any problems deactivating the volume group.  If the volume group didn't deactivate then it can't be adopted by the other node.&lt;BR /&gt;&lt;BR /&gt;Also, please post the cmviewcl output.&lt;BR /&gt;&lt;BR /&gt;...JRF...</description>
    <pubDate>Tue, 19 Sep 2000 23:35:33 GMT</pubDate>
    <dc:creator>James R. Ferguson</dc:creator>
    <dc:date>2000-09-19T23:35:33Z</dc:date>
    <item>
      <title>service guard failover</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/service-guard-failover/m-p/2446399#M9456</link>
      <description>Hi &lt;BR /&gt;We run ServiceGuard (A.11.08) on 2 nodes, K220, running HP-UX 11.00.  We have a Sybase database running on it.  The database was shut down abnormally on the primary node and the secondary node did not take over.  In the log file on the secondary node it is mentioned that one of the VGs failed to activate.&lt;BR /&gt;When using the vgdisplay command it gives this message for a couple of PVs:&lt;BR /&gt;"Couldn't query physical volume. The specified path does not correspond to the PV attached to this VG."&lt;BR /&gt;&lt;BR /&gt;pvdisplay on both disks gives the same message:&lt;BR /&gt;"Couldn't query physical volumes.  Could not retrieve the names of the PVs belonging to the VG."&lt;BR /&gt;&lt;BR /&gt;Any help would be appreciated.  Is this the possible reason the package did not switch to the secondary node?&lt;BR /&gt;&lt;BR /&gt;Thanks&lt;BR /&gt;</description>
      <pubDate>Tue, 19 Sep 2000 17:04:14 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/service-guard-failover/m-p/2446399#M9456</guid>
      <dc:creator>Asad Malik</dc:creator>
      <dc:date>2000-09-19T17:04:14Z</dc:date>
    </item>
    <item>
      <title>Re: service guard failover</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/service-guard-failover/m-p/2446400#M9457</link>
      <description>Hi:&lt;BR /&gt;&lt;BR /&gt;The first thing I'd do is make sure that your cmclconfig file is current and the same on all nodes.&lt;BR /&gt;&lt;BR /&gt;...JRF...</description>
      <pubDate>Tue, 19 Sep 2000 17:19:45 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/service-guard-failover/m-p/2446400#M9457</guid>
      <dc:creator>James R. Ferguson</dc:creator>
      <dc:date>2000-09-19T17:19:45Z</dc:date>
    </item>
    <item>
      <title>Re: service guard failover</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/service-guard-failover/m-p/2446401#M9458</link>
      <description>It sounds as though this VG is not correctly configured on the second node.&lt;BR /&gt;&lt;BR /&gt;The easiest way to fix it is to remove it from that system with 'vgexport' and reimport it with 'vgimport'.&lt;BR /&gt;&lt;BR /&gt;If you need any help with the specifics for doing this please repost.&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;John</description>
      <pubDate>Tue, 19 Sep 2000 17:50:16 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/service-guard-failover/m-p/2446401#M9458</guid>
      <dc:creator>John Palmer</dc:creator>
      <dc:date>2000-09-19T17:50:16Z</dc:date>
    </item>
    <item>
      <title>Re: service guard failover</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/service-guard-failover/m-p/2446402#M9459</link>
      <description>My question would be: did this fail over properly before?  If it did, has anything changed in the volume group on the first node?&lt;BR /&gt;To explain:&lt;BR /&gt;If you added disks or made changes to the volume group on the first node, &lt;BR /&gt;did you remember that you must then do vgexport -pvs -m /etc/lvmconf/vg.map volgrp and then rcp that file over to the second node?  &lt;BR /&gt;Then on the second node you would have needed to completely remove /dev/volgrp/group and all files under /dev/volgrp, and then recreate it with mknod.  Now you need to import the information back into this node by doing vgimport -vs -m /etc/lvmconf/vg.map volgrp; this will fix your /etc/lvmtab (so all the drives come up on this second node) and put your /dev/volgrp files out there too.&lt;BR /&gt;And if you added any new logical volumes or new filesystems, so that you changed your package configuration file on the first node, you must also change that information on the other node as well. &lt;BR /&gt;And one last note: I usually take my packages down on all affected nodes while I'm working, so I would have done a cmhaltpkg, then a vgchange -c n /dev/volgrp, then a vgchange -a y /dev/volgrp before I started any changes.  Then, when I was all done and ready to put everything back, I would have reversed these three steps (vgchange -a n /dev/volgrp, then vgchange -c y /dev/volgrp, then cmrunpkg...).&lt;BR /&gt;&lt;BR /&gt;My guess is that a change was made on the primary node, but the secondary node did not get the vgimport mapfile, so your /etc/lvmtab is not current.  I'd run strings on /etc/lvmtab on the second node to check this.  &lt;BR /&gt;&lt;BR /&gt;Just a thought,</description>
      <pubDate>Tue, 19 Sep 2000 18:05:56 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/service-guard-failover/m-p/2446402#M9459</guid>
      <dc:creator>Rita C Workman</dc:creator>
      <dc:date>2000-09-19T18:05:56Z</dc:date>
    </item>
    <item>
      <title>Re: service guard failover</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/service-guard-failover/m-p/2446403#M9460</link>
      <description>What does a:&lt;BR /&gt;strings /etc/lvmtab &lt;BR /&gt;on each node show?&lt;BR /&gt;Has this ever worked, or been tested lately?&lt;BR /&gt;If the PVs do not match what the system knows about, then yes, the VG will fail to activate.&lt;BR /&gt;Another question is: what does ioscan show?&lt;BR /&gt;Do you maybe have a hardware error, in that a path to the discs is down?</description>
      <pubDate>Tue, 19 Sep 2000 18:23:02 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/service-guard-failover/m-p/2446403#M9460</guid>
      <dc:creator>melvyn burnard</dc:creator>
      <dc:date>2000-09-19T18:23:02Z</dc:date>
    </item>
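The lvmtab comparison suggested above can be sketched in portable shell. This is only an illustrative sketch: the node1/node2 variables and the device paths in them are made-up stand-ins for real 'strings /etc/lvmtab' output captured on each node.

```shell
# Sketch only: these variables stand in for 'strings /etc/lvmtab'
# captured on each node (sample paths, not this cluster's real disks).
node1='/dev/vg01
/dev/dsk/c0t1d0
/dev/dsk/c0t2d0'
node2='/dev/vg01
/dev/dsk/c0t1d0'

# Sort each capture, then let comm print only the lines the two
# nodes do not share; any output means the lvmtab files disagree.
printf '%s\n' "$node1" | sort > /tmp/lvmtab.node1
printf '%s\n' "$node2" | sort > /tmp/lvmtab.node2
mismatch=$(comm -3 /tmp/lvmtab.node1 /tmp/lvmtab.node2)
printf 'Entries not common to both nodes:\n%s\n' "$mismatch"
```

Empty output from comm would mean both nodes list the same VG and PV paths; here the sketch flags /dev/dsk/c0t2d0, which only the first node knows about.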
    <item>
      <title>Re: service guard failover</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/service-guard-failover/m-p/2446404#M9461</link>
      <description>On both nodes, strings /etc/lvmtab &lt;BR /&gt;shows the same # of VGs and the respective PVs for each VG.  ioscan displays the disks and they are in the CLAIMED state.&lt;BR /&gt;I have seen it work before.</description>
      <pubDate>Tue, 19 Sep 2000 21:07:11 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/service-guard-failover/m-p/2446404#M9461</guid>
      <dc:creator>Asad Malik</dc:creator>
      <dc:date>2000-09-19T21:07:11Z</dc:date>
    </item>
    <item>
      <title>Re: service guard failover</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/service-guard-failover/m-p/2446405#M9462</link>
      <description>Hi:&lt;BR /&gt;&lt;BR /&gt;OK, from your latest reply we will assume that NO LVM changes have been made to either node (?) and that the cmclconfig file is the same on both nodes (?).&lt;BR /&gt;&lt;BR /&gt;Next, did the package actually halt correctly on the primary node?  Look at the /etc/cmcluster/&lt;PACKAGE&gt;/control.sh.log on the primary node to see if there were any problems deactivating the volume group.  If the volume group didn't deactivate then it can't be adopted by the other node.&lt;BR /&gt;&lt;BR /&gt;Also, please post the cmviewcl output.&lt;BR /&gt;&lt;BR /&gt;...JRF...</description>
      <pubDate>Tue, 19 Sep 2000 23:35:33 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/service-guard-failover/m-p/2446405#M9462</guid>
      <dc:creator>James R. Ferguson</dc:creator>
      <dc:date>2000-09-19T23:35:33Z</dc:date>
    </item>
    <item>
      <title>Re: service guard failover</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/service-guard-failover/m-p/2446406#M9463</link>
      <description>If the package failed to halt properly on the primary server, its VG would not be activatable (or displayable) on the secondary server.  Check the primary package control log file to see just what happened.  The package control log registers the package startup and shutdown messages.   If the package's DB failed abnormally, it's possible that the package may not have shut down all the way due to open data files.   Always review the package log files and syslog.log for clues regarding package adoption problems.</description>
      <pubDate>Wed, 20 Sep 2000 20:28:31 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/service-guard-failover/m-p/2446406#M9463</guid>
      <dc:creator>Stephen Doud</dc:creator>
      <dc:date>2000-09-20T20:28:31Z</dc:date>
    </item>
    <item>
      <title>Re: service guard failover</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/service-guard-failover/m-p/2446407#M9464</link>
      <description>Hi&lt;BR /&gt;cmclconfig on both nodes is the same.&lt;BR /&gt;The package halted correctly on the primary node; all 3 VGs were deactivated successfully.  On the secondary node, 2 of 3 VGs activated successfully; one failed.&lt;BR /&gt;The output of cmviewcl is as under:&lt;BR /&gt;&lt;BR /&gt;CLUSTER      STATUS       &lt;BR /&gt;tang_cl1    up           &lt;BR /&gt;&lt;BR /&gt;  NODE         STATUS       STATE        &lt;BR /&gt;  tang1         up           running      &lt;BR /&gt;&lt;BR /&gt;    Network_Parameters:&lt;BR /&gt;    INTERFACE    STATUS       PATH         NAME         &lt;BR /&gt;    PRIMARY      up           10/4/8       lan1         &lt;BR /&gt;    PRIMARY      up           10/12/6      lan0         &lt;BR /&gt;    STANDBY      up           10/4/16      lan2         &lt;BR /&gt;&lt;BR /&gt;    PACKAGE      STATUS       STATE        PKG_SWITCH   NODE         &lt;BR /&gt;    sybase_pkg   up           running      enabled      tang1       &lt;BR /&gt;&lt;BR /&gt;      Policy_Parameters:&lt;BR /&gt;      POLICY_NAME     CONFIGURED_VALUE&lt;BR /&gt;      Failover        configured_node&lt;BR /&gt;      Failback        manual&lt;BR /&gt;&lt;BR /&gt;      Script_Parameters:&lt;BR /&gt;      ITEM       STATUS   MAX_RESTARTS  RESTARTS   NAME&lt;BR /&gt;      Service    up          Unlimited         0   sybase0 &lt;BR /&gt;      Subnet     up                                10.14.0.0 &lt;BR /&gt;&lt;BR /&gt;      Node_Switching_Parameters:&lt;BR /&gt;      NODE_TYPE    STATUS       SWITCHING    NAME                      &lt;BR /&gt;      Primary      up           enabled      tang1       (current)    &lt;BR /&gt;      Alternate    up           enabled      tang2                    &lt;BR /&gt;&lt;BR /&gt;  NODE         STATUS       STATE        &lt;BR /&gt;  tang2       up           running      &lt;BR /&gt;&lt;BR /&gt;    Network_Parameters:&lt;BR /&gt;    INTERFACE    STATUS       PATH         NAME         &lt;BR /&gt;    PRIMARY      up           10/4/8       lan0         &lt;BR /&gt;    STANDBY      up           10/4/16      lan1         &lt;BR /&gt;    PRIMARY      up           10/12/6      lan2         &lt;BR /&gt;</description>
      <pubDate>Thu, 21 Sep 2000 20:24:28 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/service-guard-failover/m-p/2446407#M9464</guid>
      <dc:creator>Asad Malik</dc:creator>
      <dc:date>2000-09-21T20:24:28Z</dc:date>
    </item>
    <item>
      <title>Re: service guard failover</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/service-guard-failover/m-p/2446408#M9465</link>
      <description>Hi Asad:&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Thu, 21 Sep 2000 22:53:30 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/service-guard-failover/m-p/2446408#M9465</guid>
      <dc:creator>James R. Ferguson</dc:creator>
      <dc:date>2000-09-21T22:53:30Z</dc:date>
    </item>
    <item>
      <title>Re: service guard failover</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/service-guard-failover/m-p/2446409#M9466</link>
      <description>Hi Asad:&lt;BR /&gt;&lt;BR /&gt;On the secondary node, on which the package failed to start, would you please post the /etc/cmcluster/&lt;PACKAGE&gt;/control.sh.log?&lt;BR /&gt;Thanks.&lt;BR /&gt;&lt;BR /&gt;...JRF...</description>
      <pubDate>Thu, 21 Sep 2000 22:55:39 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/service-guard-failover/m-p/2446409#M9466</guid>
      <dc:creator>James R. Ferguson</dc:creator>
      <dc:date>2000-09-21T22:55:39Z</dc:date>
    </item>
    <item>
      <title>Re: service guard failover</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/service-guard-failover/m-p/2446410#M9467</link>
      <description>Hi &lt;BR /&gt;output of cntl.log is attached</description>
      <pubDate>Fri, 22 Sep 2000 12:41:29 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/service-guard-failover/m-p/2446410#M9467</guid>
      <dc:creator>Asad Malik</dc:creator>
      <dc:date>2000-09-22T12:41:29Z</dc:date>
    </item>
    <item>
      <title>Re: service guard failover</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/service-guard-failover/m-p/2446411#M9468</link>
      <description>Asad,&lt;BR /&gt;&lt;BR /&gt;Please do the following on your backup node:-&lt;BR /&gt;&lt;BR /&gt;ll /dev/vg*/group&lt;BR /&gt;&lt;BR /&gt;and check that each volume group has a unique minor number.&lt;BR /&gt;&lt;BR /&gt;For example:-&lt;BR /&gt;&lt;BR /&gt;crw-r-----   1 root       sys         64 0x000000 Jul  6 08:17 /dev/vg00/group&lt;BR /&gt;crw-r-----   1 root       dba         64 0x010000 Sep 12 10:38 /dev/vg01/group&lt;BR /&gt;crw-r-----   1 root       dba         64 0x030000 Sep 21 11:04 /dev/vg03/group&lt;BR /&gt;&lt;BR /&gt;The above groups are 0x00, 0x01 and 0x03.&lt;BR /&gt;&lt;BR /&gt;Your problem could be due to having configured two groups with the same number.&lt;BR /&gt;&lt;BR /&gt;If your vg_cl3 has the same minor number then you will have to vgexport it to remove it, then repeat your original vgimport process but use a unique value in your 'mknod group....' command.&lt;BR /&gt;&lt;BR /&gt;Hope this helps,&lt;BR /&gt;John</description>
      <pubDate>Fri, 22 Sep 2000 12:55:36 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/service-guard-failover/m-p/2446411#M9468</guid>
      <dc:creator>John Palmer</dc:creator>
      <dc:date>2000-09-22T12:55:36Z</dc:date>
    </item>
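The duplicate-minor check described above can be automated with a small awk filter. This is only a sketch: the sample listing is the secondary node's output as later posted in this thread; on a live system you would pipe the real 'll /dev/vg*/group' output into the filter instead.

```shell
# Sketch: detect duplicate volume-group minor numbers in the output
# of 'll /dev/vg*/group'. Sample data is the secondary node's listing
# from this thread, where vg01 and vg_cl3 collide on 0x050000.
ll_output='crw-r-----   1 root       sys         64 0x000000 May  7  1998 /dev/vg00/group
crw-rw-rw-   1 root       sys         64 0x050000 Aug 24  1998 /dev/vg01/group
crw-rw-rw-   1 root       sys         64 0x030000 Jun  2  1998 /dev/vg_cl1/group
crw-rw-rw-   1 root       sys         64 0x040000 Jun  2  1998 /dev/vg_cl2/group
crw-rw-rw-   1 root       sys         64 0x050000 May 27  1998 /dev/vg_cl3/group'

# Field 6 is the minor number; the last field is the group file path.
# Collect paths per minor and print any minor owned by more than one VG.
dups=$(printf '%s\n' "$ll_output" |
  awk '{ seen[$6] = (seen[$6] ? seen[$6] " " $NF : $NF); count[$6]++ }
       END { for (m in count) if (count[m] > 1)
               print "duplicate minor " m ": " seen[m] }')
printf '%s\n' "$dups"
```

With this input the filter reports the 0x050000 collision between /dev/vg01/group and /dev/vg_cl3/group; no output would mean every group file has a unique minor.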
    <item>
      <title>Re: service guard failover</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/service-guard-failover/m-p/2446412#M9469</link>
      <description>Hi&lt;BR /&gt;You are absolutely right.  The minor # is the same as that of another VG.  The output of &lt;BR /&gt;ll /dev/vg*/group is displayed below for both the primary and secondary nodes.&lt;BR /&gt;&lt;BR /&gt;primary node &lt;BR /&gt;crw-r--r--   1 root       sys         64 0x000000 Jul 10  1997 /dev/vg00/group&lt;BR /&gt;crw-r--r--   1 root       sys         64 0x020000 May 26  1998 /dev/vg01/group&lt;BR /&gt;crw-r-----   1 sybase     sybdba      64 0x030000 Oct 23  1999 /dev/vg_cl1/group&lt;BR /&gt;crw-r-----   1 sybase     sybdba      64 0x040000 Oct 23  1999 /dev/vg_cl2/group&lt;BR /&gt;crw-r-----   1 sybase     sybdba      64 0x050000 Oct 23  1999 /dev/vg_cl3/group&lt;BR /&gt;crw-rw-rw-   1 root       sys         64 0x010000 Aug 26  1997 /dev/vg_sybase/group&lt;BR /&gt;&lt;BR /&gt;output of bdf on the primary node:&lt;BR /&gt;Filesystem          kbytes    used   avail %used Mounted on&lt;BR /&gt;/dev/vg00/lvol3     103413   71613   21458   77% /&lt;BR /&gt;/dev/vg00/lvol1      47829   28732   14314   67% /stand&lt;BR /&gt;/dev/vg00/lvol8     598357  304093  234428   56% /var&lt;BR /&gt;/dev/vg00/lvol7     646229  489279   92327   84% /usr&lt;BR /&gt;/dev/vg01/lv_syb   4190208 2645219 1448746   65% /u1&lt;BR /&gt;/dev/vg00/lvol6     299157   80361  188880   30% /tmp&lt;BR /&gt;/dev/vg00/lvol5     498645  424490   24290   95% /opt&lt;BR /&gt;/dev/vg00/lvol4      19861   14388    3486   80% /home&lt;BR /&gt;/dev/vg_cl1/lv_syb 4190208 2992703 1122686   73% /u1_cl&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;secondary node, where the failover failed &lt;BR /&gt;&lt;BR /&gt;crw-r-----   1 root       sys         64 0x000000 May  7  1998 /dev/vg00/group&lt;BR /&gt;crw-rw-rw-   1 root       sys         64 0x050000 Aug 24  1998 /dev/vg01/group&lt;BR /&gt;crw-rw-rw-   1 root       sys         64 0x030000 Jun  2  1998 /dev/vg_cl1/group&lt;BR /&gt;crw-rw-rw-   1 root       sys         64 0x040000 Jun  2  1998 /dev/vg_cl2/group&lt;BR /&gt;crw-rw-rw-   1 root       sys         64 0x050000 May 27  1998 /dev/vg_cl3/group&lt;BR /&gt;&lt;BR /&gt;output of bdf on the secondary node:&lt;BR /&gt;Filesystem          kbytes    used   avail %used Mounted on&lt;BR /&gt;/dev/root           115605   43303   60741   42% /&lt;BR /&gt;/dev/vg00/lvol1      47829   28042   15004   65% /stand&lt;BR /&gt;/dev/vg00/lvol8     626413  269849  293922   48% /var&lt;BR /&gt;/dev/vg00/lvol7     650261  465757  119477   80% /usr&lt;BR /&gt;/dev/vg00/lvol13    299157   17930  251311    7% /tmp&lt;BR /&gt;/dev/vg00/lvol6     749973  526873  148102   78% /opt&lt;BR /&gt;/dev/vg00/lvol5      19861   10536    7338   59% /home&lt;BR /&gt;/dev/vg01/lvol1    2048000 1655795  367853   82% /u2&lt;BR /&gt;&lt;BR /&gt;/dev/vg01 was activated and its filesystem mounted on the secondary node recently.&lt;BR /&gt;&lt;BR /&gt;As I am not very strong in LVM, I would greatly appreciate a step-by-step procedure for how I should proceed from here, and on which node.  &lt;BR /&gt;&lt;BR /&gt;Thanks a lot</description>
      <pubDate>Fri, 22 Sep 2000 13:19:33 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/service-guard-failover/m-p/2446412#M9469</guid>
      <dc:creator>Asad Malik</dc:creator>
      <dc:date>2000-09-22T13:19:33Z</dc:date>
    </item>
    <item>
      <title>Re: service guard failover</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/service-guard-failover/m-p/2446413#M9470</link>
      <description>Asad,&lt;BR /&gt;&lt;BR /&gt;Your problem on the second node appears to have been caused by vg01 having been created with minor number 05 since you last failed over.&lt;BR /&gt;&lt;BR /&gt;It is easily solved however. Proceed as follows on the secondary node:-&lt;BR /&gt;&lt;BR /&gt;vgexport vg_cl3&lt;BR /&gt;&lt;BR /&gt;This will remove vg_cl3 from the system.&lt;BR /&gt;&lt;BR /&gt;On the primary node do:-&lt;BR /&gt;vgexport -p -v -s -m /tmp/map vg_cl3&lt;BR /&gt;then copy the map file '/tmp/map' to your secondary node with rcp or ftp.&lt;BR /&gt;&lt;BR /&gt;On the secondary node do:&lt;BR /&gt;mkdir /dev/vg_cl3&lt;BR /&gt;mknod /dev/vg_cl3/group c 64 0x060000&lt;BR /&gt;vgimport -m /tmp/map -s -v vg_cl3&lt;BR /&gt;&lt;BR /&gt;That's it - you will be able to do this with the cluster running, I tested this earlier in the week.&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;John&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Fri, 22 Sep 2000 13:31:00 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/service-guard-failover/m-p/2446413#M9470</guid>
      <dc:creator>John Palmer</dc:creator>
      <dc:date>2000-09-22T13:31:00Z</dc:date>
    </item>
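The mknod step in the procedure above requires choosing a free minor number by hand (0x060000 in the post). That choice can be sketched in shell; the used list here is hard-coded as the union of the minors shown for both nodes in this thread, not collected live.

```shell
# Sketch: find the lowest minor number not used by any group file,
# for the 'mknod ... c 64 0xNN0000' step. The used list is the union
# of minors from both nodes in this thread; normally you would build
# it from 'll /dev/vg*/group' on every node that can run the package.
used='0x000000 0x010000 0x020000 0x030000 0x040000 0x050000'

next=
i=0
while [ $i -le 255 ]; do
  hex=$(printf '0x%02x0000' $i)
  case " $used " in
    *" $hex "*) ;;                # minor already taken, keep looking
    *) next=$hex; break ;;
  esac
  i=$((i + 1))
done
printf 'next free minor: %s\n' "$next"
```

Scanning the union of both nodes' minors matters: 0x010000 is free on the secondary node but belongs to vg_sybase on the primary, so the first minor free everywhere is 0x060000, matching the value suggested above.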
    <item>
      <title>Re: service guard failover</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/service-guard-failover/m-p/2446414#M9471</link>
      <description>Hi&lt;BR /&gt;Could this be an alternate solution?&lt;BR /&gt;&lt;BR /&gt;On the secondary node /dev/vg01 has been activated recently and it is not part of the cluster.  If the filesystem is unmounted and vg01 is deactivated, can vgexport and vgimport be applied to this VG on the secondary node only?  Like:&lt;BR /&gt;&lt;BR /&gt;on the secondary node&lt;BR /&gt;unmount /u2&lt;BR /&gt;deactivate /dev/vg01&lt;BR /&gt;vgexport -m /tmp/mapfile /dev/vg01&lt;BR /&gt;recreate /dev/vg01 with a minor number other than /dev/vg_cl3's&lt;BR /&gt;vgimport -m mapfile /dev/vg01&lt;BR /&gt;&lt;BR /&gt;and now we have different minor numbers for these two VGs.&lt;BR /&gt;&lt;BR /&gt;Just checking whether this procedure can be applied in this situation.&lt;BR /&gt;Thanks&lt;BR /&gt;</description>
      <pubDate>Fri, 22 Sep 2000 14:41:04 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/service-guard-failover/m-p/2446414#M9471</guid>
      <dc:creator>Asad Malik</dc:creator>
      <dc:date>2000-09-22T14:41:04Z</dc:date>
    </item>
    <item>
      <title>Re: service guard failover</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/service-guard-failover/m-p/2446415#M9472</link>
      <description>Yes, this will work as well. It's just that reimporting vg_cl3 didn't require you to unmount any filesystems etc.</description>
      <pubDate>Fri, 22 Sep 2000 14:45:48 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/service-guard-failover/m-p/2446415#M9472</guid>
      <dc:creator>John Palmer</dc:creator>
      <dc:date>2000-09-22T14:45:48Z</dc:date>
    </item>
    <item>
      <title>Re: service guard failover</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/service-guard-failover/m-p/2446416#M9473</link>
      <description>Hi &lt;BR /&gt;I shall try the fix this coming weekend.&lt;BR /&gt;But a question:&lt;BR /&gt;how did the 2 VGs end up with the same minor number?  Is it because one VG was deactivated and a new one was created, or is there something else?&lt;BR /&gt;&lt;BR /&gt;Thanks</description>
      <pubDate>Wed, 27 Sep 2000 10:53:41 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/service-guard-failover/m-p/2446416#M9473</guid>
      <dc:creator>Asad Malik</dc:creator>
      <dc:date>2000-09-27T10:53:41Z</dc:date>
    </item>
    <item>
      <title>Re: service guard failover</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/service-guard-failover/m-p/2446417#M9474</link>
      <description>How did you create vg01? &lt;BR /&gt;&lt;BR /&gt;If manually, then someone didn't check for unique group IDs (ll /dev/vg*/group).&lt;BR /&gt;&lt;BR /&gt;If you go for the solution I posted above - reimporting vg_cl3 rather than vg01 - then you can do it now with no downtime. No need to wait until the weekend.</description>
      <pubDate>Wed, 27 Sep 2000 11:00:15 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/service-guard-failover/m-p/2446417#M9474</guid>
      <dc:creator>John Palmer</dc:creator>
      <dc:date>2000-09-27T11:00:15Z</dc:date>
    </item>
    <item>
      <title>Re: service guard failover</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/service-guard-failover/m-p/2446418#M9475</link>
      <description>crw-r----- 1 root sys 64 0x000000 May 7 1998 /dev/vg00/group &lt;BR /&gt;crw-rw-rw- 1 root sys 64 0x050000 Aug 24 1998 /dev/vg01/group &lt;BR /&gt;crw-rw-rw- 1 root sys 64 0x030000 Jun 2 1998 /dev/vg_cl1/group &lt;BR /&gt;crw-rw-rw- 1 root sys 64 0x040000 Jun 2 1998 /dev/vg_cl2/group &lt;BR /&gt;crw-rw-rw- 1 root sys 64 0x050000 May 27 1998 /dev/vg_cl3/group &lt;BR /&gt;&lt;BR /&gt;Hi&lt;BR /&gt;This is the output of ll /dev/vg*/group from the secondary server, and the dates shown are from 1998 for the two VGs in question.  The package did fail over a few times in 1999 and in 2000 successfully.  If the minor number was the same at that time, did the recent activation of vg01 contribute to this failover failure? &lt;BR /&gt;I shall do it on the weekend because of the customer's wishes.</description>
      <pubDate>Wed, 27 Sep 2000 11:29:53 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/service-guard-failover/m-p/2446418#M9475</guid>
      <dc:creator>Asad Malik</dc:creator>
      <dc:date>2000-09-27T11:29:53Z</dc:date>
    </item>
  </channel>
</rss>

