<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: serviceguard failover problem in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/serviceguard-failover-problem/m-p/3494071#M700352</link>
    <description>OK Bharat, what is strange is that this cluster has been built and running for 5 years, so no changes should have taken place. However, it increasingly looks like a rebuild job.  &lt;BR /&gt;&lt;BR /&gt;What is also strange is that both servers share the same device id for one of the routes to this disk - is that right? &lt;BR /&gt;&lt;BR /&gt;Also, when the backup server couldn't see the array in the main data centre (and should therefore have started the package from its local array) I couldn't do a diskinfo on the disk c9t0d0. However, once the link was re-established I could do a diskinfo from the backup server to that disk device!</description>
    <pubDate>Mon, 28 Feb 2005 05:23:29 GMT</pubDate>
    <dc:creator>CGEYROTH</dc:creator>
    <dc:date>2005-02-28T05:23:29Z</dc:date>
    <item>
      <title>serviceguard failover problem</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/serviceguard-failover-problem/m-p/3494065#M700346</link>
      <description>We have 2 x L2000 (rp5450) servers in a Serviceguard configuration. This is set up as a campus cluster with 1 node + FC60 controller + SC10 in each data centre. The cluster is set up with 2 cluster lock disks, 1 in each array. &lt;BR /&gt;&lt;BR /&gt;We had a failure on the production server which caused a failover to the node/array in the second centre; however, the node in the second centre hung on starting the package. The following errors were in the package control log when it tried to activate the first of 9 Serviceguard-managed volume groups:- &lt;BR /&gt;&lt;BR /&gt;Feb 27 10:02:55 - "hostname": Activating volume group vg01 with exclusive option &lt;BR /&gt;. &lt;BR /&gt;vgchange: Warning: Couldn't attach to the volume group physical volume "/dev/dsk/c9t0d0": &lt;BR /&gt;The path of the physical volume refers to a device that does not exist, or is not configured into the kernel.</description>
      <pubDate>Mon, 28 Feb 2005 03:34:45 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/serviceguard-failover-problem/m-p/3494065#M700346</guid>
      <dc:creator>CGEYROTH</dc:creator>
      <dc:date>2005-02-28T03:34:45Z</dc:date>
    </item>
    <item>
      <title>Re: serviceguard failover problem</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/serviceguard-failover-problem/m-p/3494066#M700347</link>
      <description>looks like you need to check the LVM config and disk availability on the second node</description>
      <pubDate>Mon, 28 Feb 2005 03:49:39 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/serviceguard-failover-problem/m-p/3494066#M700347</guid>
      <dc:creator>melvyn burnard</dc:creator>
      <dc:date>2005-02-28T03:49:39Z</dc:date>
    </item>
    <item>
      <title>Re: serviceguard failover problem</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/serviceguard-failover-problem/m-p/3494067#M700348</link>
      <description>&lt;BR /&gt;Use vgcfgrestore on the second node.&lt;BR /&gt;&lt;BR /&gt;Also check for a loose connection to the disk (use ioscan -fnC disk).</description>
      <pubDate>Mon, 28 Feb 2005 03:54:31 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/serviceguard-failover-problem/m-p/3494067#M700348</guid>
      <dc:creator>Ravi_8</dc:creator>
      <dc:date>2005-02-28T03:54:31Z</dc:date>
    </item>
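Ravi's two suggestions above can be sketched as a short command sequence (a sketch for HP-UX, run as root on the failover node; vg01 and c9t0d0 are the names from this thread):

```shell
# Scan all disk-class hardware and check the suspect path is CLAIMED, not NO_HW
ioscan -fnC disk

# Confirm the kernel can actually talk to the device
diskinfo /dev/rdsk/c9t0d0

# List the on-disk LVM configuration backups, then restore the headers
# for vg01 onto the PV if they are damaged
vgcfgrestore -l -n /dev/vg01
vgcfgrestore -n /dev/vg01 /dev/rdsk/c9t0d0
```

Note that vgcfgrestore only helps if the LVM metadata on the disk is corrupt; if ioscan shows the path as NO_HW, the problem is hardware or fabric connectivity, not LVM.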
    <item>
      <title>Re: serviceguard failover problem</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/serviceguard-failover-problem/m-p/3494068#M700349</link>
      <description>Hi&lt;BR /&gt;&lt;BR /&gt;Is there any way that you can restore the cluster to the working node and re-vgexport ALL vgs?  You can then re-import the LVM configuration to the failover node.&lt;BR /&gt;&lt;BR /&gt;Keith</description>
      <pubDate>Mon, 28 Feb 2005 03:54:56 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/serviceguard-failover-problem/m-p/3494068#M700349</guid>
      <dc:creator>Keith Bryson</dc:creator>
      <dc:date>2005-02-28T03:54:56Z</dc:date>
    </item>
    <item>
      <title>Re: serviceguard failover problem</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/serviceguard-failover-problem/m-p/3494069#M700350</link>
      <description>melvyn,&lt;BR /&gt;&lt;BR /&gt;When you say check the LVM config, what do you mean - the LVM conf files or something else? &lt;BR /&gt;When you say check the disk availability, do you mean ioscans, diskinfo? I'm not sure I can do a diskinfo on the disk in question unless the disks are activated on that node.&lt;BR /&gt;&lt;BR /&gt;I did run a script written by Dietmar Konermann that uses the VGDA to identify the device names from each node, and this is what it shows:-&lt;BR /&gt;&lt;BR /&gt;***** LVM-VG: 0161901557-0965999312&lt;BR /&gt;2    backup:c8t0d0    0161901557-0971456414 0/2/0/0.8.0.4.0.0.0 HP/A5277A (0x01/vg01/0161901557-0965999312)&lt;BR /&gt;     backup:c9t0d0    0161901557-0971456414 0/6/0/0.8.0.5.0.0.0 HP/A5277A (0x01/vg01/0161901557-0965999312)&lt;BR /&gt;     prod:c6t0d0    0161901557-0971456414 0/2/0/0.8.0.4.0.0.0 HP/A5277A (0x01/vg01/0161901557-0965999312)&lt;BR /&gt;     prod:c9t0d0    0161901557-0971456414 0/6/0/0.8.0.5.0.0.0 HP/A5277A (0x01/vg01/0161901557-0965999312)&lt;BR /&gt;&lt;BR /&gt;As you can see, both prod and backup share the same device id for one of the routes to the disk - is that normal?</description>
      <pubDate>Mon, 28 Feb 2005 04:06:05 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/serviceguard-failover-problem/m-p/3494069#M700350</guid>
      <dc:creator>CGEYROTH</dc:creator>
      <dc:date>2005-02-28T04:06:05Z</dc:date>
    </item>
    <item>
      <title>Re: serviceguard failover problem</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/serviceguard-failover-problem/m-p/3494070#M700351</link>
      <description>Hi,&lt;BR /&gt;An LVM configuration issue means the VG configuration is not properly replicated across the two nodes.&lt;BR /&gt;&lt;BR /&gt;Normally what we do is: &lt;BR /&gt;&lt;BR /&gt;On node 1:&lt;BR /&gt;1. Activate the VG and create a map file using the "vgexport -s -v -p -m mapfilename vgname" command.&lt;BR /&gt;&lt;BR /&gt;2. Identify the PVs used by the VG on node 1 and confirm which devices seen from node 2 (e.g. PV1 and PV2) belong to that VG.&lt;BR /&gt;&lt;BR /&gt;3. On node 2: &lt;BR /&gt;# mkdir /dev/vgxx&lt;BR /&gt;# mknod /dev/vgxx/group c 64 0xNN0000 (where NN is a minor number unique on that node)&lt;BR /&gt;&lt;BR /&gt;4. Copy the map file to node 2, then run "vgimport -s -v -m mapfile vgxx PV1 PV2".&lt;BR /&gt;&lt;BR /&gt;5. Deactivate the VG on node 1; you can then activate it on node 2. &lt;BR /&gt;&lt;BR /&gt;The VG won't activate if the map file is not correct. This basically copies the entire VG structure from node 1 to node 2.&lt;BR /&gt;&lt;BR /&gt;See man vgexport and vgimport.&lt;BR /&gt;&lt;BR /&gt;Hope that helps.&lt;BR /&gt;Regards,</description>
      <pubDate>Mon, 28 Feb 2005 05:10:04 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/serviceguard-failover-problem/m-p/3494070#M700351</guid>
      <dc:creator>Bharat Katkar</dc:creator>
      <dc:date>2005-02-28T05:10:04Z</dc:date>
    </item>
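Bharat's five steps can be sketched end to end (a sketch; vg01 is from this thread, while the hostnames prod/backup, the map file path, the minor number 0x010000, and the PV device files are illustrative and must match your own configuration):

```shell
# --- on node 1 (prod): export the VG structure to a map file ---
vgchange -a y /dev/vg01
# -p = preview only, so the VG definition on node 1 stays intact;
# -s records the VGID so vgimport can scan for matching disks
vgexport -s -v -p -m /tmp/vg01.map /dev/vg01

# copy the map file across
rcp /tmp/vg01.map backup:/tmp/vg01.map

# --- on node 2 (backup): create the group file and import ---
mkdir /dev/vg01
mknod /dev/vg01/group c 64 0x010000   # minor number must be unique per VG on this node
vgimport -s -v -m /tmp/vg01.map /dev/vg01

# --- then swap activation ---
#   node 1: vgchange -a n /dev/vg01
#   node 2: vgchange -a y /dev/vg01
```

In a Serviceguard cluster the package control script, not the administrator, normally performs the activation, so the final vgchange commands are only for verifying the import by hand.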
    <item>
      <title>Re: serviceguard failover problem</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/serviceguard-failover-problem/m-p/3494071#M700352</link>
      <description>OK Bharat, what is strange is that this cluster has been built and running for 5 years, so no changes should have taken place. However, it increasingly looks like a rebuild job.  &lt;BR /&gt;&lt;BR /&gt;What is also strange is that both servers share the same device id for one of the routes to this disk - is that right? &lt;BR /&gt;&lt;BR /&gt;Also, when the backup server couldn't see the array in the main data centre (and should therefore have started the package from its local array) I couldn't do a diskinfo on the disk c9t0d0. However, once the link was re-established I could do a diskinfo from the backup server to that disk device!</description>
      <pubDate>Mon, 28 Feb 2005 05:23:29 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/serviceguard-failover-problem/m-p/3494071#M700352</guid>
      <dc:creator>CGEYROTH</dc:creator>
      <dc:date>2005-02-28T05:23:29Z</dc:date>
    </item>
    <item>
      <title>Re: serviceguard failover problem</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/serviceguard-failover-problem/m-p/3494072#M700353</link>
      <description>My advice is to check and confirm which disks are in use for each VG, then confirm your lvmtab file, compare the two nodes, etc.&lt;BR /&gt;Do an ioscan to see what hardware is showing as NO_HW, etc.&lt;BR /&gt;Also check that your package scripts start the VGs in exclusive mode with no quorum, as in the layout you have you will not meet VG quorum requirements if contact is lost with the other side.&lt;BR /&gt;Also confirm where this disk lies, i.e. is it local or remote to the node in question, and check your syslogs for any other data.&lt;BR /&gt;</description>
      <pubDate>Mon, 28 Feb 2005 05:30:54 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/serviceguard-failover-problem/m-p/3494072#M700353</guid>
      <dc:creator>melvyn burnard</dc:creator>
      <dc:date>2005-02-28T05:30:54Z</dc:date>
    </item>
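Melvyn's checklist can be sketched as commands to run on each node and compare (a sketch; vg01 is from this thread, and the VGCHANGE line is the standard Serviceguard package control script setting rather than something quoted in this thread):

```shell
# Which PV device files does each node believe belong to each VG?
# Run on both nodes and diff the output.
strings /etc/lvmtab

# Any hardware path the kernel has lost contact with shows as NO_HW
ioscan -fnC disk | grep -i no_hw

# Exclusive activation without a quorum check, which is what the package
# control script should be doing for a campus cluster
# (in the control script: VGCHANGE="vgchange -a e -q n")
vgchange -a e -q n /dev/vg01
```

Disabling the quorum check matters here because with one array per site, losing the inter-site link leaves each node seeing only half the PVs, which fails the default quorum requirement exactly as described in the failover above.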
  </channel>
</rss>

