<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: Major VG problems after MC/SG cluster crashed hard in Operating System - Linux</title>
    <link>https://community.hpe.com/t5/operating-system-linux/major-vg-problems-after-mc-sg-cluster-crashed-hard/m-p/5150689#M56533</link>
    <description>Thanks for looking.</description>
    <pubDate>Fri, 16 Jan 2009 01:55:20 GMT</pubDate>
    <dc:creator>John Rayl</dc:creator>
    <dc:date>2009-01-16T01:55:20Z</dc:date>
    <item>
      <title>Major VG problems after MC/SG cluster crashed hard</title>
      <link>https://community.hpe.com/t5/operating-system-linux/major-vg-problems-after-mc-sg-cluster-crashed-hard/m-p/5150682#M56526</link>
      <description>Storage is an EVA 6000&lt;BR /&gt;Type: HSV200&lt;BR /&gt;Version: 6100&lt;BR /&gt;Software: CR0EB0xc3p-6100&lt;BR /&gt;&lt;BR /&gt;Two-node SGLX cluster, RHEL AS 4, SG 11.18. Two ProLiant DL380 G5 servers, each with 4 Gb QLogic FC cards.&lt;BR /&gt;&lt;BR /&gt;Several 1 TB LUNs, three 500 GB.&lt;BR /&gt;&lt;BR /&gt;A day ago, the first node had a hard disk failure (3-drive RAID) that left the machine barely able to boot, with no way to log in.&lt;BR /&gt;&lt;BR /&gt;The NFS/SMB fileserver package moved to node two with no problems.&lt;BR /&gt;&lt;BR /&gt;While I was working on getting node one back up, node two crashed.&lt;BR /&gt;&lt;BR /&gt;Of course SGLX cannot start, as I did not have node one back up to form a cluster.&lt;BR /&gt;&lt;BR /&gt;For a sanity check I wanted to just activate the VGs, mount the lvols and give the data a once-over.&lt;BR /&gt;&lt;BR /&gt;No such luck. No VGs to activate.&lt;BR /&gt;&lt;BR /&gt;pvscan -d output:&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;  Incorrect metadata area header checksum&lt;BR /&gt;  Incorrect metadata area header checksum&lt;BR /&gt;  Incorrect metadata area header checksum&lt;BR /&gt;  PV /dev/cciss/c0d0p2   VG VolGroup00   lvm2 [136.56 GB / 96.00 MB free]&lt;BR /&gt;  PV /dev/sda1     lvm2 [1023.80 MB]&lt;BR /&gt;  PV /dev/sdb1     lvm2 [999.99 GB]&lt;BR /&gt;  PV /dev/sdc1     lvm2 [900.00 GB]&lt;BR /&gt;  PV /dev/sdd1     lvm2 [350.00 GB]&lt;BR /&gt;  PV /dev/sde1     lvm2 [499.99 GB]&lt;BR /&gt;  PV /dev/sdf1     lvm2 [549.99 GB]&lt;BR /&gt;</description>
      <pubDate>Thu, 15 Jan 2009 06:54:11 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/major-vg-problems-after-mc-sg-cluster-crashed-hard/m-p/5150682#M56526</guid>
      <dc:creator>John Rayl</dc:creator>
      <dc:date>2009-01-15T06:54:11Z</dc:date>
    </item>
    <item>
      <title>Re: Major VG problems after MC/SG cluster crashed hard</title>
      <link>https://community.hpe.com/t5/operating-system-linux/major-vg-problems-after-mc-sg-cluster-crashed-hard/m-p/5150683#M56527</link>
      <description>vgscan output:&lt;BR /&gt;&lt;BR /&gt;Reading all physical volumes.  This may take a while...&lt;BR /&gt;Incorrect metadata area header checksum&lt;BR /&gt;Found volume group "VolGroup00" using metadata type lvm2&lt;BR /&gt;&lt;BR /&gt;No problems with this cluster or the EVA for nearly two years, then this...&lt;BR /&gt;&lt;BR /&gt;I do have very recent snapshots on the EVA, but that is it.&lt;BR /&gt;&lt;BR /&gt;I need to recover this data!</description>
      <pubDate>Thu, 15 Jan 2009 06:59:14 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/major-vg-problems-after-mc-sg-cluster-crashed-hard/m-p/5150683#M56527</guid>
      <dc:creator>John Rayl</dc:creator>
      <dc:date>2009-01-15T06:59:14Z</dc:date>
    </item>
    <item>
      <title>Re: Major VG problems after MC/SG cluster crashed hard</title>
      <link>https://community.hpe.com/t5/operating-system-linux/major-vg-problems-after-mc-sg-cluster-crashed-hard/m-p/5150684#M56528</link>
      <description>Multiple hardware failures. Ouch.&lt;BR /&gt;&lt;BR /&gt;"Incorrect metadata area header checksum" sounds like the system crashed at the moment it was updating the LVM structures.&lt;BR /&gt;&lt;BR /&gt;You'll need an LVM metadata consistency check (vgck). The vgscan command will only pick up intact VGs.&lt;BR /&gt;&lt;BR /&gt;What does "vgck -v" report?&lt;BR /&gt;&lt;BR /&gt;For more diagnostics, you might run "vgscan -vvv". The output may be rather large: redirect it to a file.&lt;BR /&gt;&lt;BR /&gt;To check for ServiceGuard's VG lock tags, the command "vgs -o +tags" might be useful too. If the node that was running the package has crashed, the lock tag may still be in place: you may have to remove the tag if you want to activate the VG manually.&lt;BR /&gt;&lt;BR /&gt;MK</description>
      <pubDate>Thu, 15 Jan 2009 07:21:48 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/major-vg-problems-after-mc-sg-cluster-crashed-hard/m-p/5150684#M56528</guid>
      <dc:creator>Matti_Kurkela</dc:creator>
      <dc:date>2009-01-15T07:21:48Z</dc:date>
    </item>
    <item>
      <title>Re: Major VG problems after MC/SG cluster crashed hard</title>
      <link>https://community.hpe.com/t5/operating-system-linux/major-vg-problems-after-mc-sg-cluster-crashed-hard/m-p/5150685#M56529</link>
      <description>Wow, thanks for the quick reply, Matti!&lt;BR /&gt;&lt;BR /&gt;Here is the output from your suggestions:&lt;BR /&gt;&lt;BR /&gt;vgck -v&lt;BR /&gt;  Finding all volume groups&lt;BR /&gt;  Incorrect metadata area header checksum&lt;BR /&gt;  Finding volume group "VolGroup00"&lt;BR /&gt;&lt;BR /&gt;vgs -o +tags&lt;BR /&gt;  Incorrect metadata area header checksum&lt;BR /&gt;  VG  #PV #LV #SN Attr   VSize   VFree  VG Tags&lt;BR /&gt;  VolGroup00  1   2   0 wz--n- 136.56G 96.00M&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Thu, 15 Jan 2009 07:44:12 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/major-vg-problems-after-mc-sg-cluster-crashed-hard/m-p/5150685#M56529</guid>
      <dc:creator>John Rayl</dc:creator>
      <dc:date>2009-01-15T07:44:12Z</dc:date>
    </item>
    <item>
      <title>Re: Major VG problems after MC/SG cluster crashed hard</title>
      <link>https://community.hpe.com/t5/operating-system-linux/major-vg-problems-after-mc-sg-cluster-crashed-hard/m-p/5150686#M56530</link>
      <description>Output from the vgscan -vvv is attached in my message above.</description>
      <pubDate>Thu, 15 Jan 2009 07:45:31 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/major-vg-problems-after-mc-sg-cluster-crashed-hard/m-p/5150686#M56530</guid>
      <dc:creator>John Rayl</dc:creator>
      <dc:date>2009-01-15T07:45:31Z</dc:date>
    </item>
    <item>
      <title>Re: Major VG problems after MC/SG cluster crashed hard</title>
      <link>https://community.hpe.com/t5/operating-system-linux/major-vg-problems-after-mc-sg-cluster-crashed-hard/m-p/5150687#M56531</link>
      <description>the applicable part to the vgscan -vvv:&lt;BR /&gt;&lt;BR /&gt;Opened /dev/sdb RO&lt;BR /&gt;      /dev/sdb: size is 2097152000 sectors&lt;BR /&gt;        /dev/sdb: block size is 4096 bytes&lt;BR /&gt;        /dev/sdb: Skipping: Partition table signature found&lt;BR /&gt;        Closed /dev/sdb&lt;BR /&gt;        /dev/md16: Skipping (sysfs)&lt;BR /&gt;        Opened /dev/sdb1 RO&lt;BR /&gt;      /dev/sdb1: size is 2097141102 sectors&lt;BR /&gt;        Closed /dev/sdb1&lt;BR /&gt;      /dev/sdb1: size is 2097141102 sectors&lt;BR /&gt;        Opened /dev/sdb1 RO O_DIRECT&lt;BR /&gt;        /dev/sdb1: block size is 1024 bytes&lt;BR /&gt;        Closed /dev/sdb1&lt;BR /&gt;        Using /dev/sdb1&lt;BR /&gt;        Opened /dev/sdb1 RO O_DIRECT&lt;BR /&gt;        /dev/sdb1: block size is 1024 bytes&lt;BR /&gt;      /dev/sdb1: lvm2 label detected&lt;BR /&gt;        lvmcache: /dev/sdb1: now orphaned&lt;BR /&gt;        Closed /dev/sdb1&lt;BR /&gt;        Opened /dev/sdc RO&lt;BR /&gt;      /dev/sdc: size is 1887436800 sectors&lt;BR /&gt;        /dev/sdc: block size is 4096 bytes&lt;BR /&gt;        /dev/sdc: Skipping: Partition table signature found&lt;BR /&gt;        Closed /dev/sdc&lt;BR /&gt;        Opened /dev/sdc1 RO&lt;BR /&gt;      /dev/sdc1: size is 1887428592 sectors&lt;BR /&gt;        Closed /dev/sdc1&lt;BR /&gt;      /dev/sdc1: size is 1887428592 sectors&lt;BR /&gt;        Opened /dev/sdc1 RO O_DIRECT&lt;BR /&gt;        /dev/sdc1: block size is 4096 bytes&lt;BR /&gt;        Closed /dev/sdc1&lt;BR /&gt;        Using /dev/sdc1&lt;BR /&gt;        Opened /dev/sdc1 RO O_DIRECT&lt;BR /&gt;        /dev/sdc1: block size is 4096 bytes&lt;BR /&gt;      /dev/sdc1: lvm2 label detected&lt;BR /&gt;        lvmcache: /dev/sdc1: now orphaned&lt;BR /&gt;        Closed /dev/sdc1&lt;BR /&gt;        Opened /dev/sdd RO&lt;BR /&gt;      /dev/sdd: size is 734003200 sectors&lt;BR /&gt;        /dev/sdd: block size is 4096 bytes&lt;BR /&gt;        /dev/sdd: Skipping: 
Partition table signature found&lt;BR /&gt;        Closed /dev/sdd&lt;BR /&gt;        Opened /dev/sdd1 RO&lt;BR /&gt;      /dev/sdd1: size is 733993722 sectors&lt;BR /&gt;        Closed /dev/sdd1&lt;BR /&gt;      /dev/sdd1: size is 733993722 sectors&lt;BR /&gt;        Opened /dev/sdd1 RO O_DIRECT&lt;BR /&gt;        /dev/sdd1: block size is 1024 bytes&lt;BR /&gt;        Closed /dev/sdd1&lt;BR /&gt;        Using /dev/sdd1&lt;BR /&gt;        Opened /dev/sdd1 RO O_DIRECT&lt;BR /&gt;        /dev/sdd1: block size is 1024 bytes&lt;BR /&gt;      /dev/sdd1: lvm2 label detected&lt;BR /&gt;        lvmcache: /dev/sdd1: now orphaned&lt;BR /&gt;        Closed /dev/sdd1&lt;BR /&gt;        Opened /dev/sde RO&lt;BR /&gt;      /dev/sde: size is 1048576000 sectors&lt;BR /&gt;        /dev/sde: block size is 4096 bytes&lt;BR /&gt;        /dev/sde: Skipping: Partition table signature found&lt;BR /&gt;        Closed /dev/sde&lt;BR /&gt;        Opened /dev/sde1 RO&lt;BR /&gt;      /dev/sde1: size is 1048562487 sectors&lt;BR /&gt;        Closed /dev/sde1&lt;BR /&gt;      /dev/sde1: size is 1048562487 sectors&lt;BR /&gt;        Opened /dev/sde1 RO O_DIRECT&lt;BR /&gt;        /dev/sde1: block size is 512 bytes&lt;BR /&gt;        Closed /dev/sde1&lt;BR /&gt;        Using /dev/sde1&lt;BR /&gt;        Opened /dev/sde1 RO O_DIRECT&lt;BR /&gt;        /dev/sde1: block size is 512 bytes&lt;BR /&gt;      /dev/sde1: lvm2 label detected&lt;BR /&gt;        lvmcache: /dev/sde1: now orphaned&lt;BR /&gt;        Closed /dev/sde1&lt;BR /&gt;        Opened /dev/sdf RO&lt;BR /&gt;      /dev/sdf: size is 1153433600 sectors&lt;BR /&gt;        /dev/sdf: block size is 4096 bytes&lt;BR /&gt;        /dev/sdf: Skipping: Partition table signature found&lt;BR /&gt;        Closed /dev/sdf&lt;BR /&gt;        Opened /dev/sdf1 RO&lt;BR /&gt;      /dev/sdf1: size is 1153418742 sectors&lt;BR /&gt;        Closed /dev/sdf1&lt;BR /&gt;      /dev/sdf1: size is 1153418742 sectors&lt;BR /&gt;        Opened /dev/sdf1 RO 
O_DIRECT&lt;BR /&gt;        /dev/sdf1: block size is 1024 bytes&lt;BR /&gt;        Closed /dev/sdf1&lt;BR /&gt;        Using /dev/sdf1&lt;BR /&gt;        Opened /dev/sdf1 RO O_DIRECT&lt;BR /&gt;        /dev/sdf1: block size is 1024 bytes&lt;BR /&gt;      /dev/sdf1: lvm2 label detected&lt;BR /&gt;        lvmcache: /dev/sdf1: now orphaned&lt;BR /&gt;        Closed /dev/sdf1&lt;BR /&gt;</description>
      <pubDate>Thu, 15 Jan 2009 07:49:15 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/major-vg-problems-after-mc-sg-cluster-crashed-hard/m-p/5150687#M56531</guid>
      <dc:creator>John Rayl</dc:creator>
      <dc:date>2009-01-15T07:49:15Z</dc:date>
    </item>
    <item>
      <title>Re: Major VG problems after MC/SG cluster crashed hard</title>
      <link>https://community.hpe.com/t5/operating-system-linux/major-vg-problems-after-mc-sg-cluster-crashed-hard/m-p/5150688#M56532</link>
      <description>SWEEEEETT!! I figured it out! I just resurrected THE DEAD!&lt;BR /&gt;&lt;BR /&gt;After I thought the Google well had run dry, I went back one more time and found this little gem:&lt;BR /&gt;&lt;BR /&gt;pvcreate --uuid "cqH4SD-VrCw-jMsN-GcwH-omCq-ThpE-dO9KmJ" --restorefile /etc/lvm/backup/vg_04 /dev/sdd1&lt;BR /&gt;&lt;BR /&gt;Now this ONLY works if you have the automatic vgcfgbackup left ON! So leave it on for times like this, when you must carry out digital miracles!&lt;BR /&gt;&lt;BR /&gt;IF you have vgcfgbackup running automatically, then every time a vgchange occurs a file gets created in /etc/lvm/backup, and the previous one is moved to /etc/lvm/archive.&lt;BR /&gt;&lt;BR /&gt;Look for the latest one in either spot, and look in the file for the id next to the /dev/sdxx device file name, under the "physical_volumes" section:&lt;BR /&gt;&lt;BR /&gt;physical_volumes {&lt;BR /&gt;&lt;BR /&gt;  pv0 {&lt;BR /&gt;   id = "cqH4SD-VrCw-jMsN-GcwH-omCq-ThpE-dO9KmJ"&lt;BR /&gt;   device = "/dev/sdd1" # Hint only&lt;BR /&gt;&lt;BR /&gt;   status = ["ALLOCATABLE"]&lt;BR /&gt;   pe_start = 384&lt;BR /&gt;   pe_count = 11199 # 349.969 Gigabytes&lt;BR /&gt;  }&lt;BR /&gt; }&lt;BR /&gt;&lt;BR /&gt;So you run (example UUID):&lt;BR /&gt;pvcreate --uuid "cqH4SD-VrCw-jMsN-GcwH-omCq-ThpE-dO9KmJ" --restorefile /etc/lvm/backup/vg_04 /dev/sdd1&lt;BR /&gt;&lt;BR /&gt;vgcfgrestore vg_04&lt;BR /&gt;&lt;BR /&gt;Activation tags, if you use them in your SGLX cluster: vgchange --addtag machine.name.com vg_04&lt;BR /&gt;&lt;BR /&gt;vgchange -a y vg_04&lt;BR /&gt;&lt;BR /&gt;fsck /dev/vg_04/lvolxxx&lt;BR /&gt;&lt;BR /&gt;mount /dev/vg_04/lvolxxx /back/from/the/dead&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;This is a must-have for your bag of digital healing!&lt;BR /&gt;&lt;BR /&gt;Quick, somebody give me some points for figuring this out!</description>
      <pubDate>Thu, 15 Jan 2009 09:41:53 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/major-vg-problems-after-mc-sg-cluster-crashed-hard/m-p/5150688#M56532</guid>
      <dc:creator>John Rayl</dc:creator>
      <dc:date>2009-01-15T09:41:53Z</dc:date>
    </item>
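    <!-- Editor's note: the recovery walkthrough in the post above can be condensed into a dry-run sketch. Nothing below touches LVM; it only extracts the PV UUID from a vgcfgbackup file and prints the commands you would then run by hand. The VG name vg_04, PV /dev/sdd1, and backup path come from the post; the LV name lvol1 and mount point /mnt/recovered are hypothetical placeholders.

```shell
#!/bin/sh
# Dry-run sketch of the vgcfgbackup-based PV recovery described above.
# It parses the backup file and PRINTS the recovery commands; it does
# not execute them, so you can review before running anything as root.
print_recovery_commands() {
    # $1 = path to the vgcfgbackup file, $2 = VG name, $3 = PV device
    backup=$1 vg=$2 pv=$3
    # The first id = "..." line *after* "physical_volumes" is the PV's
    # UUID (the VG has its own id line earlier in the file, so we must
    # not grab the first id in the whole file).
    uuid=$(awk 'insec && $1 == "id" { gsub(/"/, "", $3); print $3; exit }
                /physical_volumes/ { insec = 1 }' "$backup")
    cat <<EOF
pvcreate --uuid "$uuid" --restorefile $backup $pv
vgcfgrestore $vg
vgchange --addtag \$(hostname) $vg   # only if SGLX lock tags are in use
vgchange -a y $vg
fsck /dev/$vg/lvol1
mount /dev/$vg/lvol1 /mnt/recovered
EOF
}
```

    As in the thread, this only helps if automatic metadata backups were enabled, so a current copy exists under /etc/lvm/backup or /etc/lvm/archive. -->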
    <item>
      <title>Re: Major VG problems after MC/SG cluster crashed hard</title>
      <link>https://community.hpe.com/t5/operating-system-linux/major-vg-problems-after-mc-sg-cluster-crashed-hard/m-p/5150689#M56533</link>
      <description>Thanks for looking.</description>
      <pubDate>Fri, 16 Jan 2009 01:55:20 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/major-vg-problems-after-mc-sg-cluster-crashed-hard/m-p/5150689#M56533</guid>
      <dc:creator>John Rayl</dc:creator>
      <dc:date>2009-01-16T01:55:20Z</dc:date>
    </item>
    <item>
      <title>Re: Major VG problems after MC/SG cluster crashed hard</title>
      <link>https://community.hpe.com/t5/operating-system-linux/major-vg-problems-after-mc-sg-cluster-crashed-hard/m-p/5150690#M56534</link>
      <description>SWEET too!</description>
      <pubDate>Fri, 25 Sep 2009 21:20:14 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/major-vg-problems-after-mc-sg-cluster-crashed-hard/m-p/5150690#M56534</guid>
      <dc:creator>Zinky</dc:creator>
      <dc:date>2009-09-25T21:20:14Z</dc:date>
    </item>
  </channel>
</rss>

