<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Why doesn't /dev/vg01 get created? Can't keep new volume groups! in Operating System - Linux</title>
    <link>https://community.hpe.com/t5/operating-system-linux/why-doesn-t-dev-vg01-get-created-can-t-keep-new-volume-groups/m-p/5107763#M49693</link>
    <description>vg00 looks fine, maybe also vg02. If your system partitions are on vg00 only, your system should be bootable.&lt;BR /&gt;&lt;BR /&gt;What kind of storage system are you using? SCSI or FibreChannel? If it is accessible through multiple HBAs, can the storage system use all its interfaces simultaneously or does it need something (special signals or just time) to switch from the "active" interface to the "spare" interface?&lt;BR /&gt;&lt;BR /&gt;Can you identify the PVs that are supposed to contain the vg01 in any other way? For example, do you know one of them should be e.g. /dev/sdc or whatever? If you know the devices, what does "pvdisplay -v &lt;DEVICENAME&gt;" report on them? &lt;BR /&gt;&lt;BR /&gt;If it does not have valid Linux LVM PV information, then what does it have? Use "fdisk -l &lt;DEVICENAME&gt;" to see if it has a partition table.&lt;BR /&gt;&lt;BR /&gt;Maybe someone or something has managed to overwrite the LVM identifier on the PVs. These identifiers are created on the PVs when "pvcreate" command is used, and these identifiers are written into LVM configuration data when vgcreate or vgextend is used to create/extend a VG onto new PVs.&lt;BR /&gt;&lt;BR /&gt;Is this storage system dedicated to this server, or shared between multiple servers? &lt;BR /&gt;&lt;BR /&gt;This looks suspiciously like a zoning error in a shared FibreChannel storage system, where two unrelated systems are unintentionally allowed to use the same disk area(s). As each system will assume it's the only one writing to the disk at any given time, this will inevitably lead to disk corruption. &lt;BR /&gt;&lt;BR /&gt;(Server A reads something from the disk and caches it. Then it does some write operations which are cached. Then server B writes to the same disk area; server A does not know its cache is now stale. 
When server A flushes its write cache, the stale data overwrites whatever server B wrote.)&lt;BR /&gt;&lt;BR /&gt;MK</description>
    <pubDate>Fri, 09 May 2008 09:35:41 GMT</pubDate>
    <dc:creator>Matti_Kurkela</dc:creator>
    <dc:date>2008-05-09T09:35:41Z</dc:date>
    <item>
      <title>Why doesn't /dev/vg01 get created? Can't keep new volume groups!</title>
      <link>https://community.hpe.com/t5/operating-system-linux/why-doesn-t-dev-vg01-get-created-can-t-keep-new-volume-groups/m-p/5107756#M49686</link>
      <description>Every new volume group I create disappears after a reboot, and /dev/vg01 never gets created.  I'm using the same methodology I use on x86.  What the heck's going on here?  Help?&lt;BR /&gt;&lt;BR /&gt;I'm running RHEL4 on IA64 and I'm wondering if there's some step I'm missing for creating volume groups.&lt;BR /&gt;&lt;BR /&gt;The OS is installed as you'd expect on vg00, which shows up normally as /dev/vg00/*.&lt;BR /&gt;&lt;BR /&gt;However, every other volume group I create disappears after a reboot.  Even BEFORE the reboot, even though the volume group is working fine, the corresponding /dev entries don't exist and 'dmsetup ls' only lists vg00.&lt;BR /&gt;&lt;BR /&gt;Witness:&lt;BR /&gt;[root@inxx24 ~]# vgcreate -s 32M vg02 /dev/sdb /dev/sdp&lt;BR /&gt;  Volume group "vg02" successfully created&lt;BR /&gt;[root@inxx24 ~]# vgdisplay -v /dev/vg02&lt;BR /&gt;    Using volume group(s) on command line&lt;BR /&gt;    Finding volume group "vg02"&lt;BR /&gt;  --- Volume group ---&lt;BR /&gt;  VG Name               vg02&lt;BR /&gt;  System ID&lt;BR /&gt;  Format                lvm2&lt;BR /&gt;  Metadata Areas        2&lt;BR /&gt;  Metadata Sequence No  1&lt;BR /&gt;  VG Access             read/write&lt;BR /&gt;  VG Status             resizable&lt;BR /&gt;  MAX LV                0&lt;BR /&gt;  Cur LV                0&lt;BR /&gt;  Open LV               0&lt;BR /&gt;  Max PV                0&lt;BR /&gt;  Cur PV                2&lt;BR /&gt;  Act PV                2&lt;BR /&gt;  VG Size               273.44 GB&lt;BR /&gt;  PE Size               32.00 MB&lt;BR /&gt;  Total PE              8750&lt;BR /&gt;  Alloc PE / Size       0 / 0&lt;BR /&gt;  Free  PE / Size       8750 / 273.44 GB&lt;BR /&gt;  VG UUID               2r65jB-tXZ5-2Y1I-8876-11aj-eZRU-dwPc00&lt;BR /&gt;&lt;BR /&gt;  --- Physical volumes ---&lt;BR /&gt;  PV Name               /dev/sdb&lt;BR /&gt;  PV UUID               i7Gwyl-AqNF-OYJ2-Nv61-HzyA-8sae-Mfzted&lt;BR /&gt;  PV Status   
          allocatable&lt;BR /&gt;  Total PE / Free PE    4375 / 4375&lt;BR /&gt;&lt;BR /&gt;  PV Name               /dev/sdp&lt;BR /&gt;  PV UUID               UhCFGR-PVD4-CK6Z-RTeE-80dA-g7yh-dHtadP&lt;BR /&gt;  PV Status             allocatable&lt;BR /&gt;  Total PE / Free PE    4375 / 4375&lt;BR /&gt;&lt;BR /&gt;[root@inxx24 ~]# ls -l /dev/vg02&lt;BR /&gt;ls: /dev/vg02: No such file or directory&lt;BR /&gt;&lt;BR /&gt;Hmmm… maybe we just need to create the directory first ourselves?&lt;BR /&gt;&lt;BR /&gt;[root@inxx24 backup]# vgremove vg02&lt;BR /&gt;  Volume group "vg02" successfully removed&lt;BR /&gt;[root@inxx24 backup]# mkdir /dev/vg02&lt;BR /&gt;[root@inxx24 backup]# vgcreate -s 32M /dev/vg02 /dev/sdb /dev/sdp&lt;BR /&gt;  /dev/vg02: already exists in filesystem&lt;BR /&gt;  New volume group name "vg02" is invalid&lt;BR /&gt;[root@inxx24 backup]# ls /dev/vg02&lt;BR /&gt;[root@inxx24 backup]# rmdir /dev/vg02&lt;BR /&gt;[root@inxx24 backup]# vgcreate -s 32M /dev/vg02 /dev/sdb /dev/sdp&lt;BR /&gt;  Volume group "vg02" successfully created&lt;BR /&gt;[root@inxx24 backup]# ls /dev/vg02&lt;BR /&gt;ls: /dev/vg02: No such file or directory&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Thu, 08 May 2008 00:34:06 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/why-doesn-t-dev-vg01-get-created-can-t-keep-new-volume-groups/m-p/5107756#M49686</guid>
      <dc:creator>Trever Furnish</dc:creator>
      <dc:date>2008-05-08T00:34:06Z</dc:date>
    </item>
    <item>
      <title>Re: Why doesn't /dev/vg01 get created? Can't keep new volume groups!</title>
      <link>https://community.hpe.com/t5/operating-system-linux/why-doesn-t-dev-vg01-get-created-can-t-keep-new-volume-groups/m-p/5107757#M49687</link>
      <description>In Linux LVM2, the device nodes are created when the VG is activated, not when it's created. &lt;BR /&gt;&lt;BR /&gt;If there are no LVs defined in the new VG, there is no need to create any device nodes, so the system may omit the creation of the /dev/vg02 directory too.&lt;BR /&gt;&lt;BR /&gt;After a successful vgcreate, use lvcreate to set up an LV inside the VG. &lt;BR /&gt;&lt;BR /&gt;After a reboot, the same caveat applies: if there are no LVs defined in the VG, you won't get any device nodes. &lt;BR /&gt;&lt;BR /&gt;The "dmsetup ls" command lists only *active* LVs: if a VG is not activated, dmsetup does not know anything about the VG or its LVs. This is by design: activating a VG *means* setting up the appropriate PV(s) -&amp;gt; LV(s) mapping in the device-mapper subsystem.&lt;BR /&gt;&lt;BR /&gt;To query for inactive VGs, use the command "vgs".&lt;BR /&gt;&lt;BR /&gt;If the device nodes get deleted or corrupted for some reason, you can use "vgscan --mknodes" to re-create them without rebooting.&lt;BR /&gt;&lt;BR /&gt;MK&lt;BR /&gt;</description>
      <pubDate>Thu, 08 May 2008 06:40:07 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/why-doesn-t-dev-vg01-get-created-can-t-keep-new-volume-groups/m-p/5107757#M49687</guid>
      <dc:creator>Matti_Kurkela</dc:creator>
      <dc:date>2008-05-08T06:40:07Z</dc:date>
    </item>
    <item>
      <title>Re: Why doesn't /dev/vg01 get created? Can't keep new volume groups!</title>
      <link>https://community.hpe.com/t5/operating-system-linux/why-doesn-t-dev-vg01-get-created-can-t-keep-new-volume-groups/m-p/5107758#M49688</link>
      <description>Also try just "vgscan" after the reboot; that should rescan all disks. Then try "vgchange -a y vg01".</description>
      <pubDate>Thu, 08 May 2008 09:19:41 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/why-doesn-t-dev-vg01-get-created-can-t-keep-new-volume-groups/m-p/5107758#M49688</guid>
      <dc:creator>skt_skt</dc:creator>
      <dc:date>2008-05-08T09:19:41Z</dc:date>
    </item>
    <item>
      <title>Re: Why doesn't /dev/vg01 get created? Can't keep new volume groups!</title>
      <link>https://community.hpe.com/t5/operating-system-linux/why-doesn-t-dev-vg01-get-created-can-t-keep-new-volume-groups/m-p/5107759#M49689</link>
      <description>Shalom,&lt;BR /&gt;&lt;BR /&gt;You have run into a bug.&lt;BR /&gt;&lt;BR /&gt;There are a few ways around it.&lt;BR /&gt;&lt;BR /&gt;You can activate and mount the volume group in the init scripts of the system.&lt;BR /&gt;&lt;BR /&gt;The official fix is to create a new boot ramdisk image after you have created the volume group but before you reboot the system.&lt;BR /&gt;&lt;BR /&gt;Back up the old image in /boot first, then run:&lt;BR /&gt;&lt;BR /&gt;mkinitrd /boot/initrd-$(uname -r).img $(uname -r)&lt;BR /&gt;&lt;BR /&gt;SEP</description>
      <pubDate>Thu, 08 May 2008 09:20:23 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/why-doesn-t-dev-vg01-get-created-can-t-keep-new-volume-groups/m-p/5107759#M49689</guid>
      <dc:creator>Steven E. Protter</dc:creator>
      <dc:date>2008-05-08T09:20:23Z</dc:date>
    </item>
    <item>
      <title>Re: Why doesn't /dev/vg01 get created? Can't keep new volume groups!</title>
      <link>https://community.hpe.com/t5/operating-system-linux/why-doesn-t-dev-vg01-get-created-can-t-keep-new-volume-groups/m-p/5107760#M49690</link>
      <description>Thanks for the comments.&lt;BR /&gt;&lt;BR /&gt;I didn't realize there would be no /dev/vg02 until an lvol was allocated, but that does explain that symptom.  However, my example confused the issue a bit -- in my example I showed vg02, which had no lvols created, but the real problem child was vg01, which was created in the same manner and DID have an lvol.  There's no /dev/vg01 after a reboot.&lt;BR /&gt;&lt;BR /&gt;SEP, thanks very much for the example.  I'm still not sure it's a fix, but I'll probably find out momentarily.&lt;BR /&gt;&lt;BR /&gt;I should have also noted that vgscan *does* show vg01 items, but it lists two lines saying it can't find a PV (and there were two PVs in the vg), and the UUID in both lines is identical (unless I'm going blind).&lt;BR /&gt;&lt;BR /&gt;[root@inxx24 ~]# vgscan&lt;BR /&gt;  Reading all physical volumes.  This may take a while...&lt;BR /&gt;  Found volume group "vg00" using metadata type lvm2&lt;BR /&gt;  Found volume group "vg02" using metadata type lvm2&lt;BR /&gt;  Couldn't find device with uuid 'dnFmXk-kaKp-q4Xm-cw2d-3d7K-Hpg6-dSMcnX'.&lt;BR /&gt;  Couldn't find all physical volumes for volume group vg01.&lt;BR /&gt;  Couldn't find device with uuid 'dnFmXk-kaKp-q4Xm-cw2d-3d7K-Hpg6-dSMcnX'.&lt;BR /&gt;  Couldn't find all physical volumes for volume group vg01.&lt;BR /&gt;  Volume group "vg01" not found&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Also vgdisplay for vg01 doesn't exactly show nothing:&lt;BR /&gt;[root@inxx24 ~]# vgdisplay vg01&lt;BR /&gt;  Couldn't find device with uuid 'dnFmXk-kaKp-q4Xm-cw2d-3d7K-Hpg6-dSMcnX'.&lt;BR /&gt;  Couldn't find all physical volumes for volume group vg01.&lt;BR /&gt;  Couldn't find device with uuid 'dnFmXk-kaKp-q4Xm-cw2d-3d7K-Hpg6-dSMcnX'.&lt;BR /&gt;  Couldn't find all physical volumes for volume group vg01.&lt;BR /&gt;  Volume group "vg01" not found&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;The two drives in question are still perfectly fine AFAIK.  
The only difference I know of that occurred during the creation of the PVs was that for the two disks in vg01 we used fdisk to delete the existing single partition, whereas for the two disks in vg02 I used "dd if=/dev/zero of=/dev/sdp bs=512 count=1".&lt;BR /&gt;&lt;BR /&gt;Also -- sorry there are so many little details! -- I've determined that vg02 DOES work fine after a reboot.  Only vg01 has problems.  The lack of /dev/vg02 was just a red herring.&lt;BR /&gt;&lt;BR /&gt;I have 28 disks to play with here, so I've got plenty of room to test.  I'm going to create a few more test volumes and see what happens following reboots.&lt;BR /&gt;&lt;BR /&gt;We actually only rebooted because we wanted to find out whether lvm2 mirroring with "--corelog" really requires copying a whole lvol after every reboot.  I should probably have noted that the lvol on vg01 was created with "-m 1 --corelog" (lvcreate options), which I guess could be related.  Probably going to go back to md for raid1 instead. :-(&lt;BR /&gt;</description>
      <pubDate>Thu, 08 May 2008 13:37:14 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/why-doesn-t-dev-vg01-get-created-can-t-keep-new-volume-groups/m-p/5107760#M49690</guid>
      <dc:creator>Trever Furnish</dc:creator>
      <dc:date>2008-05-08T13:37:14Z</dc:date>
    </item>
    <item>
      <title>Re: Why doesn't /dev/vg01 get created? Can't keep new volume groups!</title>
      <link>https://community.hpe.com/t5/operating-system-linux/why-doesn-t-dev-vg01-get-created-can-t-keep-new-volume-groups/m-p/5107761#M49691</link>
      <description>"Couldn't find device with uuid 'dnFmXk-kaKp-q4Xm-cw2d-3d7K-Hpg6-dSMcnX'" above erros makes this so intersting.&lt;BR /&gt;&lt;BR /&gt;do a backup as mentioned below.&lt;BR /&gt;&lt;BR /&gt;$ mv /etc/lvm/archive/* /root/lvm/archive/&lt;BR /&gt;$ mv /etc/lvm/backup/* /root/lvm/backup&lt;BR /&gt;$ rm /etc/lvm/cache/.cache&lt;BR /&gt;&lt;BR /&gt;4) $ pvscan&lt;BR /&gt;5) $ vgscan&lt;BR /&gt;6) $ lvscan &lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;If this is the issue we have seen previously, the problem is that clvmd detects change, reloads configuration but doesn't refresh cache properly. So workaround for now is - after every lvm.conf change run pvscan (vgscan).  &lt;BR /&gt;&lt;BR /&gt;As usual dont try anythinng in PROD w/o testing in DEV/TEST env. Let us know if your get a chance to test this</description>
      <pubDate>Thu, 08 May 2008 18:35:46 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/why-doesn-t-dev-vg01-get-created-can-t-keep-new-volume-groups/m-p/5107761#M49691</guid>
      <dc:creator>skt_skt</dc:creator>
      <dc:date>2008-05-08T18:35:46Z</dc:date>
    </item>
    <item>
      <title>Re: Why doesn't /dev/vg01 get created? Can't keep new volume groups!</title>
      <link>https://community.hpe.com/t5/operating-system-linux/why-doesn-t-dev-vg01-get-created-can-t-keep-new-volume-groups/m-p/5107762#M49692</link>
      <description>Um... thanks? :-)  Did I just turn my box-that-loses-one-filesystem-on-boot into a box-that-won't-boot?  Take a look below, let me know whether you think it's safe to reboot without restoring /etc/lvm -- not that I'll hold you to it if you're wrong...&lt;BR /&gt;&lt;BR /&gt;Note that the commands still complained about the missing pv, but /etc/lvm no longer contains any trace of vg01 (although it did before I removed files).&lt;BR /&gt;&lt;BR /&gt;[root@inxx24 ~]# tar cvpf /root/lvm.tar /etc/lvm&lt;BR /&gt;tar: Removing leading `/' from member names&lt;BR /&gt;/etc/lvm/&lt;BR /&gt;/etc/lvm/lvm.conf&lt;BR /&gt;/etc/lvm/archive/&lt;BR /&gt;/etc/lvm/archive/vg02_00001.vg&lt;BR /&gt;/etc/lvm/archive/vg02_00002.vg&lt;BR /&gt;/etc/lvm/archive/vg02_00000.vg&lt;BR /&gt;/etc/lvm/archive/vg02_00003.vg&lt;BR /&gt;/etc/lvm/archive/vg01_00001.vg&lt;BR /&gt;/etc/lvm/archive/vg01_00000.vg&lt;BR /&gt;/etc/lvm/archive/vg00_00000.vg&lt;BR /&gt;/etc/lvm/cache/&lt;BR /&gt;/etc/lvm/cache/.cache&lt;BR /&gt;/etc/lvm/backup/&lt;BR /&gt;/etc/lvm/backup/vg02&lt;BR /&gt;/etc/lvm/backup/vg00&lt;BR /&gt;/etc/lvm/backup/vg01&lt;BR /&gt;[root@inxx24 ~]# rm /etc/lvm/archive/*&lt;BR /&gt;rm: remove regular file `/etc/lvm/archive/vg00_00000.vg'? y&lt;BR /&gt;rm: remove regular file `/etc/lvm/archive/vg01_00000.vg'? y&lt;BR /&gt;rm: remove regular file `/etc/lvm/archive/vg01_00001.vg'? y&lt;BR /&gt;rm: remove regular file `/etc/lvm/archive/vg02_00000.vg'? y&lt;BR /&gt;rm: remove regular file `/etc/lvm/archive/vg02_00001.vg'? y&lt;BR /&gt;rm: remove regular file `/etc/lvm/archive/vg02_00002.vg'? y&lt;BR /&gt;rm: remove regular file `/etc/lvm/archive/vg02_00003.vg'? y&lt;BR /&gt;[root@inxx24 ~]# rm /etc/lvm/backup/*&lt;BR /&gt;rm: remove regular file `/etc/lvm/backup/vg00'? y&lt;BR /&gt;rm: remove regular file `/etc/lvm/backup/vg01'? y&lt;BR /&gt;rm: remove regular file `/etc/lvm/backup/vg02'? 
y&lt;BR /&gt;[root@inxx24 ~]# rm /etc/lvm/cache/.cache&lt;BR /&gt;rm: remove regular file `/etc/lvm/cache/.cache'? y&lt;BR /&gt;[root@inxx24 ~]# pvscan&lt;BR /&gt;  Couldn't find device with uuid 'dnFmXk-kaKp-q4Xm-cw2d-3d7K-Hpg6-dSMcnX'.&lt;BR /&gt;  Couldn't find device with uuid 'dnFmXk-kaKp-q4Xm-cw2d-3d7K-Hpg6-dSMcnX'.&lt;BR /&gt;  PV /dev/sdac4       VG vg00   lvm2 [66.84 GB / 43.84 GB free]&lt;BR /&gt;  PV /dev/sdb         VG vg02   lvm2 [136.72 GB / 126.72 GB free]&lt;BR /&gt;  PV /dev/sdp         VG vg02   lvm2 [136.72 GB / 126.72 GB free]&lt;BR /&gt;  PV /dev/sda         VG vg01   lvm2 [136.73 GB / 5.73 GB free]&lt;BR /&gt;  PV unknown device   VG vg01   lvm2 [136.73 GB / 5.73 GB free]&lt;BR /&gt;  Total: 5 [613.74 GB] / in use: 5 [613.74 GB] / in no VG: 0 [0   ]&lt;BR /&gt;[root@inxx24 ~]# vgscan&lt;BR /&gt;  Reading all physical volumes.  This may take a while...&lt;BR /&gt;  Found volume group "vg00" using metadata type lvm2&lt;BR /&gt;  Found volume group "vg02" using metadata type lvm2&lt;BR /&gt;  Couldn't find device with uuid 'dnFmXk-kaKp-q4Xm-cw2d-3d7K-Hpg6-dSMcnX'.&lt;BR /&gt;  Couldn't find all physical volumes for volume group vg01.&lt;BR /&gt;  Couldn't find device with uuid 'dnFmXk-kaKp-q4Xm-cw2d-3d7K-Hpg6-dSMcnX'.&lt;BR /&gt;  Couldn't find all physical volumes for volume group vg01.&lt;BR /&gt;  Volume group "vg01" not found&lt;BR /&gt;[root@inxx24 ~]# lvscan&lt;BR /&gt;  ACTIVE            '/dev/vg00/lvol0' [4.00 GB] inherit&lt;BR /&gt;  ACTIVE            '/dev/vg00/lvol5' [1.00 GB] inherit&lt;BR /&gt;  ACTIVE            '/dev/vg00/lvol4' [2.00 GB] inherit&lt;BR /&gt;  ACTIVE            '/dev/vg00/lvol1' [8.00 GB] inherit&lt;BR /&gt;  ACTIVE            '/dev/vg00/lvol2' [4.00 GB] inherit&lt;BR /&gt;  ACTIVE            '/dev/vg00/lvol6' [4.00 GB] inherit&lt;BR /&gt;  ACTIVE            '/dev/vg02/uvol2' [10.00 GB] inherit&lt;BR /&gt;  Couldn't find device with uuid 'dnFmXk-kaKp-q4Xm-cw2d-3d7K-Hpg6-dSMcnX'.&lt;BR /&gt;  Couldn't find all 
physical volumes for volume group vg01.&lt;BR /&gt;  Couldn't find device with uuid 'dnFmXk-kaKp-q4Xm-cw2d-3d7K-Hpg6-dSMcnX'.&lt;BR /&gt;  Couldn't find all physical volumes for volume group vg01.&lt;BR /&gt;  Volume group "vg01" not found&lt;BR /&gt;[root@inxx24 ~]# ls -lR /etc/lvm&lt;BR /&gt;/etc/lvm:&lt;BR /&gt;total 44&lt;BR /&gt;drwx------  2 root root  4096 May  8 20:11 archive&lt;BR /&gt;drwx------  2 root root  4096 May  8 20:11 backup&lt;BR /&gt;drwx------  2 root root  4096 May  8 20:11 cache&lt;BR /&gt;-rw-r--r--  1 root root 15246 Aug 24  2007 lvm.conf&lt;BR /&gt;&lt;BR /&gt;/etc/lvm/archive:&lt;BR /&gt;total 8&lt;BR /&gt;-rw-------  1 root root 2549 May  8 20:11 vg00_00000.vg&lt;BR /&gt;-rw-------  1 root root 1933 May  8 20:11 vg02_00000.vg&lt;BR /&gt;&lt;BR /&gt;/etc/lvm/backup:&lt;BR /&gt;total 8&lt;BR /&gt;-rw-------  1 root root 2548 May  8 20:11 vg00&lt;BR /&gt;-rw-------  1 root root 1932 May  8 20:11 vg02&lt;BR /&gt;&lt;BR /&gt;/etc/lvm/cache:&lt;BR /&gt;total 0&lt;BR /&gt;</description>
      <pubDate>Thu, 08 May 2008 19:15:57 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/why-doesn-t-dev-vg01-get-created-can-t-keep-new-volume-groups/m-p/5107762#M49692</guid>
      <dc:creator>Trever Furnish</dc:creator>
      <dc:date>2008-05-08T19:15:57Z</dc:date>
    </item>
    <item>
      <title>Re: Why doesn't /dev/vg01 get created? Can't keep new volume groups!</title>
      <link>https://community.hpe.com/t5/operating-system-linux/why-doesn-t-dev-vg01-get-created-can-t-keep-new-volume-groups/m-p/5107763#M49693</link>
      <description>vg00 looks fine, maybe also vg02. If your system partitions are on vg00 only, your system should be bootable.&lt;BR /&gt;&lt;BR /&gt;What kind of storage system are you using? SCSI or FibreChannel? If it is accessible through multiple HBAs, can the storage system use all its interfaces simultaneously or does it need something (special signals or just time) to switch from the "active" interface to the "spare" interface?&lt;BR /&gt;&lt;BR /&gt;Can you identify the PVs that are supposed to contain the vg01 in any other way? For example, do you know one of them should be e.g. /dev/sdc or whatever? If you know the devices, what does "pvdisplay -v &lt;DEVICENAME&gt;" report on them? &lt;BR /&gt;&lt;BR /&gt;If it does not have valid Linux LVM PV information, then what does it have? Use "fdisk -l &lt;DEVICENAME&gt;" to see if it has a partition table.&lt;BR /&gt;&lt;BR /&gt;Maybe someone or something has managed to overwrite the LVM identifier on the PVs. These identifiers are created on the PVs when "pvcreate" command is used, and these identifiers are written into LVM configuration data when vgcreate or vgextend is used to create/extend a VG onto new PVs.&lt;BR /&gt;&lt;BR /&gt;Is this storage system dedicated to this server, or shared between multiple servers? &lt;BR /&gt;&lt;BR /&gt;This looks suspiciously like a zoning error in a shared FibreChannel storage system, where two unrelated systems are unintentionally allowed to use the same disk area(s). As each system will assume it's the only one writing to the disk at any given time, this will inevitably lead to disk corruption. &lt;BR /&gt;&lt;BR /&gt;(Server A reads something from the disk and caches it. Then it does some write operations which are cached. Then server B writes to the same disk area; server A does not know its cache is now stale. 
When server A flushes its write cache, the stale data overwrites whatever server B wrote.)&lt;BR /&gt;&lt;BR /&gt;MK</description>
      <pubDate>Fri, 09 May 2008 09:35:41 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/why-doesn-t-dev-vg01-get-created-can-t-keep-new-volume-groups/m-p/5107763#M49693</guid>
      <dc:creator>Matti_Kurkela</dc:creator>
      <dc:date>2008-05-09T09:35:41Z</dc:date>
    </item>
    <item>
      <title>Re: Why doesn't /dev/vg01 get created? Can't keep new volume groups!</title>
      <link>https://community.hpe.com/t5/operating-system-linux/why-doesn-t-dev-vg01-get-created-can-t-keep-new-volume-groups/m-p/5107764#M49694</link>
      <description>The devices in vg01 are /dev/sda and /dev/sdo.&lt;BR /&gt;&lt;BR /&gt;Oddly, pvdisplay produces slightly different results for each of them:&lt;BR /&gt;&lt;BR /&gt;[root@inxx24 ~]# pvdisplay -v /dev/sda&lt;BR /&gt;    Using physical volume(s) on command line&lt;BR /&gt;    Wiping cache of LVM-capable devices&lt;BR /&gt;  Couldn't find device with uuid 'dnFmXk-kaKp-q4Xm-cw2d-3d7K-Hpg6-dSMcnX'.&lt;BR /&gt;  Couldn't find all physical volumes for volume group vg01.&lt;BR /&gt;  Couldn't find device with uuid 'dnFmXk-kaKp-q4Xm-cw2d-3d7K-Hpg6-dSMcnX'.&lt;BR /&gt;  Couldn't find all physical volumes for volume group vg01.&lt;BR /&gt;  get_pv_from_vg_by_id: vg_read failed to read VG vg01&lt;BR /&gt;  Can't read : skipping&lt;BR /&gt;&lt;BR /&gt;[root@inxx24 ~]# pvdisplay -v /dev/sdo&lt;BR /&gt;    Using physical volume(s) on command line&lt;BR /&gt;  Failed to read physical volume "/dev/sdo"&lt;BR /&gt;&lt;BR /&gt;Fdisk reports no partition table:&lt;BR /&gt;[root@inxx24 ~]# fdisk -l /dev/sda&lt;BR /&gt;&lt;BR /&gt;Disk /dev/sda: 146.8 GB, 146815737856 bytes&lt;BR /&gt;255 heads, 63 sectors/track, 17849 cylinders&lt;BR /&gt;Units = cylinders of 16065 * 512 = 8225280 bytes&lt;BR /&gt;&lt;BR /&gt;Disk /dev/sda doesn't contain a valid partition table&lt;BR /&gt;&lt;BR /&gt;I'm having no problems with the other disks, so I think I'm going to see what happens if I zero out the start of these disks and recreate vg01.&lt;BR /&gt;</description>
      <pubDate>Fri, 09 May 2008 12:10:48 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/why-doesn-t-dev-vg01-get-created-can-t-keep-new-volume-groups/m-p/5107764#M49694</guid>
      <dc:creator>Trever Furnish</dc:creator>
      <dc:date>2008-05-09T12:10:48Z</dc:date>
    </item>
    <item>
      <title>Re: Why doesn't /dev/vg01 get created? Can't keep new volume groups!</title>
      <link>https://community.hpe.com/t5/operating-system-linux/why-doesn-t-dev-vg01-get-created-can-t-keep-new-volume-groups/m-p/5107765#M49695</link>
      <description>Forgot to include details on the drive system.&lt;BR /&gt;&lt;BR /&gt;There are two dual-port HBAs attached via SCSI to two full MSA30s, each with 14 disks.  vg01 was composed of the two disks from slot 1 on each MSA.&lt;BR /&gt;&lt;BR /&gt;There are no other hosts connected.&lt;BR /&gt;</description>
      <pubDate>Fri, 09 May 2008 12:16:42 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/why-doesn-t-dev-vg01-get-created-can-t-keep-new-volume-groups/m-p/5107765#M49695</guid>
      <dc:creator>Trever Furnish</dc:creator>
      <dc:date>2008-05-09T12:16:42Z</dc:date>
    </item>
    <item>
      <title>Re: Why doesn't /dev/vg01 get created? Can't keep new volume groups!</title>
      <link>https://community.hpe.com/t5/operating-system-linux/why-doesn-t-dev-vg01-get-created-can-t-keep-new-volume-groups/m-p/5107766#M49696</link>
      <description>Zeroing out the beginning of the drives seems to have worked:&lt;BR /&gt;&lt;BR /&gt;dd if=/dev/zero of=/dev/sda bs=1k count=1&lt;BR /&gt;dd if=/dev/zero of=/dev/sdo bs=1k count=1&lt;BR /&gt;&lt;BR /&gt;It's a bit disquieting that the vg, lvols, and filesystem were all created successfully before doing this, then disappeared after a reboot, but I guess c'est la vie.&lt;BR /&gt;&lt;BR /&gt;Just for the reference of future readers, these disks were originally part of similar volume group pairings under HP-UX with HP-UX's LVM.&lt;BR /&gt;</description>
      <pubDate>Fri, 09 May 2008 12:53:04 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/why-doesn-t-dev-vg01-get-created-can-t-keep-new-volume-groups/m-p/5107766#M49696</guid>
      <dc:creator>Trever Furnish</dc:creator>
      <dc:date>2008-05-09T12:53:04Z</dc:date>
    </item>
    <item>
      <title>Re: Why doesn't /dev/vg01 get created? Can't keep new volume groups!</title>
      <link>https://community.hpe.com/t5/operating-system-linux/why-doesn-t-dev-vg01-get-created-can-t-keep-new-volume-groups/m-p/5107767#M49697</link>
      <description>Thanks to everyone who helped.&lt;BR /&gt;</description>
      <pubDate>Fri, 09 May 2008 12:54:26 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/why-doesn-t-dev-vg01-get-created-can-t-keep-new-volume-groups/m-p/5107767#M49697</guid>
      <dc:creator>Trever Furnish</dc:creator>
      <dc:date>2008-05-09T12:54:26Z</dc:date>
    </item>
  </channel>
</rss>

