<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: panic: all VFS_MOUNTROOTs failed: NEED DRIVERS ????? in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/panic-all-vfs-mountroots-failed-need-drivers/m-p/4353195#M345037</link>
    <description># vgcfgrestore -f /wcw/etc/lvmconf/vg00.conf -l&lt;BR /&gt;Volume Group Configuration information in "/wcw/etc/lvmconf/vg00.conf"&lt;BR /&gt;VG Name /dev/vg00&lt;BR /&gt; ---- Physical volumes : 2 ----&lt;BR /&gt;  /dev/rdsk/c3t2d0s2 (Bootable)&lt;BR /&gt;  /dev/rdsk/c2t1d0s2 (Bootable)&lt;BR /&gt;&lt;BR /&gt;I assume that earlier you had these two disks as part of vg00 and that they were mirrored.&lt;BR /&gt;&lt;BR /&gt;WARNING: Logical volume for Dump expected but not found.&lt;BR /&gt;You are perhaps missing:&lt;BR /&gt;#lvlnboot -b /dev/vg01/lvol1&lt;BR /&gt;#lvlnboot -s /dev/vg01/lvol2&lt;BR /&gt;#lvlnboot -r /dev/vg01/lvol3&lt;BR /&gt;#lvlnboot -d /dev/vg01/lvol2&lt;BR /&gt;Try booting normally using the disk /dev/rdsk/c3t2d0s2.&lt;BR /&gt;&lt;BR /&gt;Since you said you are able to access the disk /dev/rdsk/c2t1d0s2, do the following.&lt;BR /&gt;&lt;BR /&gt;Once the system comes up:&lt;BR /&gt;#cd /etc/lvmconf&lt;BR /&gt;Keep a copy of all the LVM configurations there by appending your name to each conf file, e.g.:&lt;BR /&gt;cp -p vg00.conf vg00.conf.mark&lt;BR /&gt;cp -p vg00.conf.old vg00.conf.old.mark&lt;BR /&gt;&lt;BR /&gt;#vgexport /dev/vg01&lt;BR /&gt;#mkdir /dev/vg01&lt;BR /&gt;#mknod /dev/vg01/group c 64 0x010000&lt;BR /&gt;#vgimport /dev/vg01 /dev/dsk/c2t1d0s2&lt;BR /&gt;Reduce the other disk if you again find it as a ghost disk in vg01.&lt;BR /&gt;#lvlnboot -b /dev/vg01/lvol1&lt;BR /&gt;#lvlnboot -s /dev/vg01/lvol2&lt;BR /&gt;#lvlnboot -r /dev/vg01/lvol3&lt;BR /&gt;#lvlnboot -v&lt;BR /&gt;#lvlnboot -R&lt;BR /&gt;#lvlnboot -v&lt;BR /&gt;#echo "boot vmunix -lq" &amp;gt;/tmp/AUTO&lt;BR /&gt;#efi_cp -d /dev/rdsk/c2t1d0s2 /tmp/AUTO/EFI&lt;BR /&gt;#setboot -p &amp;lt;HW_PATH_OF_&amp;gt;&lt;BR /&gt;#setboot -h &amp;lt;HW_PATH_OF_&amp;gt; -a &amp;lt;HW_PATH_OF_&amp;gt;&lt;BR /&gt;&lt;BR /&gt;and set the disk /dev/rdsk/c2t1d0s2 as the primary boot path.&lt;BR /&gt;&lt;BR /&gt;Start in LVM maintenance mode:&lt;BR /&gt;#shutdown -ry 0&lt;BR /&gt;&lt;BR /&gt;Once in LVM maintenance mode (the system should boot into LVM maintenance mode, since lvlnboot has corrected the boot and swap areas on the disk):&lt;BR /&gt;&lt;BR /&gt;#vgexport /dev/vg00&lt;BR /&gt;#vgexport /dev/vg01&lt;BR /&gt;#mkdir /dev/vg00&lt;BR /&gt;#mknod /dev/vg00/group c 64 0x000000&lt;BR /&gt;#vgimport /dev/vg00 /dev/dsk/c2t1d0s2&lt;BR /&gt;&lt;BR /&gt;#lvlnboot -v&lt;BR /&gt;Do this again for the boot, swap, root and dump areas if required.&lt;BR /&gt;#lvlnboot -R&lt;BR /&gt;&lt;BR /&gt;Try:&lt;BR /&gt;&lt;BR /&gt;#vgchange -a y vg00&lt;BR /&gt;#mount -a&lt;BR /&gt;&lt;BR /&gt;or reboot.&lt;BR /&gt;&lt;BR /&gt;regards&lt;BR /&gt;sujit</description>
    <pubDate>Fri, 06 Feb 2009 02:45:04 GMT</pubDate>
    <dc:creator>sujit kumar singh</dc:creator>
    <dc:date>2009-02-06T02:45:04Z</dc:date>
    <item>
      <title>panic: all VFS_MOUNTROOTs failed: NEED DRIVERS ?????</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/panic-all-vfs-mountroots-failed-need-drivers/m-p/4353194#M345036</link>
      <description>I made an incredibly stupid mistake on an rx2620 HP-UX 11.23 with mirrored disks. I executed&lt;BR /&gt;&lt;BR /&gt;# lvremove /dev/vg00/lvol14&lt;BR /&gt;&lt;BR /&gt;in single-user mode (not LVM maintenance mode). lvol14 was being used as secondary swap. The command errored out with an error I stupidly failed to record. When I shut down, it would not boot to LVM maintenance mode.&lt;BR /&gt;&lt;BR /&gt;I don't have the luxury of an Ignite server. I reloaded the O/S from DVD to one of the disks (c3t2d0). I was able to access file systems of the formerly mirrored disk (c2t1d0), no problem. But it still won't boot to LVM maintenance mode off of c2t1d0 (I realize I still need to fix vg00.conf). [See attached for more detailed information.]&lt;BR /&gt;&lt;BR /&gt;I would very much like to get the disk bootable again. The other postings with this panic describe conditions that don't seem to apply here, unless I missed something. I would greatly appreciate any suggestions provided.</description>
      <pubDate>Thu, 05 Feb 2009 22:06:25 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/panic-all-vfs-mountroots-failed-need-drivers/m-p/4353194#M345036</guid>
      <dc:creator>WWarren</dc:creator>
      <dc:date>2009-02-05T22:06:25Z</dc:date>
    </item>
    <item>
      <title>Re: panic: all VFS_MOUNTROOTs failed: NEED DRIVERS ?????</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/panic-all-vfs-mountroots-failed-need-drivers/m-p/4353195#M345037</link>
      <description># vgcfgrestore -f /wcw/etc/lvmconf/vg00.conf -l&lt;BR /&gt;Volume Group Configuration information in "/wcw/etc/lvmconf/vg00.conf"&lt;BR /&gt;VG Name /dev/vg00&lt;BR /&gt; ---- Physical volumes : 2 ----&lt;BR /&gt;  /dev/rdsk/c3t2d0s2 (Bootable)&lt;BR /&gt;  /dev/rdsk/c2t1d0s2 (Bootable)&lt;BR /&gt;&lt;BR /&gt;I assume that earlier you had these two disks as part of vg00 and that they were mirrored.&lt;BR /&gt;&lt;BR /&gt;WARNING: Logical volume for Dump expected but not found.&lt;BR /&gt;You are perhaps missing:&lt;BR /&gt;#lvlnboot -b /dev/vg01/lvol1&lt;BR /&gt;#lvlnboot -s /dev/vg01/lvol2&lt;BR /&gt;#lvlnboot -r /dev/vg01/lvol3&lt;BR /&gt;#lvlnboot -d /dev/vg01/lvol2&lt;BR /&gt;Try booting normally using the disk /dev/rdsk/c3t2d0s2.&lt;BR /&gt;&lt;BR /&gt;Since you said you are able to access the disk /dev/rdsk/c2t1d0s2, do the following.&lt;BR /&gt;&lt;BR /&gt;Once the system comes up:&lt;BR /&gt;#cd /etc/lvmconf&lt;BR /&gt;Keep a copy of all the LVM configurations there by appending your name to each conf file, e.g.:&lt;BR /&gt;cp -p vg00.conf vg00.conf.mark&lt;BR /&gt;cp -p vg00.conf.old vg00.conf.old.mark&lt;BR /&gt;&lt;BR /&gt;#vgexport /dev/vg01&lt;BR /&gt;#mkdir /dev/vg01&lt;BR /&gt;#mknod /dev/vg01/group c 64 0x010000&lt;BR /&gt;#vgimport /dev/vg01 /dev/dsk/c2t1d0s2&lt;BR /&gt;Reduce the other disk if you again find it as a ghost disk in vg01.&lt;BR /&gt;#lvlnboot -b /dev/vg01/lvol1&lt;BR /&gt;#lvlnboot -s /dev/vg01/lvol2&lt;BR /&gt;#lvlnboot -r /dev/vg01/lvol3&lt;BR /&gt;#lvlnboot -v&lt;BR /&gt;#lvlnboot -R&lt;BR /&gt;#lvlnboot -v&lt;BR /&gt;#echo "boot vmunix -lq" &amp;gt;/tmp/AUTO&lt;BR /&gt;#efi_cp -d /dev/rdsk/c2t1d0s2 /tmp/AUTO/EFI&lt;BR /&gt;#setboot -p &amp;lt;HW_PATH_OF_&amp;gt;&lt;BR /&gt;#setboot -h &amp;lt;HW_PATH_OF_&amp;gt; -a &amp;lt;HW_PATH_OF_&amp;gt;&lt;BR /&gt;&lt;BR /&gt;and set the disk /dev/rdsk/c2t1d0s2 as the primary boot path.&lt;BR /&gt;&lt;BR /&gt;Start in LVM maintenance mode:&lt;BR /&gt;#shutdown -ry 0&lt;BR /&gt;&lt;BR /&gt;Once in LVM maintenance mode (the system should boot into LVM maintenance mode, since lvlnboot has corrected the boot and swap areas on the disk):&lt;BR /&gt;&lt;BR /&gt;#vgexport /dev/vg00&lt;BR /&gt;#vgexport /dev/vg01&lt;BR /&gt;#mkdir /dev/vg00&lt;BR /&gt;#mknod /dev/vg00/group c 64 0x000000&lt;BR /&gt;#vgimport /dev/vg00 /dev/dsk/c2t1d0s2&lt;BR /&gt;&lt;BR /&gt;#lvlnboot -v&lt;BR /&gt;Do this again for the boot, swap, root and dump areas if required.&lt;BR /&gt;#lvlnboot -R&lt;BR /&gt;&lt;BR /&gt;Try:&lt;BR /&gt;&lt;BR /&gt;#vgchange -a y vg00&lt;BR /&gt;#mount -a&lt;BR /&gt;&lt;BR /&gt;or reboot.&lt;BR /&gt;&lt;BR /&gt;regards&lt;BR /&gt;sujit</description>
      <pubDate>Fri, 06 Feb 2009 02:45:04 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/panic-all-vfs-mountroots-failed-need-drivers/m-p/4353195#M345037</guid>
      <dc:creator>sujit kumar singh</dc:creator>
      <dc:date>2009-02-06T02:45:04Z</dc:date>
    </item>
    <item>
      <title>Re: panic: all VFS_MOUNTROOTs failed: NEED DRIVERS ?????</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/panic-all-vfs-mountroots-failed-need-drivers/m-p/4353196#M345038</link>
      <description>No luck. The suggested actions seemed to help a little. The shutdown did not have the warning message and the boot seemed to be a bit closer to normal [see attached]. But it still gave the same panic.</description>
      <pubDate>Fri, 06 Feb 2009 23:34:04 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/panic-all-vfs-mountroots-failed-need-drivers/m-p/4353196#M345038</guid>
      <dc:creator>WWarren</dc:creator>
      <dc:date>2009-02-06T23:34:04Z</dc:date>
    </item>
  </channel>
</rss>