System Administration

installing Oracle Linux results in "VolumeGroup00 not found"



OK -- I've been fighting with my DL380 G8 server for two days now, trying to get Oracle Linux to install. The server has an HP SmartArray P420i card and 8 disks: 2 configured as RAID 1 for boot, the other 6 not configured at this time. The installer sees the physical volume (SAS Array A) as /dev/cciss/c0d0 and creates VolGroup00, then LogVol00 as / (ext3) and LogVol01 as swap. The installation completes successfully, but when the server tries to boot from the HDD, I get these errors - like it can't even see the volume group it just created.


I see in my pvdisplay that the PV is actually being configured as "/dev/cciss!c0d0p2" and I think the "!" is throwing the bootloader off.


sh-3.2# pvscan
  PV /dev/cciss!c0d0p2   VG VolGroup00   lvm2 [279.25 GB / 0 free]
  Total: 1 [279.25 GB] / in use: 1 [279.25 GB] / in no VG: 0 [0]

sh-3.2# pvdisplay
  --- Physical volume ---
  PV Name               /dev/cciss!c0d0p2
  VG Name               VolGroup00
  PV Size               279.26 GB / not usable 9.71 MB
  Allocatable           yes (but full)
  PE Size (KByte)       32768
  Total PE              8936
  Free PE               0
  Allocated PE          8936


I can boot into rescue mode and the rescue sees my installation, and can mount it under /mnt/sysimage.  Is there something with this P420i card that causes this, or is there a certain process or set of drivers that needs to be used?  The Intelligent Provisioning refuses to cooperate - it won't recognize my ISO image via iLO.



P.S. This thread has been moved from ProLiant Servers (ML,DL,SL) to Linux > sysadmin - HP Forums Moderator

Honored Contributor

Re: installing Oracle Linux results in "VolumeGroup00 not found"

The thread subject says the error message is "VolumeGroup00 not found", while the rest of the post talks about "VolGroup00". A typing mistake?


The GRUB bootloader does not understand LVM at all. If you placed your /boot within LVM, you will certainly have problems. All the rest can be on LVM, but /boot must be on a regular partition. In my setups, /boot is typically /dev/cciss/c0d0p1. But after carefully reading between the lines of your post, I think your problem isn't here.
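For reference, a typical layout on the first SmartArray logical drive would look about like this (a sketch; your partition sizes will differ):

```
/dev/cciss/c0d0p1   ext3 /boot    (plain partition -- GRUB can read this directly)
/dev/cciss/c0d0p2   LVM PV        (VolGroup00: LogVol00 = /, LogVol01 = swap)
```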


The bootloader's task is to load two files from /boot to memory: the kernel and the initrd/initramfs. Once that is done, the bootloader will be out of the picture and the kernel takes over.


The initrd/initramfs is also rather single-minded: its purpose is to get the real root filesystem mounted (and possibly to activate the swap area); anything else can be done later. Its only clue to the location of the root filesystem is the root= boot parameter, given to the kernel by the bootloader... and here is a trick: LVM logical volume names can be specified in two ways. The full name of your LogVol00 can be written as /dev/mapper/VolGroup00-LogVol00 or /dev/VolGroup00/LogVol00. When the system is running normally, both names can be used for pretty much any purpose... but at boot time, only the /dev/mapper/ style names are guaranteed to work.
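On a running system the two names refer to the same device; roughly like this (a sketch, assuming your volume group and LV names):

```
/dev/mapper/VolGroup00-LogVol00   device node created by device-mapper
/dev/VolGroup00/LogVol00          symlink to the node above, created by the LVM tools
```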



So, press a key when the GRUB boot splash image is displayed to access the GRUB boot menu. Press "e" to edit the boot entry, select the "kernel" line and press "e" again to edit the line. The line may be longer than the width of the text-mode screen, so use the cursor keys to move along the line to check it. Make sure the root= parameter is typed correctly, and fix it if it contains errors. Assuming that Oracle Linux still closely matches RHEL, if you are using version 6, there may also be parameters like "rd_LVM_LV=VolGroup00/LogVol00": these should identify the root and swap LVs.


Remember that any edits you make in the GRUB boot configuration editor are not persistent: they take effect for the current boot only. If you find your system boots fine after fixing a GRUB configuration error, you should edit /boot/grub/grub.conf to make the fix persistent.
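For example, a working el5-era grub.conf entry might look about like this (a sketch, borrowing the uek kernel version mentioned later in this thread; adjust to match your installed kernel):

```
default=0
timeout=5
title Oracle Linux Server (2.6.32-200.13.1.el5uek)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-200.13.1.el5uek ro root=/dev/mapper/VolGroup00-LogVol00
        initrd /initrd-2.6.32-200.13.1.el5uek.img
```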


Another possible trouble spot is the loading of the SAS array driver. The driver should have automatically been packaged into the initrd/initramfs file to be loaded as one of the earliest steps in the boot sequence. If you think the array driver is not loading, you should remove any "quiet" and "splash" or "rhgb" options from the kernel boot parameters, and pay careful attention to the boot messages before the "VG not found" error message. The loading of the SmartArray driver should produce several lines of text, as the kernel detects the controller version, the logical disks available on it, and the partitions on those disks. If the message scrolls off the screen, you can press Shift-PgUp to access a limited scroll-back buffer on the Linux text console.
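If you capture the console output (for example through the iLO remote console) to a file, a quick check along these lines will tell you whether the driver said anything at all. This is a hypothetical helper, not an HP tool; the log file name is whatever you saved the console output as:

```shell
#!/bin/sh
# check_cciss: scan a saved boot log for SmartArray (cciss) driver messages.
# Hypothetical helper -- the grep pattern simply matches the driver's name,
# which appears in its banner and detection messages.
check_cciss() {
    if grep -Ei 'cciss' "$1" >/dev/null; then
        echo "cciss messages found: driver loaded"
    else
        echo "no cciss messages: driver did not load"
    fi
}
```

Usage would be something like: check_cciss /tmp/bootlog.txt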


Re: installing Oracle Linux results in "VolumeGroup00 not found"

Sorry for the inconsistency, the path is actually /dev/VolGroup00/LogVol00.  LogVol01 is swap.  My bootloader is installed on /dev/cciss/c0d0 ... although I've tried it on the MBR and on the first sector of the boot partition (/dev/cciss/c0d0p1) with the same results each time.


I did as you suggested, and it appears that the system isn't even seeing my disk controller.  It sees the keyboard, mouse, QLogic fiber cards and the tape library that's attached to the server via the QLogic fiber cards, but there is absolutely nothing in the process that is picking up the SmartArray controller.  I've attached a copy of the output (it's messy but copied from my SSH session to the iLO interface).




Honored Contributor

Re: installing Oracle Linux results in "VolumeGroup00 not found"

OK... I think I've actually seen something quite similar to this, a few years ago...


Your bootloader is fine: it has successfully loaded both the kernel and the initrd file, and its job is now done.


Even if I manually run "modprobe cciss" in a VM that has no actual SmartArray controller, the module will at least output its name and version number to the kernel message buffer (dmesg). Since your boot output shows nothing of the sort, it appears that the SmartArray controller driver is not being loaded at all.


There are two main reasons why the module might not be loaded:

  • it might not be included in initrd at all, or
  • the initrd creation script has failed to create the proper commands to load it

Boot into rescue mode and use "chroot /mnt/sysimage /bin/bash" to make the filesystem appear as it is when running normally. Then check /etc/modprobe.conf. It should have an "alias scsi_hostadapter[number] <modulename>" line for each SCSI/FC/whatever storage controller type you have. And the controller that handles your system disk should be on the first, unnumbered "alias scsi_hostadapter" line.


Besides setting the aliases for the kernel module tools, these lines have a secondary function: all the listed scsi_hostadapter modules, and their options, will be included in initrd and loaded by the early-boot init script within initrd.


If you have any "options" lines for any of the scsi_hostadapter modules, there used to be a nasty pitfall here: if you had two "options" lines for the same module, the mkinitrd command would generate a syntax error in the early-boot init script, and the boot would fail. So if your QLogic qla2xxx driver needs some options, write all of them on a single "options qla2xxx ..." line, just in case.


So, your scsi_hostadapter module aliases and options should look about like this:

alias scsi_hostadapter cciss
alias scsi_hostadapter1 qla2xxx
alias scsi_hostadapter2 qla2xxx

options qla2xxx <whatever>


If either the order of the lines or the numbers on the second and subsequent "alias scsi_hostadapterN" lines is wrong, fix it. Then you'll have to re-create your initrd:

mv /boot/initrd-2.6.32-200.13.1.el5uek.img /boot/initrd-2.6.32-200.13.1.el5uek.broken
mkinitrd /boot/initrd-2.6.32-200.13.1.el5uek.img 2.6.32-200.13.1.el5uek


You can add option "-v" to the mkinitrd command to make it more verbose, if you want.


If the initrd creation is successful, type "exit" twice to exit the chroot and the rescue environment, and the system should automatically reboot. And if the fix was successful, the system should now boot on its own, without the rescue media.



If you are curious, you can extract the contents of an initrd file quite easily:


mkdir /tmp/broken-initrd
cd /tmp/broken-initrd
zcat /boot/initrd-2.6.32-200.13.1.el5uek.broken | cpio -iv

This will unpack the broken initrd file into the current directory. /tmp/broken-initrd/init is the script generated by the mkinitrd command: it loads the essential drivers and performs any other vital steps for initially mounting the root filesystem in read-only mode. Once that is done, the real /sbin/init takes over from there.


Extract the good initrd to another directory in the same way, then run "diff -r /tmp/broken-initrd /tmp/good-initrd" to find differences between the directories. I think you'll find a few.