Mynor Aguilar
Valued Contributor

problems while booting from SAN from replicated LUN

Hello,
I have two EVAs; replication is set up from EVA1 to EVA2 for 9 LUNs (including the boot volume).

Today I unpresented all LUNs from EVA1, failed over the BC to EVA2, and presented all replicated LUNs from EVA2. There were no issues with my replicated LUNs, so an exact copy should be on the other EVA.

At the beginning I had a lot of trouble seeing the LUNs; after some modifications to the boot parameters in the EFI HBA driver settings, I was able to see the presented LUNs.

Obviously the primary and alternate boot paths were going to be different, so I went to EFI and did a map -fs, and both fs0 and fs1 appeared as bootable. While trying to boot manually via EFI from fs0:, the server crashed (see the output in the attached txt file).
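For reference, the EFI sequence I tried looked roughly like this (a sketch from memory; the loader path is the standard Integrity one):

Shell> map -r                    # rebuild device mappings after re-presenting the LUNs
Shell> map -fs                   # list the filesystem mappings (fs0, fs1, ...)
Shell> fs0:                      # switch to the first bootable filesystem
fs0:\> \EFI\HPUX\HPUX.EFI        # launch the HP-UX boot loader
HPUX> boot vmunix                # normal boot - this is where it crashed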


Basically, I have some questions.
Do I have to do something else before trying to boot from my replicated LUN?

Any ideas why it crashes like that? If you take a look at the file, there are some weird errors.

Any help would be appreciated.

regards,
Kishore Anand
Occasional Advisor

Re: problems while booting from SAN from replicated LUN

Try these:
1. sasd: please install the latest SerialSCSI-00 depot ASAP.

2. Try booting in LVM maintenance mode with boot -lq (see the sketch after this list).

3. Check the firmware on EVA2 (it should support the boot process).

4. Please mention the server model and OS version as well.
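A minimal sketch of that boot-loader sequence (note: on the systems I know, -lm is the LVM maintenance mode flag, while -lq only skips the quorum check):

fs0:\> \EFI\HPUX\HPUX.EFI      # start the HP-UX loader from the boot filesystem
HPUX> boot -lm vmunix          # LVM maintenance mode: boots without activating vg00
HPUX> boot -lq vmunix          # alternative: normal boot without the quorum check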
Torsten.
Acclaimed Contributor

Re: problems while booting from SAN from replicated LUN

The server is an rx2660.

Did you upgrade the SAS driver after you cloned the data?

Some driver versions update the firmware too; IMHO this is the reason for the SAS-related error message.


But the reason for the crash is IMHO different.

Is this 11.23?

The path to your disks is probably different, so the LVM config does not match.

You need to vgexport/vgimport vg00 to adjust the configuration.
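A minimal sketch of that procedure from LVM maintenance mode (the device file c2t0d0s2 is only an example; take the real one from ioscan):

# find the device file of the boot LUN on its new hardware path
ioscan -fnC disk

# rebuild the vg00 configuration against the new path
vgexport /dev/vg00                     # drops the stale lvmtab entry
mkdir /dev/vg00
mknod /dev/vg00/group c 64 0x000000    # recreate the group file
vgimport /dev/vg00 /dev/dsk/c2t0d0s2   # example device file - use yours
vgchange -a y /dev/vg00                # activate to verify it worked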

Hope this helps!
Regards
Torsten.

__________________________________________________
There are only 10 types of people in the world -
those who understand binary, and those who don't.

__________________________________________________
No support by private messages. Please ask the forum!

If you feel this was helpful please click the KUDOS! thumb below!   
Stefan Stechemesser
Honored Contributor

Re: problems while booting from SAN from replicated LUN

Hello,

fs0: seems to be an internal SCSI disk:
fs0 : Acpi(HWP0002,PNP0A03,200)/Pci(1|0)/Scsi(Pun1,Lun0)/HD(Part3,Sig1CEBCCEE-0063-11DC-8004-D6217B60E588)


and fs1: is your EVA boot LUN.
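Reading that device path piece by piece (my interpretation of the usual EFI notation):

Acpi(HWP0002,PNP0A03,200)    # HP host bridge (HWP0002) - an internal bus, not the FC HBA
/Pci(1|0)                    # PCI device 1, function 0: the core I/O SCSI controller
/Scsi(Pun1,Lun0)             # SCSI target 1, LUN 0 -> a local disk
/HD(Part3,Sig...)            # partition 3, identified by its GUID signature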

I think the internal SCSI disk was not the disk you used as the source, and as you wrote, you unpresented the old LUN anyway.

The problem now is that the new LUN has a new hardware path and that you use LVM.

After you successfully booted the kernel in single-user mode, LVM tried to activate vg00 to mount the root filesystem. This did not work because it was searching for the OLD disk, and on the path of the OLD disk there now seems to be another disk, which resulted in the failure:

WARNING: ROOT device 0x1f100102 is a non-LVM partition, disallowed on LVM disk.
WARNING: ROOT device 0x1f100102 has been deconfigured (set to 0xffffffff).


-----------------------------------------------------
|                                                   |
|     SYSTEM HALTING during LVM Configuration       |
|                                                   |
-----------------------------------------------------
Non-lvm root on LVM disk


Solution: boot in maintenance mode, then export and reimport vg00 (with the correct root device file).
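After the import, the on-disk boot data and the firmware boot path should point at the new LUN as well; roughly (the hardware path is a placeholder):

lvlnboot -R /dev/vg00              # rewrite the boot data (BDRA) with the new paths
lvlnboot -v                        # verify the root, boot, swap and dump entries
setboot -p <hw_path_of_new_lun>    # make the replicated LUN the primary boot path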

Mynor Aguilar
Valued Contributor

Re: problems while booting from SAN from replicated LUN

Hello,
It is an rx2660 with HP-UX 11.23.

Actually, I hadn't paid attention to the SAS driver because I have never used it. It's kind of weird that only one of the boot paths is available (fs1:); I was supposed to have multiple paths to the LUN, but it seems to be a misconfiguration on the fabric. Thanks for your suggestion about maintenance mode; I was not really sure how to handle the different HW path issue, but exporting and importing vg00 sounds like the solution. I'll keep you posted.

regards,