Operating System - HP-UX

Re: make_net_recovery/make_recovery issues

 
Kurt Renner
Frequent Advisor

make_net_recovery/make_recovery issues

We have been preparing for a hot-site disaster recovery test which is coming up in January, and I have run into a few issues with make_net_recovery/make_recovery.

We try to ensure that the hot-site vendor has the appropriate hardware at the disaster recovery site before we arrive, but in two previous tests the hardware differed slightly from the original, so we had to make adjustments to account for differences in hardware paths, SCSI IDs, disk sizes, etc.

When the boot disk needs to be changed, Ignite seems to get confused when calculating the appropriate logical volume sizes from the number of extents on the disk(s) selected to contain vg00. I have always been able to work around it by resetting the configuration to that of the recovery image and selecting the disks in a slightly different order, but it always takes several attempts before Ignite recognizes that there ARE enough free extents on the selected disks to hold vg00. There seems to be little rhyme or reason to it. What I suspect is that Ignite is seeing extents from a logical volume on more than one disk, because we make use of alternate links on nearly all of our systems. None employ LVM mirroring, since our systems are all on EMC disk arrays (boot and data volume groups).
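One way to sanity-check the extent math before cutting the image is to record the free physical extents per disk from `vgdisplay -v vg00`, so the totals can be compared against whatever disks the recovery site provides. A rough sketch of the parsing (the disk paths and numbers below are made up for illustration; on a live system you would pipe in the real `vgdisplay -v vg00` output instead of the here-document):

```shell
# Report free physical extents per disk in vg00.  Sample vgdisplay
# output is embedded so the parsing is clear; replace the here-document
# with `vgdisplay -v vg00` on a real system.
awk '/PV Name/ {pv = $NF}
     /Free PE/ {print pv, "free PE:", $NF}' <<'EOF'
   PV Name                     /dev/dsk/c4t0d0
   Free PE                     1250
   PV Name                     /dev/dsk/c5t0d0
   Free PE                     980
EOF
```

If the recovery target's disks have fewer total extents than those figures, the Ignite sizing complaints are at least genuine rather than phantom.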

The other issue I am currently seeing with Ignite (not normally seen under other circumstances) is that when the system finishes laying down the image to disk, it fails to create the device files for most disks using the sdisk driver. In fact, only one was created in the last test install from a make_net_recovery image, and it is an EMC disk director device of 7 MB, which is unusable to HP-UX. In this particular case, I have Fibre Channel-attached EMC disks on a K460 machine. The FC card is an A6685A, and the firmware level on the K460 CPU is 41.33, which is the latest and greatest firmware for a K460. During the recovery process, there were two instances when the special files were created for all disks via the "insf -e" command. What happened to them, I do not know. I was unable to find any information in the recovery log. I am also trying to get a K570 to boot off FC using the same image, but have been completely unsuccessful on that hardware platform. It also has the latest firmware (41.34). If I create the device files via "insf -e", everything appears normal. I just want to know if there is a bug in the recovery process, or if the Fibre Channel attachment is causing the issue.
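For reference, the post-recovery workaround amounts to rebuilding the special files by hand and then verifying them; these are the standard HP-UX commands (nothing here is Ignite-specific, and the class name assumes sdisk-driven devices):

```shell
ioscan -fnC disk        # list the disks the kernel has claimed
insf -e -C disk         # (re)install special files for the disk class
ls /dev/dsk /dev/rdsk   # verify block and raw device files now exist
```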

Any help with these issues would be appreciated!
Do it right the first time and you will be ahead in the long run.
Sanjay_6
Honored Contributor

Re: make_net_recovery/make_recovery issues

Hi Kurt,

Check out the Ignite-UX FAQ:

http://www.software.hp.com/products/IUX/iux_faq

Hope this helps.

Regds
harry d brown jr
Honored Contributor

Re: make_net_recovery/make_recovery issues

When you say "hot-site", do you really mean "cold-site"? The reason I ask is that you said you have EMC disks at your current production site. Are the EMC disks being SRDF'd to the remote site, or will you also have to recover the data at the recovery site?

live free or die
harry
Kurt Renner
Frequent Advisor

Re: make_net_recovery/make_recovery issues

Well, I guess by your description it is a cold site. We refer to it internally as a hot site, meaning there is hardware there and ready for action when we arrive, but no data to work from other than what we take with us on tape. There is no SRDF involved. System recovery is from Ignite recovery images on tape, and the rest is restored via Legato NetWorker.
Kurt Renner
Frequent Advisor

Re: make_net_recovery/make_recovery issues

I have finally figured out what causes the conflict in the Ignite-UX disk space calculations for logical volume sizes. There is a blurb in the FAQ about making sure that LVM is selected, with separate volumes for /, /var, /usr, etc. While this was not the cause of my problem, it got me thinking about those filesystems and why their space might be restricted. I then went to "Additional Tasks" -> "Volume Parameters..." and checked the disk assignment for each logical volume. What I found is that most of the logical volumes were restricted to residing on a particular physical disk in vg00 (I had two disks assigned). I changed the assignment to "Any" for those that do not have to be on the boot disk, and my problems with logical volume sizes disappeared.
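For anyone chasing the same thing: logical volumes pinned to a single disk can also be spotted on the running system, before ever leaving for the recovery site, via the distribution section of `lvdisplay -v`. A rough sketch of pulling out the physical volumes a given LV occupies (the sample output below is made up for illustration; on a live system you would loop over /dev/vg00/lvol* and feed in the real `lvdisplay -v` output):

```shell
# Print the physical volumes a logical volume occupies.  Sample
# `lvdisplay -v` distribution lines are embedded for illustration;
# replace the here-document with `lvdisplay -v /dev/vg00/lvolN`.
awk '/Distribution of logical volume/ {dist = 1; next}
     dist && /\/dev\/dsk\// {print $1}' <<'EOF'
   --- Distribution of logical volume ---
   PV Name                 LE on PV  PE on PV
   /dev/dsk/c4t0d0         50        50
EOF
```

An LV that only ever lists one PV there, in a two-disk vg00, is a candidate for the "Any" assignment described above.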
Thanks to those that responded.