
Move vPar to new H/W path

Keith Clark
Valued Contributor

Move vPar to new H/W path

Hello All,

I am creating vPars on an N-Class machine booting from a SAN-connected XP512. I want to be able to create a vPar (with the O/S installed) on one machine using a primary boot H/W path of 1/8/0/0.blah and an alternate boot path of 0/10/0/0.blah. I would then like to be able to point a different machine at the same LDEVs on H/W paths 0/12/0/0 (primary) and 1/4/0/0 (alternate), shut down the first vPar, and boot up the new one.
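
For reference, the boot paths themselves can be set from the running partition with setboot; a sketch, assuming setboot behaves in a vPar as it does on a standalone server, with the target/LUN portion of each path left as a placeholder:

# setboot -p 1/8/0/0.<target>.<lun> -a 0/10/0/0.<target>.<lun>

Running setboot with no arguments displays the current primary and alternate paths.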

The first problem that I have encountered is with LVM. When I attempt to boot on the new machine, the controller instance numbers have all changed, ioinit fails, and the vPar panics. Has anyone else successfully done this? What steps do I need to follow?

Thank you in advance,

Keith
5 REPLIES
Eugeny Brychkov
Honored Contributor

Re: Move vPar to new H/W path

Keith,
I did not heare about anyone sharing boot devices. Usually boot devices are internal disks, but external storage keeping user data is shared. In addition, you'll not be able to boot from the same volume multiple servers/partitions because they will request r/w access to volumes and data corruption may occur and server will panic.
I do not think you'll achieve what do you want
Eugeny
Keith Clark
Valued Contributor

Re: Move vPar to new H/W path

I guess I was not very clear. I am not trying to have multiple servers access the same disks simultaneously. I just want to be able to define a vPar and install the O/S on one server, but if that server has a H/W issue I want to be able to move that vPar to a different host (same class and type of server, with the same vPar config except for the I/O H/W paths) and boot it. Sort of a poor man's manual MC/SG or DR.

Does that make more sense?
Eugeny Brychkov
Honored Contributor

Re: Move vPar to new H/W path

Of course. The machines should be identical from every point of view, including the I/O subsystem; then they will boot without problems. Any difference may affect reliability, and the system may crash.
That's my point of view.
Eugeny
Keith Clark
Valued Contributor

Re: Move vPar to new H/W path

Maybe I should approach this from a different angle, since I think the vPar portion is confusing the issue.

Let's say you booted off a SCSI disk at H/W path 1/8/0/0 and the card at 1/8/0/0 died. What would you need to do in order to boot off the spare SCSI card you had in 0/12/0/0?
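
At the firmware level, booting from the spare card is just a matter of giving BCH the other path; a sketch, assuming the disk is visible through the card at 0/12/0/0:

Main Menu: Enter command > sea
Main Menu: Enter command > bo 0/12/0/0

It's the LVM configuration on the disk, which still expects the old instance numbers, that needs fixing.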
Douglas D. Denney
Frequent Advisor

Re: Move vPar to new H/W path

I haven't done this with the vPar software, but I have moved root volumes between servers. You need to do vgexport/vgimport operations from LVM maintenance mode to get it to work.

Note: this worked for me; you may need to tweak it for your situation.

on "System A" do the following:

1. Create a mapfile of the vg00 logical volumes:

# vgexport -p -v -m /mapfile /dev/vg00

This will create a plain text file, called /mapfile, that looks like the following:
1 lvol1
2 lvol2
3 lvol3
4 lvol4
5 lvol5
6 lvol6
7 lvol7
8 lvol8
9 lvol9

2. FTP the mapfile to some other system to have available when the disk is moved
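
For example (remote hostname hypothetical):

# rcp /mapfile backuphost:/mapfile.vg00

The copy at /mapfile on the root disk itself travels with the LDEVs; the remote copy is just insurance.
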
3. Shut the system down and boot it in LVM maintenance mode:

# shutdown -h -y 0
boot_admin> boot pri isl
ISL> hpux -lm

4. Once in LVM maintenance mode, do the following:

# vgexport -v /dev/vg00

5. Shut down "System A"

From "System B", do the following:

6. Move disks to new server (physically or by zoning)

7. From the ISL prompt, enter the following:

ISL> hpux -lm

8. Create device files for new disks:

# insf
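
If special files for those paths already exist but are stale, the -e option reinstalls them:

# insf -e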

9. Next, look for the device files associated with the new disk:

# ioscan -fnH <hw_path>

for example,

# ioscan -fnH 56/52.4

10. Move the old lvmtab file out of the way so you can create a new one:

# mv /etc/lvmtab /etc/lvmtab.old
# vgscan

11. Import the volume group, specifying the new physical volume name:

# vgimport -v -m /mapfile <vg_name> <pv_path>

for example,

# vgimport -v -m /mapfile /dev/vg00 /dev/dsk/c3t4d0
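
Note: the vgexport in step 4 removes the /dev/vg00 directory, so if vgimport complains about a missing group file, recreate it first (major number 64 is the LVM group driver; the minor shown is the conventional one for vg00):

# mkdir /dev/vg00
# mknod /dev/vg00/group c 64 0x000000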

12. Activate the vg00 volume group:

# vgchange -a y /dev/vg00

13. Prepare logical volumes to become root, swap, dump, etc. at next boot:

# lvlnboot -R

14. Verify that the above command worked:

# lvlnboot -v

15. Clean out the mnttab file:

# rm /etc/mnttab
# touch /etc/mnttab

This file will be recreated when the system next boots. However, I found that if it is allowed to hang around after booting into LVM maintenance mode, you end up with a root volume called /dev/root, which doesn't seem to harm anything; it just looks strange.

16. Reboot and hope for the best.
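
One follow-up worth considering: once the system is up, setboot can point stable storage at the new card paths so the next autoboot uses them (target/LUN portions are placeholders):

# setboot -p 0/12/0/0.<target>.<lun> -a 1/4/0/0.<target>.<lun>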

Hope this works.