Operating System - HP-UX
Jeff Lanter
New Member

Problems Restoring VG After Disk Replacement

Hi,

I'm working on a very old HP server running HP-UX 9.x. The second of two hard disks failed (non-root, non-mirrored), so we've yanked it out and replaced it.

After installing the new drive, I took the following steps in an attempt to recreate all of the logical volumes and mount them, prior to restoring files to them from tape. This was all done in maintenance mode (hpux -lm). There is only one VG defined (vg00), and the replacement disk device file is c0d0s2 (same as the failed unit); the existing disk is c2d0s2.

1. newfs /dev/rdsk/c0d0s2 C2490A
2. vgcfgrestore -n /dev/vg00 /dev/rdsk/c0d0s2
3. rm /etc/lvmtab
4. vgimport -v /dev/vg00 /dev/rdsk/c0d0s2 /dev/rdsk/c2d0s2
5. vgscan -v
6. vgchange -a y /dev/vg00

At this point, the volume group is active. However, I can't mount any of the logical volumes that are on the new drive (lvol3 through lvol6) because fsck fails, complaining about TRASHED VALUES and BAD MAGIC NUMBER in the super blocks of the logical volumes. I've tried specifying alternate superblocks with the -b option, but then fsck complains about just about everything else... unknown file types, bad addresses, etc.

Now, I've noticed that as soon as I issue the vgcfgrestore command, fsck starts complaining about the super block. After the newfs, fsck is happy, but after vgcfgrestore, fsck is unhappy, hence I can't get anything to mount.

I'm stuck. Does anyone have any ideas? Am I just going about this the wrong way?
A. Clay Stephenson
Acclaimed Contributor

Re: Problems Restoring VG After Disk Replacement

9.x was the last release to support disk slices (well, technically, slices could be used on later releases, but they had to have been created on 9.x or earlier). The vgcfgrestore applies to the entire disk (c0d0s0), not a slice, but you can't mix disk slices with LVM.
If it ain't broke, I can fix that.
TwoProc
Honored Contributor

Re: Problems Restoring VG After Disk Replacement

I'm kind of confused. From one point of view it looks like you're not using LVM, since in step 1 you're creating a file system directly on the device, which is fine. But then you proceed to work with LVM via the vg commands.

But since you say you can't get the lvols back, it looks like you're going for LVM.
So:
change step 1 to pvcreate instead of newfs;
run steps 2-6 as is;
then run "pvdisplay -v /dev/rdsk/c0d0s2" redirected to a file, and get a list of all of the lvols that are on that drive (it should be lvol3 through lvol6, since you mentioned those are the ones missing);
now just newfs the lvols on that drive and mount them. A sketch of the full sequence follows.
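
Something like this, as a rough sketch (the /tmp file name is just an example, and note that pvdisplay generally wants the block device file rather than the raw one):

pvcreate /dev/rdsk/c0d0s2                                 # instead of the newfs in step 1; may need -f since the disk was already newfs'd
vgcfgrestore -n /dev/vg00 /dev/rdsk/c0d0s2
rm /etc/lvmtab
vgimport -v /dev/vg00 /dev/rdsk/c0d0s2 /dev/rdsk/c2d0s2
vgscan -v
vgchange -a y /dev/vg00
pvdisplay -v /dev/dsk/c0d0s2 > /tmp/pvmap.out             # block device file here
newfs /dev/vg00/rlvol3 C2490A                             # repeat for each lvol listed in the pvdisplay output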

Big problem though. On most systems /dev/vg00/lvol3 is usually root, lvol4 is usually /tmp, lvol5 could possibly be /home, and lvol6 might be /opt (but maybe not in 9.x). The point is, I don't think you've got a root system, but I really can't remember the splits from 9.x (if they were different). If this is true, you're going to need to do a restore. If you don't have a restore tape (or a dd of the disk), this probably means a reinstall...

If you are totally hosed, there is one possible way out of this that I've used in the past. In many cases I found that what went out on the drives was not the media but the logic card portion of the drive. So, back before I had mirroring, my first attempt was to take the new drive that HP brought out (if it was the exact same drive), pull the logic board off, and put it on the non-running drive. If it came up, I could a) just run with what I've got, or b) "dd" the contents of the disk to tape, swap the logic board back to the new replacement drive, and "dd" the tape onto the new media.
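
If you do go the dd route, a minimal sketch; the tape device name (/dev/rmt/0m) is an assumption for your box:

dd if=/dev/rdsk/c0d0s2 of=/dev/rmt/0m bs=64k    # old disk (with the swapped logic board) out to tape
dd if=/dev/rmt/0m of=/dev/rdsk/c0d0s2 bs=64k    # then from tape onto the new drive, after swapping the board back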

If you're lucky, and it's not a media failure and you've got the same drive to replace with, the above would get you back up and running quickly.
We are the people our parents warned us about --Jimmy Buffett
A. Clay Stephenson
Acclaimed Contributor
Solution

Re: Problems Restoring VG After Disk Replacement

After checking some of my very old notes, s2 was generally the entire disk. The problem is that your newfs is being clobbered by the vgcfgrestore: because s2 refers to the entire disk, newfs is creating a filesystem over the whole thing. You should do the vgcfgrestore first, then a vgchange -a y. Next do a vgdisplay -v, and that should show you the lvols in the volume group. You do not do a vgimport, because the vgcfgrestore should have taken care of that for you, although you may have a very confused system now. After determining the lvols, you do the newfs on each of these.
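
To put that concretely, a rough sketch of the ordering (the lvol name is just an example; take the real list from the vgdisplay output):

vgcfgrestore -n /dev/vg00 /dev/rdsk/c0d0s2    # restore the LVM configuration first
vgchange -a y /dev/vg00                       # activate the volume group
vgdisplay -v /dev/vg00                        # list the lvols in the group
newfs /dev/vg00/rlvol3 C2490A                 # then newfs each lvol, via its raw device file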
If it ain't broke, I can fix that.
Jeff Lanter
New Member

Re: Problems Restoring VG After Disk Replacement

I think I solved my own problem... I didn't realize that I needed to create a filesystem _within_ each logical volume (duh). Once I made the volume group available, I needed to do a...

newfs /dev/vg00/rlvol3 C2490A

...on each of the logical volumes in turn. After that, I was able to fsck and mount each volume. At the moment, I am successfully pulling files back from tape to each of the logical volumes.
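
For the record, the same thing as a small loop, assuming lvol3 through lvol6 as described above:

for i in 3 4 5 6
do
    newfs /dev/vg00/rlvol$i C2490A   # make a filesystem inside each lvol
    fsck /dev/vg00/rlvol$i           # verify it before mounting
done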

Thanks for the info!
Jeff Lanter
New Member

Re: Problems Restoring VG After Disk Replacement

See previous posting.