Operating System - HP-UX

M. Bianchy
Occasional Advisor

General question about VGDA structure on disk

Hi guys,

A while ago I was asked to help in this scenario:

An older rp4440 machine with two mirrored system disks, running 11i v1 with old patch levels (around 2007). One disk failed with a bearing failure months ago; I guess the LVM driver set all extents on that disk to "stale" and continued with the remaining mirror copy. Recently the second disk started to show I/O errors because of a head crash and eventually failed too. The last good Ignite backup is from 2007 (no question, this is a self-made problem: no recent backups, no device monitoring); newer backups are unreadable because they were made on 12-year-old, worn-out DDS tapes.

We managed to get the first failed HDD back online and tried to recover from that disk (in LVM maintenance mode we restored the most recent vg00.conf back to the disk; we have a current copy of /etc, which was recoverable from the second failed disk). With the first disk I can do a vgchange -a y -q n -s vg00 without problems. If I skip the "-q n -s" parameters, the system complains about the missing quorum. So far so good.
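For anyone following along, the sequence was roughly this (the device file of the resurrected disk is just a placeholder - adjust to your hardware):

# boot into LVM maintenance mode from the ISL prompt
ISL> hpux -lm (;0)/stand/vmunix
# restore the saved LVM configuration to the resurrected disk
vgcfgrestore -n /dev/vg00 /dev/rdsk/c1t15d0
# activate without quorum (-q n) and without syncing stale extents (-s)
vgchange -a y -q n -s vg00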

Now for my own curiosity: the LVM driver believes this disk is the failed one and all extents are marked as "stale", although I can access the whole disk with dd (so no I/O errors). I believe it could be possible to recover the data if I could manually set these extents to "current", but there seems to be no way or tool to do so.
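(To check readability I simply streamed the whole raw disk through dd; again, the device file is a placeholder:)

# read the entire raw disk and discard the data - any I/O error would abort
dd if=/dev/rdsk/c1t15d0 of=/dev/null bs=1024k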

I was thinking about how the LVM driver handles this internally, and I'm convinced there is a lookup table recording which logical extent points to which physical extent(s) and what its state (stale/current) is. I tried to figure things out with a hex editor, but without a closer description of the layout that was not successful. My second thought was to simply block-copy the lvols to a newly created VG, but that would require knowing which extent belongs to which lvol (see the extent map below).
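From user space the mapping is at least visible read-only via lvdisplay (the lvol name is just an example):

# per-extent map of one lvol: LE number, backing PV/PE, and current/stale status
lvdisplay -v /dev/vg00/lvol3

The "Logical extents" section shows the LE-to-PE mapping and a status column per mirror copy - but of course it doesn't let me change anything.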

Wouldn't it be possible to just set these extents to current? Is there a more detailed description of the on-disk structures? My old LVM training material doesn't go that deep, and all the links I found (also in this forum) are dead...

Ideas much appreciated


Re: General question about VGDA structure on disk

If you can get the VG active and mount the filesystems, why not just take an Ignite backup at this point? Then you can simply re-install if all other options fail.
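Something along these lines (server name and tape device are placeholders - adjust to your site):

# recovery archive over the network to an Ignite-UX server
make_net_recovery -s myigniteserver -x inc_entire=vg00
# or straight to a local tape drive
make_tape_recovery -a /dev/rmt/0mn -x inc_entire=vg00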

As to your actual question - the extents are marked as stale because LVM thinks there is another disk with newer copies of the extents on it. So why not try vgreduce -f on the disk that has really failed? That should remove LVM's knowledge of the newer copies - then the extents on the remaining disk will presumably become current? That's just guesswork on my part though, as it's a long time since I had to do anything like this in anger. So you might want to wait for a second opinion, and definitely follow my advice above about getting a known good backup.
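i.e. something like this (untested - and take the safety net first):

# safety net so you can get back to this point
vgcfgbackup /dev/vg00
# forcibly drop the really-failed (missing) disk from the VG
vgreduce -f /dev/vg00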


I am an HPE Employee
Accept or Kudo
MK_J
HPE Pro

Re: General question about VGDA structure on disk

I appreciate the suggestion from Duncan, and it seems to be the correct way to proceed.
As one disk is good, you can break the mirror and do a vgreduce to remove the failed disk from lvmtab.
Next take a backup of the good disk, replace the faulty disk, and run vgcfgrestore and vgsync - roughly as sketched below.
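A rough outline, assuming c2t6d0 is the faulty disk (all device files are placeholders):

# break the mirror on every logical volume
lvreduce -m 0 /dev/vg00/lvol1 /dev/dsk/c2t6d0
# ...repeat for lvol2 .. lvolN, then remove the disk from the VG
vgreduce /dev/vg00 /dev/dsk/c2t6d0
# after physically replacing the disk, restore the LVM headers and resync
vgcfgrestore -n /dev/vg00 /dev/rdsk/c2t6d0
vgchange -a y /dev/vg00
vgsync /dev/vg00
# (if the mirror copies were removed above, re-create them with
#  lvextend -m 1 per lvol instead of relying on vgsync)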

Refer to the Disk Replacement Scenarios section in the document "When Good Disks Go Bad: Dealing with Disk Failures Under LVM":

https://support.hpe.com/hpsc/doc/public/display?docId=emr_na-c01911837

In case you replace both disks, you would need to restore the data from backup.


I work for HPE

Accept or Kudo

M. Bianchy
Occasional Advisor

Re: General question about VGDA structure on disk

Hi Duncan, although I can activate the volume group, all extents are still stale, so I cannot mount the filesystems - nor even dump the lvols with dd to another system (dd from an lvol gives an I/O error because all its extents are stale). The basic question remains: how can I manually set the extents to "current" so I can mount/fsck the filesystems? "Manually" could mean some tool, or - worst case - a hex editor with which I could modify the lookup table. I already tried "vgreduce -f", but that failed because the 2nd disk is not available anymore (I can't remember the exact error message, but even -f didn't do the trick). A friend of mine told me that the AIX LVM can do something like that - but the functions differ a lot between HP-UX and AIX (and Linux, btw).

Re: General question about VGDA structure on disk

OK - more attempts to pull this procedure from my long-term storage memory!

IIRC you can't vgreduce while there are extents on a disk...

So before you can vgreduce you need to lvreduce...

Start by taking a vgcfgbackup so you can get back to this point...

Then first identify which PVKEY belongs to the failed disk and which to the stale one, using lvdisplay -v -k <lv name> for each LV

Then use lvreduce -m 0 -k <lv name> <PVKEY>, where PVKEY is the key of the failed disk identified above (in your case this is presumably the disk which *doesn't* have stale extents on it)

When you have done this for all LVs in vg00, you should then be able to get the vgreduce to work - see the sketch below
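Per LV it would look something like this (lvol name and key value are placeholders; untested):

# safety net so you can get back to this point
vgcfgbackup /dev/vg00
# find the PVKEYs and per-extent status
lvdisplay -v -k /dev/vg00/lvol1
# remove the mirror copy that lives on the really-failed disk, by key
lvreduce -m 0 -k /dev/vg00/lvol1 0
# ...repeat for every lvol in vg00, then:
vgreduce -f /dev/vg00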

** I don't warrant this will work, as it's a LONG time since I did this! **

 


I am an HPE Employee
Accept or Kudo
M. Bianchy
Occasional Advisor

Re: General question about VGDA structure on disk

Well, this sounds way better than any other idea so far - I will give it a try. Maybe it won't work, because as "seen" from the LVM I am dealing with the disk which failed a long time ago and has been resurrected. So I think the scenario is more like:

- 1st disk fails completely (all extents going "stale")
- much later the 2nd disk also begins to fail, extent by extent, until the system stops - eventually failing completely
- 1st disk has been resurrected (with the extents still "stale") but won't allow access to the data

Too bad I don't even have a backup of the vg00.conf from before the 1st disk failed. I hadn't seen the -k parameter of lvreduce before, maybe because we always just used the device files (which are hard-wired to HW paths on this machine). I will set up a test environment to simulate the behavior - I can't imagine there is no way to recover the data from a 100% readable disk, where the data recovery company told us only a few blocks were totally unreadable.
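As a first sketch for that test setup, something like this on a scratch box (all device files are hypothetical, and -m 1 needs MirrorDisk/UX installed):

# build a small mirrored test VG from two spare disks
pvcreate -f /dev/rdsk/c3t0d0
pvcreate -f /dev/rdsk/c3t1d0
mkdir /dev/vgtest
mknod /dev/vgtest/group c 64 0x020000
vgcreate /dev/vgtest /dev/dsk/c3t0d0 /dev/dsk/c3t1d0
lvcreate -m 1 -L 100 -n lvtest /dev/vgtest
# then pull the first disk, keep writing to lvtest, pull the second,
# re-attach the first and see whether the "all stale" state reproduces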