Operating System - HP-UX

configure out a bad disk in a volume group

 
ALH
Occasional Advisor

configure out a bad disk in a volume group

Can anyone help me with the following:

I get the following error message:
vgdisplay: Warning: couldn't query physical volume "/dev/dsk/c2t15d0":
The specified path does not correspond to physical volume attached to
this volume group
vgdisplay: Warning: couldn't query all of the physical volumes.

I know that this disk is defective, and I am looking for a way to configure it out of the volume group. I have tried a vgreduce, but I then get the same error message. One logical volume resides on the disk.

Please help me

Cees Wielink

6 REPLIES
John Palmer
Honored Contributor

Re: configure out a bad disk in a volume group

Hi,

Your course of action should depend on whether or not you intend to replace the disk. If so, then do the following:-

1. replace the defective disk
2. vgcfgrestore -n /dev/vg?? /dev/rdsk/c?t?d?
3. Make a new filesystem on your single volume and recover it from a backup.

If you simply want to remove the volume and disk then:-
1. remove the failed disk.
2. lvremove -A n -f /dev/vg??/lvol?
3. vgreduce -A n -f /dev/vg??
4. vgcfgbackup /dev/vg??
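As a sketch of both procedures, assuming a hypothetical volume group /dev/vg01, logical volume lvol1, and failed disk c2t15d0 (substitute your own device names, and make sure you have a current vgcfgbackup and data backup first):

```shell
# Case 1: the disk is being replaced.
vgcfgrestore -n /dev/vg01 /dev/rdsk/c2t15d0   # rewrite the LVM headers on the new disk
vgchange -a y /dev/vg01                       # activate the volume group
newfs -F vxfs /dev/vg01/rlvol1                # make a new filesystem on the volume
# ...then restore the filesystem contents from backup...

# Case 2: the volume and disk are being removed for good.
lvremove -A n -f /dev/vg01/lvol1              # drop the logical volume (no autobackup)
vgreduce -A n -f /dev/vg01                    # force-remove the missing physical volume
vgcfgbackup /dev/vg01                         # save the new LVM configuration
```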

Regards,
John
Ajitkumar Rane
Trusted Contributor

Re: configure out a bad disk in a volume group

You can just do as John has said, which is a really clean method. Alternatively, you can try making a backup copy of /etc/lvmtab and executing vgscan to rebuild a new lvmtab. Then check that the faulty disk's device file is no longer present in lvmtab, and compare it with the backup copy to confirm the rest of the information is still correct.
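A minimal sketch of that lvmtab rebuild, assuming vgscan recreates the file only when it is missing (keep the backup copy so you can compare, or put it back if anything goes wrong):

```shell
cp /etc/lvmtab /etc/lvmtab.bak   # keep a copy for comparison and rollback
rm /etc/lvmtab                   # vgscan rebuilds lvmtab only when it is absent
vgscan -v                        # scan the disks and recreate /etc/lvmtab
strings /etc/lvmtab              # verify the faulty disk's device file is gone
```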
Amidsts difficulties lie opportunities
Bill McNAMARA_1
Honored Contributor

Re: configure out a bad disk in a volume group

NO, don't mv /etc/lvmtab and vgscan,
for lots of reasons, but most importantly:
if you have load-balanced your traffic along
all available paths, vgscan will recreate
the lvmtab differently, resulting in
performance degradation. Your main access paths
will then be based on the numeric ordering of the disk
device files.

So, try a
vgreduce -f vgname
According to the man pages it should fix your
problem...
but only do that if the standard way,
vgreduce /dev/vgname /dev/dsk/cXtYdZ
doesn't work!
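With a hypothetical vg01 and failed path c2t15d0, that order of attempts would be:

```shell
# Try the standard per-disk removal first;
# fall back to the forced form only if it fails.
vgreduce /dev/vg01 /dev/dsk/c2t15d0 || vgreduce -f /dev/vg01
```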

Also, as a last resort, vgexport the vg
and reimport it:
ls -l /dev/*/group
choose a unique minor number
mkdir /dev/vgname
mknod /dev/vgname/group c 64 0x0Z0000
vgimport /dev/vgname /dev/dsk/cXtYdZ ....
Now, that won't work if the kernel is out of
sync with the data on the disk; you may have
to boot in LVM maintenance mode (hpux -lq)
or use vgcfgrestore to rewrite a correct
header on the LVM disk.
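A sketch of that export/re-import cycle, with a hypothetical vg01 on c2t15d0 and an assumed free minor number 0x030000 (check `ls -l /dev/*/group` for the minors already in use):

```shell
vgexport -m /tmp/vg01.map /dev/vg01      # remove the VG, saving a map of LV names
mkdir /dev/vg01
mknod /dev/vg01/group c 64 0x030000      # major 64 is LVM's; the minor must be unique
vgimport -m /tmp/vg01.map /dev/vg01 /dev/dsk/c2t15d0   # re-import from the surviving disk(s)
vgchange -a y /dev/vg01                  # activate the re-imported group
```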

I'm thinking that sometimes this problem is
caused by vgscan itself, i.e.
if you dd an LVM disk onto another free disk,
vgscan will report both as belonging to the
vg, with the PV's VGRA not matching the static files.

Good luck.
Bill
It works for me (tm)
Bill McNAMARA_1
Honored Contributor

Re: configure out a bad disk in a volume group

John's answer will fix it; you could also have used
pvmove to move the lv from one PV to another,
assuming the PV was okay... but
sorry, I never read the full question.
Hope you have a good backup.....

Bill
It works for me (tm)
DLH
Occasional Advisor

Re: configure out a bad disk in a volume group

The volume group should be made unavailable before executing vgcfgrestore.

vgchange -a n /dev/vg???
vgcfgrestore -n /dev/vg??? /dev/rdsk/c?t?d?
vgchange -a y /dev/vg???
David Hixson
Advisor

Re: configure out a bad disk in a volume group

Just to make sure our answers are confusing enough, there are a few other things to take into account.

The first is that you don't want to put a new disk in the same slot until you have fully removed the one that has failed from the LVM configuration.

Secondly, you have to make sure the broken disk has been removed from any lvols. Since it isn't working properly, the normal lvreduce commands will not work. Instead, you can use 'lvdisplay -v -k /dev/whatever/lvol1' to determine the PV number of the broken drive, then run 'lvreduce [-m 0] -k /dev/vgwhatever/lvol1'; depending on the OS version and patch level, the PV number goes either after the -k option or at the end of the command.
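With a hypothetical /dev/vg01/lvol1 whose broken drive shows up as PV key 1, the sequence above might look like this (the exact placement of the key argument varies by OS version and patch level, as noted):

```shell
lvdisplay -v -k /dev/vg01/lvol1      # list extents by PV key instead of PV path;
                                     # note the key of the unreachable disk (say, 1)
lvreduce -m 0 -k /dev/vg01/lvol1 1   # drop the mirror copy held on PV key 1
```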

Then you can try the vgreduce -f. If the vgreduce fails, pull the broken drive, remove lvmtab, run vgscan, and try again (it should work). After the vgreduce -f you have to rebuild lvmtab again, and then you can put in a new disk, pvcreate it, and you are good to go.
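A sketch of that recovery path, again with hypothetical names (vg01, replacement disk c2t15d0):

```shell
vgreduce -f /dev/vg01                 # force the missing PV out of the group
# if that fails: pull the broken drive, rebuild lvmtab, and retry
rm /etc/lvmtab && vgscan -v
vgreduce -f /dev/vg01
# rebuild lvmtab once more, then prepare the replacement disk
rm /etc/lvmtab && vgscan -v
pvcreate -f /dev/rdsk/c2t15d0         # initialize the new disk for LVM
vgextend /dev/vg01 /dev/dsk/c2t15d0   # add it back into the volume group
```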

Be aware that all of these tools and command options (lvreduce -k and vgreduce -f) are subject to frequent change based on patches, and their functionality will vary.

The prior answer, that this will mess up any intentional load balancing, is true; however, I do not know of a way to work around that for a truly broken disk.
LVM is a powerful tool in the hands of the devious.