Lvm disk issue in root Vg

 
SOLVED
Sreer
Valued Contributor

Lvm disk issue in root Vg

Hi Gurus,

I am facing a disk issue on my server.

root@nfrx1:.../root # ioscan -fnC disk
Class I H/W Path Driver S/W State H/W Type Description
======================================================================
disk 1 0/0/2/1.15.0 sdisk CLAIMED DEVICE HP 73.4GMAW3073NC
/dev/dsk/c3t15d0 /dev/rdsk/c3t15d0
root@nfrx1:.../root # uname -a
HP-UX nfrx1 B.11.11 U 9000/800 542760578 unlimited-user license
root@nfrx1:.../root #

root@nfrx1:.../root # strings /etc/lvmtab
/dev/vg00
/dev/dsk/c1t15d0
/dev/dsk/c3t15d0
root@nfrx1:.../root #


root@nfrx1:.../root # vgdisplay
vgdisplay: Warning: couldn't query physical volume "/dev/dsk/c1t15d0":
The specified path does not correspond to physical volume attached to
this volume group
vgdisplay: Warning: couldn't query all of the physical volumes.
--- Volume groups ---
VG Name /dev/vg00
VG Write Access read/write
VG Status available
Max LV 255
Cur LV 10
Open LV 10
Max PV 16
Cur PV 2
Act PV 1
Max PE per PV 4384
VGDA 2
PE Size (Mbytes) 16
Total PE 4374
Alloc PE 1350
Free PE 3024
Total PVG 0
Total Spare PVs 0
Total Spare PVs in use 0

root@nfrx1:.../root #


root@nfrx1:.../root # vgreduce -f vg00 | more
Not all extents are free. i.e. Out of 4374 PEs, only 3024 are free.
You must free all PEs using lvreduce/lvremove before the PV can be removed.
Example: lvreduce -A n -m 0 /dev/vg01/lvol1.
lvremove -A n /dev/vg01/lvol1
Here's the map of used PEs

--- Logical extents ---
LE LV PE Status 1
0000 lvol1 0000 stale
0001 lvol1 0001 stale
0002 lvol1 0002 stale
0003 lvol1 0003 stale
0004 lvol1 0004 stale
0005 lvol1 0005 stale
0006 lvol1 0006 stale
0007 lvol1 0007 stale
0008 lvol1 0008 stale
0009 lvol1 0009 stale
0010 lvol1 0010 stale



This is the root VG with 2 disks,

but now only 1 disk is showing in ioscan (please see the output above).

While checking with the PV key option:


root@nfrx1:.../root # lvdisplay -v -k /dev/vg00/lvol9 | more
lvdisplay: Warning: couldn't query physical volume "/dev/dsk/c1t15d0":
The specified path does not correspond to physical volume attached to
this volume group
lvdisplay: Warning: couldn't query all of the physical volumes.
--- Logical volumes ---
LV Name /dev/vg00/lvol9
VG Name /dev/vg00
LV Permission read/write
LV Status available/stale
Mirror copies 1
Consistency Recovery MWC
Schedule parallel
LV Size (Mbytes) 1024
Current LE 64
Allocated PE 128
Stripes 0
Stripe Size (Kbytes) 0
Bad block on
Allocation strict
IO Timeout (Seconds) default

--- Distribution of logical volume ---
PV Name LE on PV PE on PV
/dev/dsk/c3t15d0 64 64

--- Logical extents ---
LE PV1 PE1 Status 1 PV2 PE2 Status 2
00000 0 01186 stale 1 01186 current
00001 0 01187 stale 1 01187 current
00002 0 01188 stale 1 01188 current
00003 0 01189 stale 1 01189 current
00004 0 01190 stale 1 01190 current
00005 0 01191 stale 1 01191 current
00006 0 01192 stale 1 01192 current
00007 0 01193 stale 1 01193 current
00008 0 01194 stale 1 01194 current
00009 0 01195 stale 1 01195 current
00010 0 01196 stale 1 01196 current
00011 0 01197 stale 1 01197 current


So here, can I reduce the LVs using the PV key?

I have never used PV keys...

In this case, which PV key do I need to use for the reduce?

After reducing all the LVs using the PV key,

how will I reduce the disk from the VG?


root@nfrx1:.../root # vgreduce /dev/vg00 /dev/dsk/c1t15d0
vgreduce: Couldn't query physical volume "/dev/dsk/c1t15d0":
The specified path does not correspond to physical volume attached to
this volume group
root@nfrx1:.../root #

I tried the vgscan preview mode also.


root@nfrx1:.../root # vgscan -v -p
vgscan: Warning: couldn't query physical volume "/dev/dsk/c1t15d0":
The specified path does not correspond to physical volume attached to
this volume group
vgscan: Warning: couldn't query all of the physical volumes.
vgscan: The physical volume "/dev/dsk/c3t15d0" is already recorded in the "/etc/lvmtab" file.

vgscan: has no corresponding valid raw device file under /dev/rdsk.
Verification of unique LVM disk id on each disk in the volume group
/dev/vg00 failed.

root@nfrx1:.../root #


Please help me.

Rgds
Sree



11 REPLIES
Vivek Bhatia
Trusted Contributor

Re: Lvm disk issue in root Vg

Hi Sreer,

Please run the command below to reduce the mirrors using the PV key.

Format of the command:
# lvreduce -m 0 -A n -k /dev/vgname/lvname key

Actual command for your scenario:

# lvreduce -m 0 -A n -k /dev/vg00/lvname 0

Thanks
Vivek Bhatia
sarfaraj ahmad
Trusted Contributor

Re: Lvm disk issue in root Vg

Hi Sreer,

As per the output, your c1t15d0 disk is missing, so no root disk mirror exists; currently your system is running on a single disk.

Please check for hardware-related events in
/var/opt/resmon/log/event.log

Also check physically whether there is an amber LED on the disk or server.

Also check the thread below for your reference.
http://h30499.www3.hp.com/t5/System-Administration/Primary-root-disk-replace-without-reboot-HP-UX-11-11/m-p/3873809#M278864
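
Scanning that log can be sketched with a small snippet (the log path is the one above; the grep pattern is only an assumed heuristic, not an official event format):

```shell
# Look for disk/SCSI-related lines in the resmon event log.
# The search pattern is an assumption; adjust to the actual events.
LOG=/var/opt/resmon/log/event.log
if [ -r "$LOG" ]; then
    grep -i -E 'disk|scsi' "$LOG" | tail -20
else
    echo "cannot read $LOG"
fi
```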

sarfaraj ahmad
Trusted Contributor

Re: Lvm disk issue in root Vg

You can also use the vgscan command to recreate the lvmtab file.

Please check the man page for details.
Torsten.
Acclaimed Contributor

Re: Lvm disk issue in root Vg

The c1... disk is dead - replace it.


The path is written on the chassis (rp24xx/A-class server).



No need to reduce anything, see

When_Good_Disks_Go_Bad_WP
http://bizsupport2.austin.hp.com/bc/docs/support/SupportManual/c01911837/c01911837.pdf

Hope this helps!
Regards
Torsten.

__________________________________________________
There are only 10 types of people in the world -
those who understand binary, and those who don't.

__________________________________________________
No support by private messages. Please ask the forum!

If you feel this was helpful please click the KUDOS! thumb below!   
P Arumugavel
Respected Contributor

Re: Lvm disk issue in root Vg

hi,

From your output of the LV mirror status, the physical volume with key 0 can be identified as the failed one:
>> --- Logical extents ---
LE PV1 PE1 Status 1 PV2 PE2 Status 2
00000 0 01186 stale 1 01186 current
00001 0 01187 stale 1 01187 current
00002 0 01188 stale 1 01188 current
00003 0 01189 stale 1 01189 current
00004 0 01190 stale 1 01190 current
00005 0 01191 stale 1 01191 current
00006 0 01192 stale 1 01192 current
00007 0 01193 stale 1 01193 current
00008 0 01194 stale 1 01194 current
00009 0 01195 stale 1 01195 current
00010 0 01196 stale 1 01196 current
00011 0 01197 stale 1 01197 current

First reduce all the LV mirrors from the PV using the key as follows:
# lvreduce -m 0 -k /dev/vg00/lvname 0

Then remove the PV. The physical volume key of a disk indicates its order in the volume group. The first physical volume has the key 0, the second has the key 1, and so on. This need not be the order of appearance in the /etc/lvmtab file, although it usually is, at least when a volume group is initially created.

So as per your /etc/lvmtab file

>root@nfrx1:.../root # strings /etc/lvmtab
/dev/vg00
/dev/dsk/c1t15d0
/dev/dsk/c3t15d0

The disk /dev/dsk/c1t15d0 has key 0.
Reduce it from the VG.
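
Since vgdisplay reports Cur LV 10, a dry-run loop can print all the lvreduce commands for review before running any of them (a sketch only: it echoes the commands, and the lvol1..lvol10 names are assumed):

```shell
# Dry run: print one lvreduce per lvol instead of executing.
# KEY=0 is the failed PV key; lvol names are assumed from "Cur LV 10".
KEY=0
for lv in lvol1 lvol2 lvol3 lvol4 lvol5 lvol6 lvol7 lvol8 lvol9 lvol10; do
    echo "lvreduce -m 0 -A n -k /dev/vg00/$lv $KEY"
done
```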

Rgds....
Torsten.
Acclaimed Contributor

Re: Lvm disk issue in root Vg

As said, if your system is patched to support LVM deactivation (pvchange -a n ...) there is NO need to reduce anything prior to the disk replacement - see the document!

Hope this helps!
Regards
Torsten.

Vivek Bhatia
Trusted Contributor
Solution

Re: Lvm disk issue in root Vg

Hi,

Please find the answers to your questions below.

Q. So here can I reduce the LV using the PV key?
** Yes, you can:
** lvreduce -m 0 -A n -k /dev/vg00/lvol1 0 (0 is the PV key number in your case)

Q. In this case, which PV key do I need to use?
0 is the PV key in your case.

Q. After reducing all the LVs using the PV key, how will I reduce the disk from the VG?
vgreduce vg00 /dev/dsk/c1t15d0

If the disk is unavailable, the vgreduce command fails. You can still forcibly reduce it, but you must then rebuild the lvmtab.

# vgreduce -f vgname
# mv /etc/lvmtab /etc/lvmtab.save
# vgscan -v

This completes the procedure for removing the disk from your LVM configuration. If the disk hardware allows it, you can remove it physically from the system.

The five steps are:
1. Temporarily halt LVM attempts to access the disk.
# pvchange -a N /dev/dsk/c1t15d0

2. Physically replace the faulty disk.

3. Configure LVM information on the disk.
# vgcfgrestore -n /dev/vg00 /dev/rdsk/c1t15d0 (if the vgreduce (without -f) was not successful)

Use these commands if the vgreduce was successful:
# vgextend vg00 /dev/dsk/c1t15d0
# lvextend -m 1 /dev/vg00/lvolN /dev/dsk/c1t15d0 (mirror all the logical volumes)

4. Re-enable LVM access to the disk.
# pvchange -a y /dev/dsk/c1t15d0

5. Restore any lost data onto the disk.
# vgsync vg00
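
A dry-run sketch of the five steps above (it only prints the commands; the device paths are the ones from this thread, and vgsync as the final resync is an assumption based on the standard mirror-resync procedure):

```shell
# Dry run: print the replacement sequence instead of executing it.
# PV/RPV paths come from this thread; vgsync for step 5 is an assumption.
PV=/dev/dsk/c1t15d0
RPV=/dev/rdsk/c1t15d0
echo "pvchange -a N $PV"               # 1. halt LVM access to the disk
echo "# (physically replace the disk)" # 2. hardware swap
echo "vgcfgrestore -n /dev/vg00 $RPV"  # 3. restore the LVM headers
echo "pvchange -a y $PV"               # 4. re-enable LVM access
echo "vgsync /dev/vg00"                # 5. resync the stale extents
```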

Thanks
Vivek Bhatia
Manix
Honored Contributor

Re: Lvm disk issue in root Vg

Yes! Vivek is right.

The command "lvreduce -m 0 -A n -k /dev/vg00/lvol1 0" (0 is the PV key number in your case)
is the breakthrough in this situation.

Thanks
Manix
HP-UX been always lovable - Mani Kalra
Jose Mosquera
Honored Contributor

Re: Lvm disk issue in root Vg

Looks like a ghost disk or phantom disk. You can get a ghost disk if the disk failed before VG activation, possibly because the system was rebooted after the failure. A ghost disk is usually indicated by the vgdisplay command reporting more current physical volumes than active ones. Look at your vgdisplay output:
Cur PV 2 (PVs belonging to vg00)
Act PV 1 (PVs recorded in the kernel)
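
That check can be sketched as a small parse of the vgdisplay output (the sample text here stands in for running the real `vgdisplay vg00`):

```shell
# Detect a suspected ghost disk by comparing Cur PV and Act PV.
# $vgout is a captured sample of vgdisplay output.
vgout="Cur PV                2
Act PV                1"
cur=$(printf '%s\n' "$vgout" | awk '/^Cur PV/ {print $3}')
act=$(printf '%s\n' "$vgout" | awk '/^Act PV/ {print $3}')
if [ "$cur" -gt "$act" ]; then
    echo "ghost disk suspected: Cur PV=$cur, Act PV=$act"
fi
```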

In these situations where the disk was not available at boot time, or the disk has failed before VG activation (pvdisplay failed), the lvreduce command fails with an error that it could not query the physical volume. You can still remove the mirror copy, but you must specify the physical volume key rather than the name.

The physical volume key of a disk indicates its order in the volume group. The first physical volume has the key 0 (zero), the second has the key 1, and so on. This need not be the order of appearance in the /etc/lvmtab file, although it usually is, at least when a volume group is initially created. You can use the physical volume key to address a physical volume that is not attached to the volume group. You can obtain the key using lvdisplay with the -k option as follows:

# lvdisplay -v -k /dev/vg00/lvol1
...
--- Logical extents ---
LE PV1 PE1 Status 1 PV2 PE2 Status 2
00000 0 01186 stale 1 01186 current
00001 0 01187 stale 1 01187 current
00002 0 01188 stale 1 01188 current
...

Compare this output with the output of lvdisplay without -k, which you used to check the mirror status. The column that contained the failing disk now holds the key; in your case it is 0. Then use this key with the lvreduce command:
1.- If you have a single mirror copy:
#lvreduce -m 0 -A n -k /dev/vgname/lvname key

2.- If you have two mirror copies:
#lvreduce -m 1 -A n -k /dev/vgname/lvname key
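
The key lookup described above can be automated with a small helper (the sample lines stand in for the real `lvdisplay -v -k` output; it assumes the stale copy sits in the PV1 column, as it does here):

```shell
# Pull the PV key of the stale mirror side out of lvdisplay -v -k output.
# Assumes the stale status is in column 4 (the PV1 side), as in this thread.
lvout="00000  0  01186  stale  1  01186  current
00001  0  01187  stale  1  01187  current"
key=$(printf '%s\n' "$lvout" | awk '$4 == "stale" {print $2; exit}')
echo "failed PV key: $key"
```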

Rgds.
Torsten.
Acclaimed Contributor

Re: Lvm disk issue in root Vg

Why make this so complicated?



Just replace the disk by following the steps mentioned in the when_good_disks_... document.

Hope this helps!
Regards
Torsten.

Sreer
Valued Contributor

Re: Lvm disk issue in root Vg

Hi All,
Thanks all of you for your excellent help !

Since the OLR patches are not installed on this 11.11 box,

I did it all manually:

reduced the LVs using the failed disk's PV key,

vgreduce -f vg00

rm /etc/lvmtab
vgscan -v

Then did the mirroring with the replaced disk.

So the key facts were:

lvreduce using the PV key &

vgscan

After these two steps I was able to solve the problem.
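
The whole manual recovery can be recapped as a dry-run list (echo only; lvolN is a placeholder for each logical volume, and the mv of lvmtab is shown as a safer variant of the rm that was actually used):

```shell
# Dry-run recap of the manual recovery; nothing here is executed.
# lvolN is a placeholder; repeat those lines per logical volume.
steps="lvreduce -m 0 -A n -k /dev/vg00/lvolN 0
vgreduce -f vg00
mv /etc/lvmtab /etc/lvmtab.save
vgscan -v
vgextend vg00 /dev/dsk/c1t15d0
lvextend -m 1 /dev/vg00/lvolN /dev/dsk/c1t15d0"
printf '%s\n' "$steps"
```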

rgds

Sree