
11.31 LVM/SAN disk error

 
Paul F Rose
Advisor

11.31 LVM/SAN disk error

I'm getting the following LVM error on a VG connected to an EMC CLARiiON array:

# vgdisplay -v vg01
vgdisplay: Warning: couldn't query physical volume "/dev/dsk/c3t0d4":
The specified path does not correspond to physical volume attached to
this volume group
vgdisplay: Warning: couldn't query all of the physical volumes.
--- Volume groups ---
VG Name /dev/vg01
VG Write Access read/write
VG Status available
Max LV 255
Cur LV 3
Open LV 3
Max PV 16
Cur PV 1
Act PV 1
Max PE per PV 10000
VGDA 2
PE Size (Mbytes) 8
Total PE 3839
Alloc PE 3712
Free PE 127
Total PVG 0
Total Spare PVs 0
Total Spare PVs in use 0
VG Version 1.0
VG Max Size 1250g
VG Max Extents 160000

vgdisplay: Warning: couldn't query physical volume "/dev/dsk/c3t0d4":
The specified path does not correspond to physical volume attached to
this volume group
vgdisplay: Warning: couldn't query all of the physical volumes.
--- Logical volumes ---
LV Name /dev/vg01/lvoracle
LV Status available/syncd
LV Size (Mbytes) 18432
Current LE 2304
Allocated PE 2304
Used PV 1

vgdisplay: Warning: couldn't query physical volume "/dev/dsk/c3t0d4":
The specified path does not correspond to physical volume attached to
this volume group
vgdisplay: Warning: couldn't query all of the physical volumes.
LV Name /dev/vg01/lvstage
LV Status available/syncd
LV Size (Mbytes) 10240
Current LE 1280
Allocated PE 1280
Used PV 1

vgdisplay: Warning: couldn't query physical volume "/dev/dsk/c3t0d4":
The specified path does not correspond to physical volume attached to
this volume group
vgdisplay: Warning: couldn't query all of the physical volumes.
LV Name /dev/vg01/ora_home
LV Status available/syncd
LV Size (Mbytes) 1024
Current LE 128
Allocated PE 128
Used PV 1


--- Physical volumes ---
PV Name /dev/dsk/c5t0d4
vgdisplay: Warning: couldn't query physical volume "/dev/dsk/c3t0d4":
The specified path does not correspond to physical volume attached to
this volume group
vgdisplay: Warning: couldn't query all of the physical volumes.
PV Status available
Total PE 3839
Free PE 127
Autoswitch On
Proactive Polling On



-----------------------------------------------
c3t0d4 and c5t0d4 are the same LUN on the array, presented through two different SAN zones. Here is some diagnostic output:
-----------------------------------------------

root@(hcidb2) [/home/x40894]
# strings /etc/lvmtab
/dev/vg00
/dev/disk/disk1_p2
/dev/vg01
/dev/dsk/c5t0d4
/dev/dsk/c3t0d4
/dev/vg02
/dev/dsk/c14t1d6

root@(hcidb2) [/home/x40894]
# xd -An -j8192 -N6 -tc /dev/rdsk/c3t0d4
L V M R E C

root@(hcidb2) [/home/x40894]
# xd -An -j8192 -N6 -tc /dev/rdsk/c5t0d4
L V M R E C

root@(hcidb2) [/home/x40894]
# xd -An -j8200 -N16 -tx /dev/rdsk/c3t0d4
e6346e95 48f534f7 e6346e95 48f534fc

root@(hcidb2) [/home/x40894]
# xd -An -j8200 -N16 -tx /dev/rdsk/c5t0d4
e6346e95 48f534f7 e6346e95 48f534fc


root@(hcidb2) [/home/x40894]
# dd if=/dev/dsk/c3t0d4 of=/dev/null bs=1024k count=8
8+0 records in
8+0 records out
root@(hcidb2) [/home/x40894]
# dd if=/dev/rdsk/c3t0d4 of=/dev/null bs=1024k count=8
8+0 records in
8+0 records out
root@(hcidb2) [/home/x40894]
# dd if=/dev/dsk/c5t0d4 of=/dev/null bs=1024k count=8
8+0 records in
8+0 records out
root@(hcidb2) [/home/x40894]
# dd if=/dev/rdsk/c5t0d4 of=/dev/null bs=1024k count=8
8+0 records in
8+0 records out
root@(hcidb2) [/home/x40894]
# ioscan -fnH 0/2/1/0/4/0.10.22.0.0.0.4
Class I H/W Path Driver S/W State H/W Type Description
==================================================================
disk 22 0/2/1/0/4/0.10.22.0.0.0.4 sdisk CLAIMED DEVICE DGC CX700WDR5
/dev/dsk/c3t0d4 /dev/rdsk/c3t0d4
root@(hcidb2) [/home/x40894]
# ioscan -kfnH 0/2/1/0/4/0.10.22.0.0.0.4
Class I H/W Path Driver S/W State H/W Type Description
==================================================================
disk 22 0/2/1/0/4/0.10.22.0.0.0.4 sdisk CLAIMED DEVICE DGC CX700WDR5
/dev/dsk/c3t0d4 /dev/rdsk/c3t0d4
root@(hcidb2) [/home/x40894]
# ioscan -P health -C tgtpath
Class I H/W Path health
==================================
tgtpath 3 0/2/1/0/4/0.0x500601601060295c online
tgtpath 4 0/2/1/0/4/0.0x500601691060295c online
tgtpath 0 0/4/1/0.0x6573b92e5e604a8 online
tgtpath 8 0/6/1/0/4/0.0x5006016544600644 online
tgtpath 7 0/6/1/0/4/0.0x5006016c44600644 online
tgtpath 2 0/7/1/0.0x1 online
tgtpath 1 0/7/1/1.0x2 online




-----------------------------------------------
So everything looks OK to me as far as HW and SAN mapping is concerned, and the LVM header info on both paths appears consistent.

Any thoughts?

BTW, this occurred after running a vgscan (with /etc/lvmtab renamed aside) in an attempt to clear up a problem with another VG: we had moved one of the FC cables over to a new SAN fabric but hadn't reduced the PVs from that VG before removing the old disk device files, so vgreduce would fail. The vgscan seemed to work, except we then ran into a problem where 'Cur PV' and 'Act PV' were out of sync, and we ended up rebuilding that VG from scratch.
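For reference, the lvmtab rebuild itself was just the usual sequence, roughly:

# mv /etc/lvmtab /etc/lvmtab.old
# vgscan -v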
--------------------------------------------------------------------------------------------------
P.S. This thread has been moved from System Administration to LVM and VxVM- Forum Moderator
Rita C Workman
Honored Contributor

Re: 11.31 LVM/SAN disk error

You might try putting that disk back into the VG:

vgextend vg
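For example, with the VG and the failing path from the original post (just a sketch; double-check the device path first):

# vgextend /dev/vg01 /dev/dsk/c3t0d4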

or, since you are clearly out of sync,

reboot the box.

Rita
Rita C Workman
Honored Contributor

Re: 11.31 LVM/SAN disk error

One more thought,

If you reboot and still have the same problem, then, since you recreated the volume group from scratch (and so will be restoring any data afterwards anyway):

vgreduce -f vg (to force killing the volume group and try to get the o/s to get back in sync)
reboot may be optional here
Then try to recreate the volume group.
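Roughly, and only because that VG is being rebuilt anyway (just a sketch; check vgreduce(1M) first):

# vgreduce -f /dev/vg01
# vgdisplay -v vg01

The -f should force-remove the missing PV entries, and the vgdisplay is just to confirm that Cur PV and Act PV agree again before you recreate the volume group.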

Just a thought,
Rita
likid0
Honored Contributor

Re: 11.31 LVM/SAN disk error

Since you are on 11.31 (per the vgdisplay output), you could try:

cp /etc/lvmtab /etc/lvmtab.old   (not required, just a precaution)
vgscan -f /dev/vg01

Why aren't you using persistent DSFs?

You could also try to get back in sync by adding the persistent devices:

vgscan -N -f /dev/vg01
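Before that, you can check the legacy-to-persistent mapping for those paths with (standard on 11.31):

# ioscan -m dsf

and look for the entries covering c3t0d4 and c5t0d4.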
Windows?, no thanks

Re: 11.31 LVM/SAN disk error

Looks like you are using an EMC CLARiiON - there are specific settings and firmware levels you need for this to work. A Google search of:

clariion 11.31 site:itrc.hp.com

should throw up plenty of reading material to consider

HTH

Duncan

I am an HPE Employee
Paul F Rose
Advisor

Re: 11.31 LVM/SAN disk error

Rita:

The vgextend failed with the same error.

Daniel:

The vgscan failed with the same message (and it was a vgscan that created the situation in the first place). We didn't set it up with persistent dsf's because our SW vendor didn't want to support it. We plan to convert it over now anyway. According to the man page, running vgscan with the -N option on an activated VG will just add legacy dsf's.

I just realized it's not clear from my post that the VG is active and in use through the c5t0d4 path.

We won't have an opportunity to bring the app down until Aug 12. If some magic bullet hasn't been found by then I plan to try the following (rough commands sketched below):

1). Deactivate and reactivate the VG to see if that clears it up.
2). Deactivate the VG and try converting it to persistent dsf's.
3). Reboot.
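Roughly, for steps 1 and 2 (just a sketch based on the vgscan -N behaviour above; I'll verify against the man pages first):

# vgchange -a n /dev/vg01
# vgchange -a y /dev/vg01

and, if the warning is gone, deactivate again for the conversion:

# vgchange -a n /dev/vg01
# vgscan -N -f /dev/vg01
# vgchange -a y /dev/vg01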

Duncan:

The Google search you recommended yielded over 500 hits, and I haven't parsed through them all. We do have six other 11.31 systems connected to CX arrays, though, and I've studied the EMC documentation. We also have a second VG on this system that is working fine with SAN LUNs. What I've seen in the past with 11.31 and CX arrays, when the array isn't configured for ALUA, is that you can access the raw device but not the block device, so you can do a pvcreate but can't add the LUN to a VG. I've never seen this situation where everything looks OK with the lunpaths but LVM still balks.
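In that (non-ALUA) situation the symptom looks roughly like this (illustrative only, hypothetical paths):

# pvcreate /dev/rdsk/cXtYdZ
# vgextend /dev/vgNN /dev/dsk/cXtYdZ

where the pvcreate against the raw device succeeds but the vgextend against the block device fails.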

But your suggestion is a good one.
Deeos
Regular Advisor

Re: 11.31 LVM/SAN disk error

Hi Paul,

Use the following.

First make sure vg01 is in the active state:

#vgdisplay -v vg01

#vgchange -a y vg01

Then use:

#vgextend /dev/vg01 /dev/dsk/c3t0d4

#vgdisplay -v vg01

and check whether the disk has been added.

If it still shows the same error, then:

#mv /etc/lvmtab /etc/lvmtab.old

#vgscan -v
#vgdisplay -v vg01


Regards
Deeos
Deepak
Michael Leu
Honored Contributor

Re: 11.31 LVM/SAN disk error

What does
scsimgr get_attr -a leg_mpath_enable
say?
Paul F Rose
Advisor

Re: 11.31 LVM/SAN disk error

Once we got a chance to shut down the DB, I deactivated the VG and then reactivated it. This cleared up the problem.

Then I deactivated it again and converted it to use persistent devices.
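For the record, the fix amounted to something like this (a sketch of one way to do it, not a verbatim transcript of what was run):

# vgchange -a n /dev/vg01
# vgchange -a y /dev/vg01
# vgchange -a n /dev/vg01
# vgscan -N -f /dev/vg01
# vgchange -a y /dev/vg01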