08-05-2010 09:57 AM
11.31 LVM/SAN disk error
# vgdisplay -v vg01
vgdisplay: Warning: couldn't query physical volume "/dev/dsk/c3t0d4":
The specified path does not correspond to physical volume attached to
this volume group
vgdisplay: Warning: couldn't query all of the physical volumes.
--- Volume groups ---
VG Name /dev/vg01
VG Write Access read/write
VG Status available
Max LV 255
Cur LV 3
Open LV 3
Max PV 16
Cur PV 1
Act PV 1
Max PE per PV 10000
VGDA 2
PE Size (Mbytes) 8
Total PE 3839
Alloc PE 3712
Free PE 127
Total PVG 0
Total Spare PVs 0
Total Spare PVs in use 0
VG Version 1.0
VG Max Size 1250g
VG Max Extents 160000
vgdisplay: Warning: couldn't query physical volume "/dev/dsk/c3t0d4":
The specified path does not correspond to physical volume attached to
this volume group
vgdisplay: Warning: couldn't query all of the physical volumes.
--- Logical volumes ---
LV Name /dev/vg01/lvoracle
LV Status available/syncd
LV Size (Mbytes) 18432
Current LE 2304
Allocated PE 2304
Used PV 1
vgdisplay: Warning: couldn't query physical volume "/dev/dsk/c3t0d4":
The specified path does not correspond to physical volume attached to
this volume group
vgdisplay: Warning: couldn't query all of the physical volumes.
LV Name /dev/vg01/lvstage
LV Status available/syncd
LV Size (Mbytes) 10240
Current LE 1280
Allocated PE 1280
Used PV 1
vgdisplay: Warning: couldn't query physical volume "/dev/dsk/c3t0d4":
The specified path does not correspond to physical volume attached to
this volume group
vgdisplay: Warning: couldn't query all of the physical volumes.
LV Name /dev/vg01/ora_home
LV Status available/syncd
LV Size (Mbytes) 1024
Current LE 128
Allocated PE 128
Used PV 1
--- Physical volumes ---
PV Name /dev/dsk/c5t0d4
vgdisplay: Warning: couldn't query physical volume "/dev/dsk/c3t0d4":
The specified path does not correspond to physical volume attached to
this volume group
vgdisplay: Warning: couldn't query all of the physical volumes.
PV Status available
Total PE 3839
Free PE 127
Autoswitch On
Proactive Polling On
-----------------------------------------------
c3t0d4 and c5t0d4 are the same LUN on the array, presented through two different SAN zones. The following is some diagnostic output:
-----------------------------------------------
root@(hcidb2) [/home/x40894]
# strings /etc/lvmtab
/dev/vg00
/dev/disk/disk1_p2
/dev/vg01
/dev/dsk/c5t0d4
/dev/dsk/c3t0d4
/dev/vg02
/dev/dsk/c14t1d6
root@(hcidb2) [/home/x40894]
# xd -An -j8192 -N6 -tc /dev/rdsk/c3t0d4
L V M R E C
root@(hcidb2) [/home/x40894]
# xd -An -j8192 -N6 -tc /dev/rdsk/c5t0d4
L V M R E C
root@(hcidb2) [/home/x40894]
# xd -An -j8200 -N16 -tx /dev/rdsk/c3t0d4
e6346e95 48f534f7 e6346e95 48f534fc
root@(hcidb2) [/home/x40894]
# xd -An -j8200 -N16 -tx /dev/rdsk/c5t0d4
e6346e95 48f534f7 e6346e95 48f534fc
root@(hcidb2) [/home/x40894]
# dd if=/dev/dsk/c3t0d4 of=/dev/null bs=1024k count=8
8+0 records in
8+0 records out
root@(hcidb2) [/home/x40894]
# dd if=/dev/rdsk/c3t0d4 of=/dev/null bs=1024k count=8
8+0 records in
8+0 records out
root@(hcidb2) [/home/x40894]
# dd if=/dev/dsk/c5t0d4 of=/dev/null bs=1024k count=8
8+0 records in
8+0 records out
root@(hcidb2) [/home/x40894]
# dd if=/dev/rdsk/c5t0d4 of=/dev/null bs=1024k count=8
8+0 records in
8+0 records out
root@(hcidb2) [/home/x40894]
# ioscan -fnH 0/2/1/0/4/0.10.22.0.0.0.4
Class I H/W Path Driver S/W State H/W Type Description
==================================================================
disk 22 0/2/1/0/4/0.10.22.0.0.0.4 sdisk CLAIMED DEVICE DGC CX700WDR5
/dev/dsk/c3t0d4 /dev/rdsk/c3t0d4
root@(hcidb2) [/home/x40894]
# ioscan -kfnH 0/2/1/0/4/0.10.22.0.0.0.4
Class I H/W Path Driver S/W State H/W Type Description
==================================================================
disk 22 0/2/1/0/4/0.10.22.0.0.0.4 sdisk CLAIMED DEVICE DGC CX700WDR5
/dev/dsk/c3t0d4 /dev/rdsk/c3t0d4
root@(hcidb2) [/home/x40894]
# ioscan -P health -C tgtpath
Class I H/W Path health
==================================
tgtpath 3 0/2/1/0/4/0.0x500601601060295c online
tgtpath 4 0/2/1/0/4/0.0x500601691060295c online
tgtpath 0 0/4/1/0.0x6573b92e5e604a8 online
tgtpath 8 0/6/1/0/4/0.0x5006016544600644 online
tgtpath 7 0/6/1/0/4/0.0x5006016c44600644 online
tgtpath 2 0/7/1/0.0x1 online
tgtpath 1 0/7/1/1.0x2 online
-----------------------------------------------
So everything looks OK to me as far as HW and SAN mapping is concerned, and the LVM header info on both paths appears consistent.
Any thoughts?
BTW, this occurred after running a vgscan (after renaming /etc/lvmtab) in an attempt to clear up a problem with another VG. On that VG we had changed one of the FC cables over to a new SAN fabric but hadn't reduced the PVs from the VG before removing the old disk device files, so vgreduce would fail. The vgscan seemed to work, except we somehow ran into a problem where 'Cur PV' and 'Act PV' were out of sync, and we ended up rebuilding that VG from scratch.
--------------------------------------------------------------------------------------------------
P.S. This thread has been moved from System Administration to LVM and VxVM- Forum Moderator
08-05-2010 10:27 AM
Re: 11.31 LVM/SAN disk error
Try a vgextend on the VG, or, since you are clearly out of sync, reboot the box.
Rita
08-05-2010 10:31 AM
Re: 11.31 LVM/SAN disk error
If you reboot and still have the same problem, and since you previously recreated the volume group from scratch (hence you are going to restore any data afterwards), then:
vgreduce -f vg (to force removal of the volume group and try to get the OS back in sync)
A reboot may be optional here.
Then try to recreate the volume group.
Just a thought,
Rita
08-05-2010 11:18 PM
Re: 11.31 LVM/SAN disk error
cp /etc/lvmtab /etc/lvmtab.old (not needed, just in case)
vgscan -f /dev/vg01
Why aren't you using persistent DSFs? You could try to sync by adding persistent devices:
vgscan -N -f /dev/vg01
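Spelled out end to end, that suggestion would look roughly like this (a sketch only; vg01 is the volume group from the thread, and the -N behavior is as described in the vgscan man page):

```shell
# Keep a copy of the LVM configuration file first (just in case)
cp /etc/lvmtab /etc/lvmtab.old

# Rebuild the /etc/lvmtab entries for vg01 from the on-disk metadata
vgscan -f /dev/vg01

# Or, to populate lvmtab with persistent DSFs instead of legacy ones
vgscan -N -f /dev/vg01
```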
08-06-2010 12:32 AM
Re: 11.31 LVM/SAN disk error
A search for: clariion 11.31 site:itrc.hp.com
should throw up plenty of reading material to consider.
HTH
Duncan
I am an HPE Employee
08-06-2010 10:30 AM
Re: 11.31 LVM/SAN disk error
Rita:
The vgextend failed with the same error.
Daniel:
The vgscan failed with the same message (and it was a vgscan that created this situation in the first place). We didn't set it up with persistent DSFs because our SW vendor didn't want to support them. We plan to convert over now anyway. According to the man page, running vgscan with the -N option on an activated VG will just add legacy DSFs.
I just realized it's not clear from my post that the VG is active and in use through the c5t0d4 path.
We won't have an opportunity to bring the app down until Aug 12. If no magic bullet has been found by then, I plan to try:
1. Deactivate and reactivate the VG to see if that clears it up.
2. Deactivate the VG and try converting it to persistent DSFs.
3. Reboot.
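As a rough sketch, steps 1 and 2 of that plan might look like the following during the maintenance window (hypothetical: it assumes the three filesystems in vg01 can be unmounted, and reuses the vgscan -N approach suggested earlier in the thread):

```shell
# Step 1: deactivate and reactivate the VG
umount /dev/vg01/lvoracle    # repeat for lvstage and ora_home
vgchange -a n vg01
vgchange -a y vg01
vgdisplay -v vg01            # see whether the c3t0d4 warning is gone

# Step 2: if not, convert the deactivated VG to persistent DSFs
vgchange -a n vg01
vgscan -N -f /dev/vg01
vgchange -a y vg01
```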
Duncan:
The Google search you recommended yielded over 500 hits. I haven't parsed through them all. We have six other 11.31 systems connected to CX arrays though and I've studied the EMC documentation. We also have the second VG on this system which is working OK with SAN LUNs. What I've seen in the past with 11.31 and CX arrays when the array isn't configured for ALUA is you can access the raw device but not the block device, so you can do a pvcreate but can't add the LUN to a VG. I've never seen this situation where everything looks OK with the lunpaths but LVM still balks.
But your suggestion is a good one.
08-06-2010 07:13 PM
Re: 11.31 LVM/SAN disk error
Try the following.
First make sure vg01 is in the active state:
#vgdisplay -v vg01
#vgchange -a y vg01
Then use:
#vgextend /dev/vg01 /dev/dsk/c3t0d4
#vgdisplay -v vg01
and check whether the disk was added.
If it is still showing the same error, then:
#mv /etc/lvmtab /etc/lvmtab.old
#vgscan -v
#vgdisplay -v vg01
Regards
Deeos
08-07-2010 04:59 AM
Re: 11.31 LVM/SAN disk error
What does:
scsimgr get_attr -a leg_mpath_enable
say?
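For reference, on 11.31 the leg_mpath_enable attribute is, as I understand it, the global scsimgr setting that controls whether I/O through legacy DSFs such as /dev/dsk/c3t0d4 is routed through the native multipathing layer; checking (and, if it had been turned off, restoring) it might look like this sketch:

```shell
# Show the current global setting for legacy DSF multipathing
scsimgr get_attr -a leg_mpath_enable

# If it had been disabled, it could be turned back on with
# (assumption: stock 11.31 attribute name and save_attr syntax)
scsimgr save_attr -a leg_mpath_enable=true
```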
08-13-2010 10:08 AM
Re: 11.31 LVM/SAN disk error
Then I deactivated it again and converted it to use persistent devices.