01-02-2006 02:09 PM
vmunix: LVM: vg[7]: pvnum=2 (dev_t=0x1f052000) is POWERFAILED
Please help me resolve this problem!
Many thanks.
QV
Below is the content of the syslog file:
====================================
Jan 3 09:52:10 COVIS1 vmunix: SCSI: Read error -- dev: b 31 0x032500, errno: 126, resid: 2048,
Jan 3 09:52:10 COVIS1 vmunix: blkno: 8, sectno: 16, offset: 8192, bcount: 2048.
Jan 3 09:52:10 COVIS1 vmunix: LVM: vg[8]: pvnum=0 (dev_t=0x1f032500) is POWERFAILED
Jan 3 09:52:10 COVIS1 vmunix: SCSI: Read error -- dev: b 31 0x032500, errno: 126, resid: 8192,
Jan 3 09:52:10 COVIS1 vmunix: blkno: 20959024, sectno: 41918048, offset: -12795904, bcount: 8192.
Jan 3 09:52:10 COVIS1 vmunix: blkno: 2750648, sectno: 5501296, offset: -1478303744, bcount: 8192.
Jan 3 09:52:10 COVIS1 vmunix: blkno: 1594848, sectno: 3189696, offset: 1633124352, bcount: 8192.
Jan 3 09:52:10 COVIS1 vmunix: blkno: 1274040, sectno: 2548080, offset: 1304616960, bcount: 8192.
Jan 3 09:52:10 COVIS1 vmunix: blkno: 697256, sectno: 1394512, offset: 713990144, bcount: 8192.
Jan 3 09:52:10 COVIS1 vmunix: blkno: 720608, sectno: 1441216, offset: 737902592, bcount: 8192.
Jan 3 09:52:10 COVIS1 vmunix: blkno: 628592, sectno: 1257184, offset: 643678208, bcount: 8192.
Jan 3 09:52:10 COVIS1 vmunix: blkno: 538960, sectno: 1077920, offset: 551895040, bcount: 8192.
Jan 3 09:52:10 COVIS1 vmunix: blkno: 539168, sectno: 1078336, offset: 552108032, bcount: 8192.
Jan 3 09:52:10 COVIS1 vmunix: LVM: Performed a switch for Lun ID = 0 (pv = 0x000000004a3e9800), from raw device 0x1f032500 (with priority: 0, and current flags: 0xc0) to raw device 0x1f052500 (with priority: 1, and current flags: 0x0).
Jan 3 09:52:10 COVIS1 vmunix:
Jan 3 09:52:10 COVIS1 above message repeats 9 times
Jan 3 09:52:10 COVIS1 vmunix: SCSI: Read error -- dev: b 31 0x032500, errno: 126, resid: 8192,
Jan 3 09:52:10 COVIS1 above message repeats 8 times
Jan 3 09:52:12 COVIS1 vmunix: LVM: Performed a switch for Lun ID = 0 (pv = 0x000000004103c800), from raw device 0x1f032600 (with priority: 0, and current flags: 0x40) to raw device 0x1f052600 (with priority: 1, and current flags: 0x0).
Jan 3 09:52:12 COVIS1 vmunix: LVM: vg[10]: pvnum=0 (dev_t=0x1f052600) is POWERFAILED
Jan 3 09:52:13 COVIS1 vmunix: LVM: vg[6]: pvnum=2 (dev_t=0x1f051700) is POWERFAILED
Jan 3 09:52:14 COVIS1 vmunix: LVM: Restored PV 0 to VG 10.
Jan 3 09:52:19 COVIS1 vmunix: LVM: Performed a switch for Lun ID = 0 (pv = 0x000000004a2d3000), from raw device 0x1f032000 (with priority: 0, and current flags: 0x40) to raw device 0x1f052000 (with priority: 1, and current flags: 0x0).
Jan 3 09:52:19 COVIS1 vmunix:
Jan 3 09:52:19 COVIS1 vmunix: SCSI: Read error -- dev: b 31 0x032000, errno: 126, resid: 2048,
Jan 3 09:52:19 COVIS1 vmunix: blkno: 8, sectno: 16, offset: 8192, bcount: 2048.
Jan 3 09:52:22 COVIS1 vmunix: LVM: vg[7]: pvnum=2 (dev_t=0x1f052000) is POWERFAILED
Jan 3 09:52:27 COVIS1 vmunix: LVM: Recovered Path (device 0x1f031700) to PV 2 in VG 6.
Jan 3 09:52:27 COVIS1 vmunix: LVM: Recovered Path (device 0x1f052000) to PV 2 in VG 7.
Jan 3 09:52:27 COVIS1 vmunix: LVM: Recovered Path (device 0x1f032000) to PV 2 in VG 7.
Jan 3 09:52:27 COVIS1 vmunix: LVM: Performed a switch for Lun ID = 0 (pv = 0x000000004a2d3000), from raw device 0x1f052000 (with priority: 1, and current flags: 0x80) to raw device 0x1f032000 (with priority: 0, and current flags: 0x80).
Jan 3 09:52:27 COVIS1 vmunix: LVM: Recovered Path (device 0x1f052500) to PV 0 in VG 8.
Jan 3 09:52:27 COVIS1 vmunix: LVM: Recovered Path (device 0x1f032500) to PV 0 in VG 8.
Jan 3 09:52:27 COVIS1 vmunix: LVM: Performed a switch for Lun ID = 0 (pv = 0x000000004a3e9800), from raw device 0x1f052500 (with priority: 1, and current flags: 0x0) to raw device 0x1f032500 (with priority: 0, and current flags: 0x80).
Jan 3 09:52:27 COVIS1 vmunix: LVM: Restored PV 2 to VG 7.
Jan 3 09:52:27 COVIS1 vmunix: LVM: Restored PV 0 to VG 8.
Jan 3 09:52:28 COVIS1 vmunix: LVM: Recovered Path (device 0x1f032600) to PV 0 in VG 10.
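A log like the one above is easier to read once the POWERFAILED and path-recovery events are tallied per volume group. A minimal awk sketch (on a live HP-UX box you would feed it /var/adm/syslog/syslog.log instead of the sample heredoc; the function name is just for illustration):

```shell
# Summarize LVM POWERFAILED / Recovered Path events from a syslog extract.
# Reads syslog-format lines on stdin and prints a per-VG failure count
# plus the total number of recovered paths.
summarize_lvm_events() {
  awk '
    /LVM: vg\[[0-9]+\]:.*POWERFAILED/ {
      # Pull the VG number out of "vg[N]".
      match($0, /vg\[[0-9]+\]/)
      vg = substr($0, RSTART + 3, RLENGTH - 4)
      failed[vg]++
    }
    /LVM: Recovered Path/ {
      recovered++
    }
    END {
      for (vg in failed)
        printf "vg[%s]: %d POWERFAILED event(s)\n", vg, failed[vg]
      printf "recovered paths: %d\n", recovered
    }'
}

# Demo on two lines taken from the log above.
summarize_lvm_events <<'EOF'
Jan 3 09:52:22 COVIS1 vmunix: LVM: vg[7]: pvnum=2 (dev_t=0x1f052000) is POWERFAILED
Jan 3 09:52:27 COVIS1 vmunix: LVM: Recovered Path (device 0x1f052000) to PV 2 in VG 7.
EOF
```

Seeing every POWERFAILED matched by a Recovered Path within seconds (as here) points at transient path loss rather than a dead disk.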
01-02-2006 02:16 PM
Re: vmunix: LVM: vg[7]: pvnum=2 (dev_t=0x1f052000) is POWERFAILED
What storage are you actually using for these VGs?
If you're connecting to the storage over Fibre Channel, you may want to install the FC patches first.
These errors can also happen if your FC card has a hardware problem. Just don't let them go on too long, because they can cause data corruption.
regards,
Sandy
01-02-2006 02:33 PM
Re: vmunix: LVM: vg[7]: pvnum=2 (dev_t=0x1f052000) is POWERFAILED
01-02-2006 03:37 PM
Re: vmunix: LVM: vg[7]: pvnum=2 (dev_t=0x1f052000) is POWERFAILED
My suggestion is to install the FC patch first.
You can search for the Fibre Channel patch set in the patch database.
regards,
Sandy
01-02-2006 04:45 PM
Re: vmunix: LVM: vg[7]: pvnum=2 (dev_t=0x1f052000) is POWERFAILED
Happy New Year.
Please confirm: was this setup working fine earlier, or has the problem been there from the start?
It is clear that there is a connectivity issue with the attached storage.
Have you configured the storage box port for os=hp-ux? This will resolve the timeout issues that can otherwise occur.
Are you using a Fibre Channel switch or SAN hub (which model?), or is the storage directly attached to the server? Please post the output of "#fcmsutil /dev/tdx".
Also check the fibre cable for any visible damage.
Do you have an alternate link from the server to the storage? (You can check with "#vgdisplay -v | grep -i alternate".)
Also consider posting this problem in the HP-UX LVM area of the forum, so that you get more answers.
shameer
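Shameer's alternate-link check can be scripted: on HP-UX 11.x, `vgdisplay -v` lists each PV path as a "PV Name" line, with alternate paths marked by an "Alternate Link" suffix (format hedged from memory; verify on your release). A small filter that classifies each path, fed here by a sample heredoc rather than a live `vgdisplay -v`:

```shell
# Classify each PV path in `vgdisplay -v` output as primary or alternate.
# Reads vgdisplay output on stdin; prints one "<device> (primary|alternate)"
# line per PV Name entry.
list_pv_links() {
  awk '
    /PV Name/ {
      if ($0 ~ /Alternate Link/)
        printf "%s (alternate)\n", $3   # second path to the same LUN
      else
        printf "%s (primary)\n", $3
    }'
}

# Demo on sample vgdisplay -v style output (device names are placeholders).
list_pv_links <<'EOF'
   PV Name                     /dev/dsk/c3t15d0
   PV Name                     /dev/dsk/c5t15d0  Alternate Link
EOF
```

A PV with no alternate line means a single path, so any path loss immediately POWERFAILs the PV instead of switching over as in the log above.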
01-02-2006 09:07 PM
Re: vmunix: LVM: vg[7]: pvnum=2 (dev_t=0x1f052000) is POWERFAILED
You should change the PV timeout value in LVM for all the VA LUNs to 60 or 90 seconds, e.g.:
pvchange -t 60 /dev/dsk/c5t2d5
Verify with pvdisplay /dev/dsk/c5t2d5.
This should settle your issue. Note that it took only 15-20 seconds until LVM recovered the path; the VA was most likely just busy satisfying a higher-priority request.
You may also bump up scsi_max_qdepth from 8 to 16 or 32. kmtune -q scsi_max_qdepth will show you the current value. Search the forum for what needs to be taken into consideration when changing it.
Regards,
Bernhard
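Since the pvchange step above has to be repeated for every VA LUN, a dry-run loop that only prints the commands (review them, then pipe the output to sh) is a safe way to script it. The disk names below are placeholders, not the poster's actual devices:

```shell
# Print (do not execute) one pvchange command per disk, so the timeout
# change can be reviewed before it is applied.
PV_TIMEOUT=60

print_pvchange_cmds() {
  for disk in "$@"; do
    printf 'pvchange -t %d %s\n' "$PV_TIMEOUT" "$disk"
  done
}

# Demo with two placeholder device files; pipe to sh to actually run.
print_pvchange_cmds /dev/dsk/c5t2d5 /dev/dsk/c5t2d6
```

Printing first keeps a typo in a device list from silently changing the wrong PV.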