
POWERFAILED messages and EMC Clariion array

 
Son T. Cao
Occasional Contributor

POWERFAILED messages and EMC Clariion array

We have a couple of Clariion CX600 arrays that have predominantly HP servers connected. We dual path everything. When one path fails, I expect to see a "lun switch" message in syslog and nothing more. However, I'm seeing not only the "lun switch" but also "POWERFAILED" messages. Five seconds later, the alternate path is invoked. POWERFAILs only appear when BOTH paths have failed, correct? If I'm dual pathed, I shouldn't see this until the second leg is dropped. Has anyone else experienced this with Clariion arrays?
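For what it's worth, this is how I've been watching for the events as they happen; a minimal sketch, assuming the default HP-UX syslog location:

tail -f /var/adm/syslog/syslog.log | egrep 'LVM|POWERFAIL'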
4 REPLIES
Michael Tully
Honored Contributor

Re: POWERFAILED messages and EMC Clariion array

'POWERFAILED' messages can also be generated when the 'timeout' value is not set high enough. During a failover, the timeout may not be sufficient for the path switch to complete. The suggested timeout value for logical volumes on EMC devices is 180 seconds. You can change the values easily enough with 'lvchange' or 'pvchange'; see the man pages.
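A minimal sketch (the device and volume group names here are illustrative, not from your system):

pvchange -t 180 /dev/dsk/c5t0d0               # set the I/O timeout on a physical volume path
lvchange -t 180 /dev/vg03/lvol1               # or set the timeout on a logical volume instead
pvdisplay /dev/dsk/c5t0d0 | grep -i timeout   # verify the new value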
Anyone for a Mutiny ?
Tim Adamson_1
Honored Contributor

Re: POWERFAILED messages and EMC Clariion array

Increase the pv_timeout for each of the luns to 180.
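Something along these lines will cover every path in a volume group (VG name illustrative; check the awk field against your own vgdisplay output first):

for pv in $(vgdisplay -v /dev/vg03 | awk '/PV Name/ {print $3}')
do
    pvchange -t 180 $pv    # raise the I/O timeout on each listed path
done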

I would also advise staying relatively up to date on your LVM, SCSI, FC, and VxFS patching.
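A quick way to review which patch products are already installed (the PH* prefixes are the standard HP-UX patch naming; exact IDs will vary):

swlist -l product | egrep 'PHCO|PHKL|PHNE|PHSS'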

If you still have problems or have already tried the suggestions, can you post the complete message you see in syslog.log?


Tim
Yesterday is history, tomorrow is a mystery, today is a gift. That's why it's called the present.
Son T. Cao
Occasional Contributor

Re: POWERFAILED messages and EMC Clariion array

Thanks for the posts. Yes, I've followed the EMC support recommendation and set the PV timeout to 180. The issues persist. Perhaps it's behaving as designed and I'm misinterpreting it. Can anyone confirm whether, when a path fails in a PV link scenario, I will always see a POWERFAILED, even if the other leg is up?

For reference, here is a snippet of the log:

Aug 21 10:33:03 camlmcd1 vmunix: LVM: Performed a switch for Lun ID = 0 (pv = 0x000000004afde800), from raw device 0x1f105000 (with priority: 0, and current flags: 0x40) to raw device 0x1f0f5000 (with priority: 1, and current flags: 0x0).
Aug 21 10:33:03 camlmcd1 vmunix:
Aug 21 10:33:03 camlmcd1 vmunix: SCSI: Read error -- dev: b 31 0x105000, errno: 126, resid: 2048,
Aug 21 10:33:03 camlmcd1 vmunix: blkno: 8, sectno: 16, offset: 8192, bcount: 2048.
Aug 21 10:33:30 camlmcd1 vmunix: LVM: Path (device 0x1f105100) to PV 1 in VG 3 Failed!
Aug 21 10:33:30 camlmcd1 vmunix: LVM: vg[3]: pvnum=0 (dev_t=0x1f0f5000) is POWERFAILED
Aug 21 10:33:30 camlmcd1 vmunix: LVM: vg[3]: pvnum=1 (dev_t=0x1f0f5100) is POWERFAILED
Aug 21 10:33:35 camlmcd1 vmunix: LVM: Recovered Path (device 0x1f0f5000) to PV 0 in VG 3.
Aug 21 10:33:35 camlmcd1 vmunix: LVM: Restored PV 0 to VG 3.
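Incidentally, this is how I've been confirming that both links are actually configured on the volume group (VG path illustrative); the standby path shows up as a second 'PV Name' line tagged 'Alternate Link':

vgdisplay -v /dev/vg03 | grep 'PV Name'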
Adriaan Mosseveld_2
New Member

Re: POWERFAILED messages and EMC Clariion array

This must be the week for Clariion errors - we have virtually the same problem.

When you say the connections are dual pathed, does this mean that the primary path for all of your LUNs is over a single connection, with the other only being used as the alternate?

We were receiving the POWERFAILED messages because our primary LUN paths are split over the two connections to improve throughput. If one connection goes down, we do not lose connectivity, as the alternate path is still OK, but because the primary path is down on some of the LUNs, we get the messages (or so I believe).
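If you want to move a LUN's primary path to the other connection, the usual approach, as I understand it, is to drop the current primary link and re-add it so that it comes back as the alternate (device and VG names illustrative; do this in a quiet window):

vgreduce /dev/vg03 /dev/dsk/c5t0d0   # remove the current primary link; I/O fails over to the alternate
vgextend /dev/vg03 /dev/dsk/c5t0d0   # re-add the path; it now becomes the alternate link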