<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic which VA generating syslog POWERFAILED messages in Disk Enclosures</title>
    <link>https://community.hpe.com/t5/disk-enclosures/which-va-generating-syslog-powerfailed-messages/m-p/3334377#M12919</link>
    <description>Forum thread: identifying which of two VA7400 arrays is generating LVM POWERFAILED messages in an HP-UX host's syslog.</description>
    <pubDate>Sat, 17 Jul 2004 15:12:55 GMT</pubDate>
    <dc:creator>pat hayes</dc:creator>
    <dc:date>2004-07-17T15:12:55Z</dc:date>
    <item>
      <title>which VA generating syslog POWERFAILED messages</title>
      <link>https://community.hpe.com/t5/disk-enclosures/which-va-generating-syslog-powerfailed-messages/m-p/3334377#M12919</link>
      <description>I have two VA7400's and one of them is generating the following messages to my syslog.  How can I determine which VA is generating these messages, please?&lt;BR /&gt;&lt;BR /&gt;Jul 17 13:48:18 erptest vmunix: LVM: Recovered Path (device 0x1f194600) to PV 1 in VG 7.&lt;BR /&gt;Jul 17 13:45:01 erptest : su : + tty?? root-oramaint&lt;BR /&gt;Jul 17 13:48:18 erptest  above message repeats 2 times&lt;BR /&gt;Jul 17 13:48:18 erptest vmunix: LVM: Recovered Path (device 0x1f195000) to PV 3 in VG 7.&lt;BR /&gt;Jul 17 13:48:32 erptest vmunix: LVM: vg[7]: pvnum=1 (dev_t=0x1f194600) is POWERFAILED&lt;BR /&gt;Jul 17 13:48:32 erptest vmunix: LVM: vg[7]: pvnum=3 (dev_t=0x1f195000) is POWERFAILED&lt;BR /&gt;Jul 17 13:48:55 erptest vmunix: LVM: Performed a switch for Lun ID = 0 (pv = 0x00000000439f0800), from raw device 0x1f195000 (with priority: 0, and current flags: 0xc0) to raw device 0x1f175000 (with priority: 1, and current flags: 0x0).&lt;BR /&gt;Jul 17 13:48:56 erptest vmunix: LVM: Performed a switch for Lun ID = 0 (pv = 0x00000000439e4800), from raw device 0x1f194600 (with priority: 0, and current flags: 0xc0) to raw device 0x1f174600 (with priority: 1, and current flags: 0x0).&lt;BR /&gt;Jul 17 13:48:57 erptest vmunix: LVM: Restored PV 1 to VG 7.&lt;BR /&gt;Jul 17 13:48:57 erptest vmunix: LVM: Restored PV 3 to VG 7.&lt;BR /&gt;Jul 17 13:49:06 erptest vmunix: LVM: Recovered Path (device 0x1f194600) to PV 1 in VG 7.&lt;BR /&gt;Jul 17 13:49:06 erptest vmunix: LVM: Recovered Path (device 0x1f195000) to PV 3 in VG 7.&lt;BR /&gt;Jul 17 13:49:06 erptest vmunix: LVM: Performed a switch for Lun ID = 0 (pv = 0x00000000439f0800), from raw device 0x1f175000 (with priority: 1, and current flags: 0x0) to raw device 0x1f195000 (with priority: 0, and current flags: 0x80).&lt;BR /&gt;Jul 17 13:49:06 erptest vmunix: LVM: Performed a switch for Lun ID = 0 (pv = 0x00000000439e4800), from raw device 0x1f174600 (with priority: 1, and current flags: 0x0) to raw device 0x1f194600 (with priority: 0, and current flags: 0x0).&lt;BR /&gt;Jul 17 13:50:01 erptest : su : + tty?? root-oramaint&lt;BR /&gt;Jul 17 13:55:00 erptest : su : + tty?? root-oramaint&lt;BR /&gt;Jul 17 14:10:00 erptest : su : + tty?? root-oramaint&lt;BR /&gt;Jul 17 14:11:57 erptest  above message repeats 4 times&lt;BR /&gt;Jul 17 14:15:00 erptest : su : + tty?? root-oramaint&lt;BR /&gt;Jul 17 14:17:16 erptest vmunix: LVM: Recovered Path (device 0x1f193400) to PV 1 in VG 5.&lt;BR /&gt;Jul 17 14:17:20 erptest vmunix: LVM: Restored PV 1 to VG 5.&lt;BR /&gt;Jul 17 14:20:02 erptest : su : + tty?? root-oramaint&lt;BR /&gt;Jul 17 14:31:29 erptest named[651]: NSTATS 1090092689 1089649888 Unknown=8 A=2510 NS=1 SOA=1477 PTR=167 33=4080 ANY=4&lt;BR /&gt;Jul 17 14:31:29 erptest named[651]: XSTATS 1090092689 1089649888 RR=4879 RNXD=390 RFwdR=4514 RDupR=2 RFail=0 RFErr=0 RErr=0 RAXFR=0 RLame=0 ROpts=0 SSysQ=106 SAns=3868 SFwdQ=4592 SDupQ=392 SErr=0 RQ=8247 RIQ=0 RFwdQ=4592 RDupQ=680 RTCP=7 SFwdR=4514 SFail=2 SFErr=0 SNaAns=2331 SNXD=276&lt;BR /&gt;Jul 17 14:30:01 erptest : su : + tty?? root-oramaint&lt;BR /&gt;Jul 17 14:31:57 erptest  above message repeats 2 times&lt;BR /&gt;Jul 17 14:35:01 erptest : su : + tty?? root-oramaint&lt;BR /&gt;Jul 17 14:35:03 erptest monitor.sh: CHECK_POINT&lt;BR /&gt;Jul 17 14:40:00 erptest : su : + tty?? root-oramaint&lt;BR /&gt;Jul 17 14:48:53 erptest vmunix: LVM: Recovered Path (device 0x1f193600) to PV 3 in VG 5.&lt;BR /&gt;Jul 17 14:45:00 erptest : su : + tty?? root-oramaint&lt;BR /&gt;Jul 17 14:49:08 erptest vmunix: LVM: Restored PV 3 to VG 5.&lt;BR /&gt;Jul 17 14:50:00 erptest : su : + tty?? root-oramaint&lt;BR /&gt;Jul 17 14:55:00 erptest : su : + tty?? root-oramaint&lt;BR /&gt;Jul 17 15:10:00 erptest : su : + tty?? root-oramaint&lt;BR /&gt;Jul 17 15:11:57 erptest  above message repeats 4 times&lt;BR /&gt;Jul 17 15:15:00 erptest : su : + tty?? root-oramaint</description>
      <pubDate>Sat, 17 Jul 2004 15:12:55 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/which-va-generating-syslog-powerfailed-messages/m-p/3334377#M12919</guid>
      <dc:creator>pat hayes</dc:creator>
      <dc:date>2004-07-17T15:12:55Z</dc:date>
    </item>
    <item>
      <title>Re: which VA generating syslog POWERFAILED messages</title>
      <link>https://community.hpe.com/t5/disk-enclosures/which-va-generating-syslog-powerfailed-messages/m-p/3334378#M12920</link>
      <description>Pat,&lt;BR /&gt;&lt;BR /&gt;how do you know it is only one of the VAs?&lt;BR /&gt;&lt;BR /&gt;You can decipher the device files. The message says device 0x1f195000 is PV 3 in VG 7, so first find the volume group whose group file has minor number 7:&lt;BR /&gt;ll /dev/vg*/group | grep 0x070000&lt;BR /&gt;&lt;BR /&gt;Then see which is the third disk in that volume group:&lt;BR /&gt;strings /etc/lvmtab | more&lt;BR /&gt;&lt;BR /&gt;That device file should correspond to 0x1f195000.&lt;BR /&gt;&lt;BR /&gt;In any case, for the VA and other disk arrays you should set the LVM disk timeout higher than the default. For example, for LUN 5 run:&lt;BR /&gt;pvchange -t 180 /dev/dsk/c#t0d5&lt;BR /&gt;Run this command for all VA disk devices.&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;Bernhard</description>
      <pubDate>Mon, 19 Jul 2004 06:43:44 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/which-va-generating-syslog-powerfailed-messages/m-p/3334378#M12920</guid>
      <dc:creator>Bernhard Mueller</dc:creator>
      <dc:date>2004-07-19T06:43:44Z</dc:date>
    </item>
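The decoding Bernhard describes can also be sketched directly from the dev_t values in the log. This is a minimal illustration, assuming the usual HP-UX 11.x sdisk layout (an 8-bit major number, 0x1f for sdisk, and a minor number of the form 0xIITD00 where II is the interface-card instance, T the SCSI target, and D the LUN); that layout is an assumption here and may differ on other releases, so verify against `ioscan -fn` output:

```python
# Decode an HP-UX dev_t into its /dev/dsk/c#t#d# device-file name.
# Assumed layout (HP-UX 11.x sdisk convention, not confirmed in the
# thread): top 8 bits = major, then minor = 0xIITD00.

def decode_devt(devt):
    major = (devt >> 24) & 0xFF        # driver major number (0x1f = sdisk)
    minor = devt & 0x00FFFFFF
    instance = (minor >> 16) & 0xFF    # interface-card instance -> c#
    target = (minor >> 12) & 0xF       # SCSI target -> t#
    lun = (minor >> 8) & 0xF           # LUN -> d#
    return major, f"/dev/dsk/c{instance}t{target}d{lun}"

# The POWERFAILED primaries and their alternate paths from the log:
for devt in (0x1f194600, 0x1f195000, 0x1f174600, 0x1f175000):
    print(hex(devt), decode_devt(devt))
```

Under this assumed layout, both POWERFAILED primaries (0x1f194600, 0x1f195000) sit behind card instance 25 while their alternate paths (0x1f174600, 0x1f175000) use card 23, so matching those controller hardware paths in `ioscan -fnC disk` output should identify which of the two VA7400s is involved.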
  </channel>
</rss>

